I would also object by saying that a human is also a “mere machine”.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious or aware of what it is doing, and consequently it does not perceive anything, because “perceiving” includes being conscious and being aware.
In that way, humans are not mere machines, because they are conscious beings that are aware of what they are doing and they perceive things.
I have no idea what “coherent whole” means. Is the roomba incoherent in some way?
The human performs the unified action of “perceiving” and we know that it is unified because we experience it as a unified whole. The roomba just has each part of it moved by other parts, and we have no reason to think that these form a unified whole, since we have no reason to think it experiences anything.
In all of these cases, of course, the situation would be quite different if the roomba was conscious. Then it would also perceive what it was doing, it would not be a mere machine, and its actions would be unified.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
The mind does the work of recognizing similarity for us. We don’t have to give a verbal description in order to recognize similarity, much less a third person description, as you are seeking here.
I’m saying that pain isn’t really that similar to heat, and I offered a couple of explanations for why you might imagine that it is.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious
Oh, so “mere machine” is just a pure synonym of “not conscious”? Then I guess you were right about what my problem is. Taboo or not, your only argument for why the roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
The roomba just has each part of it moved by other parts
Are you implying that humans do not have parts that move other parts?
The mind does the work of recognizing similarity for us.
No, you misunderstood my question. I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
You’re wrong.
Maybe one of us is really a sentient roomba, pretending to be human? Who knows!
Are you implying that humans do not have parts that move other parts?
No. I said the roomba “just” has that. Humans are also aware of what they are doing.
Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to “parts moved by other parts”? It’s not just that some arrangements of matter are conscious and others are not?
If there are parts, there is also a whole. A whole is not the same as its parts. So if you mean by “reductionism” that there are only parts and no wholes, then reductionism is false.
If you mean by reductionism that a thing is made of its parts rather than made of its parts plus one other part, then reductionism is true: a whole is made out of its parts, not of the parts plus another part (which would be redundant and absurd). But it is made “out of” them; it is not the same as the parts.
Oh, so “mere machine” is just a pure synonym of “not conscious”?
No. It also means not any other thing similar to consciousness, even if not exactly consciousness.
Taboo or not, your only argument for why the roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
My reason is that we have no reason to think that a roomba is conscious.
I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
It also means not any other thing similar to consciousness, even if not exactly consciousness.
I have no idea what that means (a few typos, maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren’t identical. But if there is some difference between “mere machine” and “unconscious machine”, you have to point it out for me.
My reason is that we have no reason to think that a roomba is conscious.
Hypothetically, what could a reason to think that a robot is conscious look like?
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
“Pain” is a word and humans aren’t born knowing it. What does “no extra step” even mean? There are a few obvious steps. You have this habit of claiming something to be self-evident, when you’re clearly just confused.
I have no idea what that means (a few typos, maybe?).
No typos. I meant we know that there are two kinds of things: objective facts and subjective perceptions. As far as anyone knows, there could be a third thing intermediate between those (for example). So the robot might have something else that we don’t know about.
Hypothetically, what could a reason to think that a robot is conscious look like?
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
You have this habit of claiming something to be self-evident, when you’re clearly just confused.
Wrong.
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
Why is this a probable reason? You have one data point—yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?
If a robot has hands with five fingers, that will also be evidence that it is conscious. This is how induction works; similarity in some properties is evidence of similarity in other properties.
But surely, you believe that human-like behavior is stronger evidence than a hand with five fingers. Why is that?
I perform many human behaviors because I am conscious. So the fact that the robot performs similar behaviors is inductive evidence that it performs those behaviors because it is conscious. This does not apply to the number of fingers, which is only evidence by correlation.
I perform many human behaviors because I am conscious.
Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn’t just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?
Why do you think that there is a causal relationship between having consciousness and behavior?
I am conscious. The reason I wrote the previous sentence is that I am conscious. As for how I know that this statement is true and that I am not just a passive observer: how do you know you haven’t been agreeing with me this whole discussion, while mechanically writing statements you don’t agree with?
Are you sure that consciousness isn’t just a passive observer?
Yes, for the above reason.
Also, why do you think that there is no causal relationship between having consciousness and five fingers?
In general, because there is no reason to believe that there is. Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
The reason I wrote the previous sentence is that I am conscious.
That’s just paraphrasing your previous claim.
how do you know you haven’t been agreeing with me this whole discussion, while mechanically writing statements you don’t agree with?
I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I then identify “myself” with this mind, rather than the one that’s mute.
Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
It is not just paraphrasing. It is giving an example of a particular case where it is obviously true.
Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own.
Nonsense. Google could easily add a module to Google Translate that would convert a statement into its opposite. That would not give Google Translate a mind of its own.
I then identify “myself” with this mind, rather than the one that’s mute.
Nope. You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
Obviously I do not take this seriously, but I take it just as seriously as the claim that my consciousness does not cause me to say that I am conscious.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
I replied with an example, namely that I say I am conscious precisely because I am conscious. I do not need to argue for this, and I will not.
Google could easily add a module to Google Translate that would convert a statement into its opposite.
No, Google could maybe add “not” before every “conscious”, in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.
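To make the gap concrete, here is a toy sketch (the function and the belief list are invented for illustration; this is not a claim about any real system). A purely lexical “negation module” flips the target sentence but leaves a dependent belief still asserting the original premise, so the flipped belief set is inconsistent:

```python
def naive_negate(statement: str) -> str:
    # Purely lexical flip: insert "not" before "conscious",
    # with no revision of any dependent beliefs.
    return statement.replace("am conscious", "am not conscious")

# Hypothetical beliefs, invented for this example.
beliefs = [
    "I am conscious.",
    "My consciousness is the reason I say I am conscious.",
]

flipped = [naive_negate(b) for b in beliefs]
# flipped[0] now denies consciousness, yet flipped[1] still appeals to
# "My consciousness" -- the flipped set contradicts itself, because
# nothing revised the beliefs that depended on the original one.
```

Figuring out which dependent beliefs to revise, and how, is the part that would require something mind-like.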
You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
What is “you” in this sentence? The mute mind identifies with the mute mind, and the translation process identifies with the translation process.
I say I am conscious precisely because I am conscious.
There are possible reasons for saying you are conscious, other than being conscious. A tape recorder can also say it is conscious. Saying something doesn’t make it true.
There are possible reasons for saying you are conscious, other than being conscious.
Yes. I have pointed this out myself. This does not suggest in any way that I have such a reason, other than being conscious.
A tape recorder can also say it is conscious.
Exactly. This is why tests like “does it say it is conscious?” or any other third person test are not valid. You can only notice that you yourself are conscious. Only a first person test is valid.
Saying something doesn’t make it true.
Exactly, and your calling into question whether the reason I say I am conscious is that I am actually conscious does not make it actually questionable. It is not.
your calling into question whether the reason I say I am conscious is that I am actually conscious does not make it actually questionable. It is not.
What the hell does “not questionable” mean?