Finally, being conscious doesn’t mean anything at all. It has no relationship to reality. At best, “X is conscious” means “X has behaviors in some sense similar to a human’s”. If a computationalist answers “no” to the first two questions, and “yes” to the last one, they’re not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That’s, by the way, similar to what compatibilists do with free will.
You say that like it’s a good thing.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside; it is about subjectivity.
You won’t find penguins in the arctic, but that doesn’t mean you get to define penguins as nonexistent, or redefine “penguin” to mean “polar bear”.
No, I’m not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside; it is about subjectivity.
It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
It would be preferable to find consciousness in the real world.
Meaning the world of exteriors? If so, is that not question begging?
Either reflected in behavior or in the physical structure of the brain.
Well, it’s definitely reflected in the physical structure of the brain, because you can tell whether someone is conscious with an fMRI scan.
I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both.
OK. Now that you have asserted it, how about justifying it?
Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
No. I am saying you shouldn’t beg questions, and you shouldn’t confuse the evidence for X with the meaning of X.
You are collapsing a bunch of issues here. You can believe that is possible to meaningfully refer to phenomena that are not fully understood. You can believe that something exists without believing it exists dualistically. And so on.
No, meaning the material, physical world. I’m glad you agree it’s there. Frankly, I have not the slightest clue what “exterior” means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I’m sure you didn’t. But I’d rather not answer your other points when I have no clue what it is that we disagree about.
because you can tell whether someone is conscious with an fMRI scan.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”, but that doesn’t generalize well.
I have not the slightest clue what “exterior” means.
It’s where you are willing to look, as opposed to where you are not. You keep insisting that consciousness can only be found in the behaviour of someone else: your opponents keep pointing out that you have the option of accessing your own.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”,
We don’t do that. We use a medical definition. “Consciousness” has a number of uses in science.
It’s where you are willing to look, as opposed to where you are not.
That’s hardly a definition. I think it’s you who is begging the question here.
You keep insisting that consciousness can only be found in the behaviour of someone else
I have no idea where you got that. I explicitly state “I say you must have both”, just a couple of posts above.
The state of being aware, or perceiving physical facts or mental concepts; a state of general wakefulness and responsiveness to environment; a functioning sensorium.
Here’s a google result for “medical definition of consciousness”. It is quite close to “brain activity”, dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?
I explicitly state “I say you must have both”, just a couple of posts above
Behaviour alone versus behaviour plus brain scans doesn’t make a relevant difference. Brain scans are still objective data about someone else. It’s still an attempt to deal with subjectivity on an objective basis.
The medical definition of consciousness is not brain activity, because there is some brain activity during sleep states and even coma. The brain is not a PC.
“It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain.”
“It would be preferable” expresses wishful thinking. The word refers to subjective experience, which is subjective by definition, while you are looking at objective things instead.
No, “it’s preferable”, same as “you should”, is fine when there is a goal specified. e.g. “it’s preferable to do X, if you want Y”. Here, the goal is implicit—“not to have stupid beliefs”. Hopefully that’s a goal we all share.
By the way, “should” with implicit goals is quite common; you should be able to handle it. (Notice the second “should”. The implicit goal is now “to participate in normal human communication”.)
“Subjective perception,” is opposite, in the relevant way, to “objective description.”
Suppose there were two kinds of things, physical and non-physical. This would not help in any way to explain consciousness, as long as you were describing the physical and non-physical things in an objective way. So you are quite right that subjective is not the opposite of physical; physicality is utterly irrelevant to it.
The point is that the word consciousness refers to subjective perception, not to any objective description, whether physical or otherwise.
Can you find another subjective concept that does not have an objective description? I’m predicting that we disagree about what “objective description” means.
Yes, I can find many others. “You seem to me to be currently mistaken,” does not have any objective description; it is how things seem to me. It is, however, correlated with various objective descriptions, such as the fact that I am arguing against you. However, none of those things summarize the meaning, which is a subjective experience.
“No, physical things have objective descriptions.”
If a physical thing has a subjective experience, that experience does not have an objective description, but a subjective one.
You observed something interesting happening in your brain, you labeled it “consciousness”. You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”. You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”. Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in a robot similar to the things happening in my brain”—it would be obvious that you won’t get a yes/no answer. They’re similar in some ways and different in others.
But if you go back to the original question, you can’t rule out that the robot is fully conscious, despite having some physical differences. The point being that translating questions about consciousness into questions about brain activity and function (in a wholesale and unguided way) isn’t superior, it’s potentially misleading.
I can rule out that the robot is conscious, because the word “conscious” has very little meaning. It’s a label of an artificial category. You can redefine “conscious” to include or exclude the robot, but that doesn’t change reality in any way. The robot is exactly as “conscious” as you are “roboticious”. You can either ask questions about brain activity and function, or you can ask no questions at all.
I can rule out that the robot is conscious, because the word “conscious” has very little meaning.
To whom? To most people, it indicates having a first person perspective, which is something rather general. It seems to mean little to you because of your gerrymandered definition of meaning. Going only by external signs, consciousness might just be some unimportant behavioural quirks.
You can redefine “conscious” to include or exclude the robot, but that doesn’t change reality in any way.
The point is not to make it vacuously true that robots are conscious. The point is to use a definition of consciousness that includes its central feature: subjectivity.
You can either ask questions about brain activity and function, or you can ask no questions at all.
Says who? I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste. The fact that having consciousness gives you that kind of access is central.
What does “not having a first person perspective” look like?
gerrymandered definition of meaning
I find my definition of meaning (of statements) very natural. Do you want to offer a better one?
subjectivity
I think you use that word as equivalent to consciousness, not as a property that consciousness has.
I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste.
All of these things have perfectly good physical representations. All of them can be done by a fairly simple bot. I don’t think that’s what you mean by consciousness.
You observed something interesting happening in your brain, you labeled it “consciousness”.
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”.
Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don’t think that conscious minds other than their own exist.
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”.
That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.
Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in a robot similar to the things happening in my brain”—it would be obvious that you won’t get a yes/no answer. They’re similar in some ways and different in others.
I’m not sure that “is the robot conscious” is really equivalent to “are the things happening in a robot similar to the things happening in my brain”. It could be that some things happening in the robot’s brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot’s brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult—we don’t have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.
I would be willing to answer “no” to the “is the robot conscious” question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer), e.g. I suspect that animals have varying degrees of consciousness: non-human apes a fairly high degree, ants a low or zero degree, etc.
I don’t see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.
Let me say it differently. There is a category in your head called “conscious entities”. Categories are formed from definitions or by picking some examples and extrapolating (or both). I say category, but it doesn’t really have to be hard and binary. I’m saying that “conscious entities” is an extrapolated category. It includes yourself, and it excludes inanimate objects. That’s something we all agree on (even “inanimate objects” may be a little shaky).
My point is that this is the whole specification of “conscious entities”. There is nothing more to help us decide which objects belong to it, besides wishful thinking. Usually we choose to include all humans or all animals. Some choose to keep themselves as the only member. Others may want to accept plants. It’s all arbitrary. You may choose to pick some precise definition, based on something measurable, but that will just be you. You’ll be better off using another label for your definition.
That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer’s is conscious is not really in question—pretty much everyone on this thread has said that. It doesn’t follow that I should drop the term or use another label; there is a common understanding of the term “conscious” that makes it useful even if we can’t know whether “X is conscious” is true in many cases.
it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer’s is conscious
There is a big gap between “difficult” and “impossible”. If a thing is “difficult to measure”, then you’re supposed to know in principle what sort of measurement you’d want to do, or what evidence you could in theory find, that proves or disproves it. If a thing is “impossible to measure”, then the thing is likely bullshit.
there is a common understanding of the term “conscious”
What understanding exactly? Besides “I’m conscious” and “rocks aren’t conscious”, what is it that you understand about consciousness?
If a thing is “impossible to measure”, then the thing is likely bullshit.
In the case of consciousness, we are talking about subjective experience. I don’t think that the fact that we can’t measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can’t get the answer to either of those things via measurement, but I don’t think that they are bullshit questions (albeit they are not particularly useful questions).
What understanding exactly? Besides “I’m conscious” and “rocks aren’t conscious”, what is it that you understand about consciousness?
In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.
You can’t get the answer to either of those things via measurement
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right? Again, there is a difference between difficult and impossible.
self-awareness and first-person experiences
Those sound like synonyms, not in any way more precise than the word “consciousness” itself.
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right?
To clarify: at the present you can’t obtain a person’s beliefs by measurement, just as at the present we have no objective test for consciousness in entities with a physiology significantly different from our own. These things are subjective but not unreal.
Those sound like synonyms, not in any way more precise than the word “consciousness” itself.
And yet I know that I have first person experiences and I know that I am self-aware via direct experience. Other people likewise know these things about themselves via direct experience. And it is possible to discuss these things based on that common understanding. So, there is no reason to stop using the word “consciousness”.
Did you mean, “at present subjective”? Because if something is objectively measurable then it is objective. Are these things both subjective and objective? Or will we stop being conscious when we get a better understanding of the brain?
I know that I have first person experiences and I know that I am self-aware via direct experience.
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
Did you mean, “at present subjective”? Because if something is objectively measurable then it is objective. Are these things both subjective and objective?
To clarify, consciousness is a subjective experience, or more precisely it is the ability to have (subjective) first person experiences. Beliefs are similarly “in the head of the believer”. Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
Interesting questions. It seems to me that self awareness is a first person experience, so I am doubtful that you could have self awareness without the ability to have first person experiences. I don’t think that they are different words for the same thing though—I suspect that there are first-person experiences other than self awareness. I don’t see how my argument or yours depends on whether or not first-person experiences and self-awareness are the same; do you ask the questions for any particular reason, or did you just find them to be interesting questions?
Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Suppose, as a thought experiment, that these things become measurable tomorrow. You said that beliefs are subjective. But how can a thing be both subjective and objectively measurable? Do beliefs stop being subjective the moment measurement becomes possible?
do you ask the questions for any particular reason
I ask them because I wanted you to play rationalist taboo (for “consciousness”), and I’m trying to decide if you succeeded or failed. I think “self awareness” could be defined as “thoughts about self” (although I’m not sure that’s what you meant). But “first person experiences” seems to be a perfect synonym for “consciousness”. Can you try again?
But how can a thing be both subjective and objectively measurable? Do beliefs stop being subjective the moment measurement becomes possible?
It is possible that there is some objective description which is 100% correlated with a subjective experience. If there is, and we are reasonably sure that it is, we would be likely to call the objective measurement a measurement of subjective experience. And it might be that the objective thing is factually identical to the subjective experience. But “this objective description is true” will never have the same meaning as “someone is having this subjective experience,” as I explained earlier.
I ask them because I wanted you to play rationalist taboo (for “consciousness”), and I’m trying to decide if you succeeded or failed. I think “self awareness” could be defined as “thoughts about self” (although I’m not sure that’s what you meant). But “first person experiences” seems to be a perfect synonym for “consciousness”. Can you try again?
Note that anyone who brings up another description which is not a synonym for “consciousness,” is not explaining consciousness, but something else. Any explanation which is actually an explanation of consciousness, and not of something else, should have the same meaning as “consciousness.”
That’s the general problem with your game of “rationalist taboo.” In essence, you are saying, “These words seem capable of expressing your position. Avoid all the words that could possibly express your position, and then see what you have to say.” Sorry, but I decline to play.
That’s the general problem with your game of “rationalist taboo.”
I can briefly explain “banana” as “bent yellow fruit”. Each of those words has a clear meaning when separated from the others. They would be meaningful even if bananas didn’t exist. On the other hand, “first person experiences” isn’t like that. There are no “third person experiences” that I’m aware of. Likewise, the only “first person” thing is “experience”. And there can be no experiences if there is no consciousness.
There are no “third person experiences” that I’m aware of.
There are no third person experiences that you have first person experiences of. But anyone else’s first person experiences will be third person experiences for you.
Likewise, the only “first person” thing is “experience”.
This is like saying that “thing” must be meaningless because the only things that exist are things. Obviously, if you keep generalizing, you will come to something most general. That does not mean it is meaningless. I would agree that we might use “experience” for the most general kind of subjective thing. But there are clearly more specific subjective things, notably like the example of feeling pain.
But anyone else’s first person experiences will be third person experiences for you.
Wow, now you’re not just assuming that consciousness exists, but that there is more than one.
This is like saying that “thing” must be meaningless because the only things that exist are things.
“Thing” is to some extent a grammatical placeholder. Everything is a thing, and there are no properties that every “thing” shares. I wouldn’t know how to play rationalist taboo for “thing”, but this isn’t true for most words, and your arguments that this must be true for “consciousness” or “experience” are pretty weak.
But there are clearly more specific subjective things, notably like the example of feeling pain.
Nobody is disagreeing. If, in another context, I asked for an explanation of “pain”, saying “experience of stubbing your toe” would be fine.
Wow, now you’re not just assuming that consciousness exists, but that there is more than one.
I am not “assuming” that consciousness exists; I know it from direct experience. I do assume that other people have it as well, because they have many properties in common with me and I expect them to have others in common as well, such as the fact that the reason I say I am conscious is that I am in fact conscious. If other people are not conscious, they would be saying this for a different reason, and there is no reason to believe that. You can certainly imagine coming to the opposite conclusion. For example, I know a fellow who says that when he was three years old, he thought his parents were not conscious beings, because their behavior was too different from his own: e.g. they do not go to the freezer and get the ice cream, even though no one is stopping them.
Nobody is disagreeing. If, in another context, I asked for an explanation of “pain”, saying “experience of stubbing your toe” would be fine.
This means you should know what the word “experience” means. In practice you are pretending not to know what it means.
Are you sure? I don’t know how to interpret your “In practice you are pretending not to know what it means”, if you do. Pretending is how the game works.
I have already said why I will not play.
No one can force you, if you don’t want to. But your arguments that there is something wrong with the game are weak.
I don’t know how to interpret your “In practice you are pretending not to know what it means”, if you do. Pretending is how the game works.
You should interpret it to mean what it says, namely that in practice you have been pretending not to know what it means. If pretending is how the game works, and you are playing that game, then it is not surprising that you are pretending. Nothing complicated about this.
Perhaps your objection is that I should not have said it in an accusatory manner. But the truth is that it is rude to play that game with someone who does not want to play, and I already explained that I do not, and why.
No one can force you, if you don’t want to. But your arguments that there is something wrong with the game are weak.
You certainly haven’t provided any refutation of my reasons for that. Once again, in essence you are saying, “describe conscious experience from a third person point of view.” But that cannot be done, even in principle. If you describe anything from a third person point of view, you are not describing a personal experience. So it would be like saying, “describe a banana, but make sure you don’t say anything that would imply the conclusion that it is a kind of fruit.” A banana really is a fruit, so any description that cannot imply that it is, is necessarily incomplete. And a pain really is a subjective feeling, so any description which does not include subjectivity or something equivalent cannot be a description of pain.
Once again, in essence you are saying, “describe conscious experience from a third person point of view.”
I don’t think I actually said something like that. I’m just asking you to describe “conscious experience” without the words “conscious” and “experience”. You expect that I will reject every description you could offer, but you haven’t actually tried any. If you did try a few descriptions and I did find something wrong with each of them (which is not unlikely), your arguments would look a lot more serious.
But now I can only assume that you simply can’t think of any such descriptions. You see, “I don’t want to play” is different from “I give up”. I think you’re confusing them.
A banana really is a fruit, so any description that cannot imply that it is, is necessarily incomplete.
All descriptions are incomplete. You just have to provide a description that matches bananas better than it matches apples or sausages. A malicious adversary can always construct some object which would match your description without really being a banana, but at some point the construction will have to be so long and bizarre and the difference so small that we can disregard it.
Obviously, if someone says “ouch” because he wishes to deceive you that he is feeling pain, pain will not be the wish to deceive someone that he is feeling pain.
Again, all descriptions are incomplete. “What makes someone say ouch” is quite accurate considering its length.
You expect that I will reject every description you could offer, but you haven’t actually tried any. If you did try a few descriptions and I did find something wrong with each of them (which is not unlikely), your arguments would look a lot more serious.
There is a reason I expect that. Namely, you criticized a proposed definition on the grounds that it was “synonymous” with consciousness. But that’s exactly what it was supposed to be: we are talking about consciousness, not something else. So any definition I propose is going to be synonymous or extremely close to that; otherwise I would not propose it.
But now I can only assume that you simply can’t think of any such descriptions.
Your assumption is false. Let’s say “personal perception.” Obviously I can anticipate your criticism, just as I said above.
All descriptions are incomplete. You just have to provide a description that matches bananas better than it matches apples or sausages. A malicious adversary can always construct some object which would match your description without really being a banana, but at some point the construction will have to be so long and bizarre and the difference so small that we can disregard it.
If your description of a banana does not suggest that it is fruit, your description will be extremely incomplete, not just a little incomplete. In the same way, if a description of consciousness does not imply that it is subjective, it will be extremely incomplete.
Again, all descriptions are incomplete. “What makes someone say ouch” is quite accurate considering its length.
The point is that you are ignoring what is obviously central to the idea of pain, which is the way it feels.
So any definition I propose is going to be synonymous or extremely close to that; otherwise I would not propose it.
Again you confirm that you don’t understand what the game taboo is (rationalist or not). “Yellow bent fruit” is not a synonym of “banana”.
“personal perception.”
My criticism is that this description obviously matches a roomba. It can definitely perceive walls (it can become aware of them through sensors) and I don’t see why this perception wouldn’t be personal (it happens completely within the roomba), although I suspect that this word might mean something special for you. Now, as I say this, I assume that you don’t consider the roomba conscious. If you do, then maybe I have no criticisms.
Is that the criticism you anticipated?
If your description of a banana does not suggest that it is fruit, your description will be extremely incomplete
I don’t know what sort of scale of incompleteness you have. Actually, there could be an agent who can recognize bananas exactly as well as you, without actually knowing whether they grow on plants or are made in factories. A banana has many distinctive properties, growing on plants is not the most important one.
The point is that you are ignoring what is obviously central to the idea of pain, which is the way it feels.
How does it feel? It feels bad, of course, but what else?
I don’t think that a roomba notices or perceives anything.
Why do you not think that? If there is something I’m not getting about that word, try making your taboo explanation longer and more precise.
By the way, I have some problems with “subjective”. There is a meaning that I find reasonable (something similar to “different” or “secret”), and there is a meaning that exactly corresponds to consciousness (I can just replace the “subjectively” in your last post with “consciously” and lose nothing). Try not to use it either.
Among other things, it usually feels a bit like heat. Why do you ask?
More specifically I want to know, of all the feelings that you are capable of, how do you recognize that the feeling that follows stubbing your toe is the one that is pain? What distinctive properties does it have?
Off topic, does it really feel like heat? I’m sweating right now, and I don’t think that’s very similar to pain. Of course, getting burned causes pain. Also, hurting yourself can produce swelling, which does feel warm, so that’s another way to explain your association.
I could say that a roomba is a mere machine, but you would probably object that this is just saying it is not conscious. Another way to describe this, in this particular context, is that the roomba’s actions do not constitute a coherent whole, and “perception” is a single coherent activity, and therefore conscious.
As I said, I’m not playing your game anyway, and I feel no obligation to describe what I think in your words rather than mine, especially since you know quite well what I am talking about here, even if you pretend to fail to understand.
More specifically I want to know, of all the feelings that you are capable of, how do you recognize that the feeling that follows stubbing your toe is the one that is pain?
By recognizing that it is similar to the other feelings that I have called pain. It absolutely is not by verbally describing how it feels or anything else, even if I can do so if I wish. That is true of all words: when we recognize that something is a chair or a lamp, we simply immediately note that the thing is similar to other things that we have called chairs or lamps. We do not need to come up with some verbal description, and especially some third person description, as you were fishing for there, in order to see that the thing falls into its category.
I’m sweating right now, and I don’t think that’s very similar to pain. Of course, getting burned causes pain.
It is not just that getting burned causes pain, but intense pain also feels similar to intense heat. Sweating is not an intense case of anything, so there wouldn’t be much similarity.
Also, hurting yourself can produce swelling, which does feel warm, so that’s another way to explain your association.
I am talking about how it feels at the time, not afterwards. And the “association” does not need to be explained by anything except how it feels at the time, not by any third person description like “this swelled up afterwards.”
but you would probably object that this is just saying it is not conscious
I would also object by saying that a human is also a “mere machine”.
the the roomba’s actions do not constitute a coherent whole
I have no idea what “coherent whole” means. Is a roomba incoherent in some way?
you know quite well what I am talking about here
At times I honestly don’t.
By recognizing that it is similar to the other feelings that I have called pain.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
Sweating is not an intense case of anything, so there wouldn’t be much similarity.
I think you misunderstood me. Sweating is what people do when they’re hot. I’m saying that pain isn’t really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
I would also object by saying that a human is also a “mere machine”.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious or aware of what it is doing, and consequently it does not perceive anything, because “perceiving” includes being conscious and being aware.
In that way, humans are not mere machines, because they are conscious beings that are aware of what they are doing and they perceive things.
I have no idea what “coherent whole” means. Is roomba incoherent is some way?
The human performs the unified action of “perceiving” and we know that it is unified because we experience it as a unified whole. The roomba just has each part of it moved by other parts, and we have no reason to think that these form a unified whole, since we have no reason to think it experiences anything.
In all of these cases, of course, the situation would be quite different if the roomba was conscious. Then it would also perceive what it was doing, it would not be a mere machine, and its actions would be unified.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
The mind does the work of recognizing similarity for us. We don’t have to give a verbal description in order to recognize similarity, much less a third person description, as you are seeking here.
I’m saying that pain isn’t really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious
Oh, so “mere machine” is just a pure synonym of “not conscious”? Then I guess you were right about what my problem is. Taboo or not, your only argument for why the roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
The roomba just has each part of it moved by other parts
Are you implying that humans do not have parts that move other parts?
The mind does the work of recognizing similarity for us.
No, you misunderstood my question. I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
You’re wrong.
Maybe one of us is really a sentient roomba, pretending to be human? Who knows!
Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to “parts moved by other parts”? It’s not just that some arrangements of matter are conscious and others are not?
If there are parts, there is also a whole. A whole is not the same as parts. So if you mean by “reductionism” that there are only parts and no wholes, then reductionism is false.
If you mean by reductionism that a thing is made of its parts rather than made of its parts plus one other part, then reductionism is true: a whole is made out of its parts, not of the parts plus another part (which would be redundant and absurd). But it is made “out of” them; it is not the same as the parts.
Oh, so “mere machine” just a pure synonym of “not conscious”?
No. It also means not any other thing similar to consciousness, even if not exactly consciousness.
Taboo or not, your only argument why roomba is not conscious, is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
My reason is that we have no reason to think that a roomba is conscious.
I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
It also means not any other thing similar to consciousness, even if not exactly consciousness.
I have no idea what that means (a few typos maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren’t identical. But if there is some difference between “mere machine” and “unconscious machine”, you have to point it out for me.
My reason is that we have no reason to think that a roomba is conscious.
Hypothetically, what could a reason to think that a robot is conscious look like?
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
“Pain” is a word and humans aren’t born knowing it. What does “no extra step” even mean? There are a few obvious steps. You have this habit of claiming something to be self-evident, when you’re clearly just confused.
I have no idea what that means (a few typos maybe?).
No typos. I meant we know that there are two kinds of things: objective facts and subjective perceptions. As far as anyone knows, there could be a third thing intermediate between those (for example.) So the robot might have something else that we don’t know about.
Hypothetically, what could a reason to think that a robot is conscious look like?
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
You have this habit of claiming something to be self-evident, when you’re clearly just confused.
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
Why is this a probable reason? You have one data point—yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?
If a robot has hands with five fingers, that will also be evidence that it is conscious. This is how induction works; similarity in some properties is evidence of similarity in other properties.
I perform many human behaviors because I am conscious. So the fact that the robot performs similar behaviors is inductive evidence that it performs those behaviors because it is conscious. This does not apply to the number of fingers, which is only evidence by correlation.
I perform many human behaviors because I am conscious.
Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn’t just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?
Why do you think that there is a causal relationship between having consciousness and behavior?
I am conscious. The reason why I wrote the previous sentence is because I am conscious. As for how I know that this statement is true and I am not just a passive observer: how do you know you don’t just agree with me about this whole discussion, and you are mechanically writing statements you don’t agree with?
Are you sure that consciousness isn’t just a passive observer?
Yes, for the above reason.
Also, why do you think that there is no causal relationship between having consciousness and five fingers?
In general, because there is no reason to believe that there is. Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
The reason why I wrote the previous sentence is because I am conscious.
That’s just paraphrasing your previous claim.
how do you know you don’t just agree with me about this you whole discussion, and you are mechanically writing statements you don’t agree with?
I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I then identify “myself” with this mind, rather than the one that’s mute.
Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
It is not just paraphrasing. It is giving an example of a particular case where it is obviously true.
Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own.
Nonsense. Google could easily add a module to Google Translate that would convert a statement into its opposite. That would not give Google Translate a mind of its own.
I then identify “myself” with this mind, rather than the one that’s mute.
Nope. You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
Obviously I do not take this seriously, but I take it just as seriously as the claim that my consciousness does not cause me to say that I am conscious.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
I replied with an example, namely that I say I am conscious precisely because I am conscious. I do not need to argue for this, and I will not.
Google could easily add a module to Google Translate that would convert a statement into its opposite.
No, Google could maybe add “not” before every “conscious”, in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.
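The shallow substitution described here really is trivial to mechanize. The following is a hypothetical sketch (the function name and regex are mine, not any real Google Translate feature) of the “add ‘not’ before every ‘conscious’” module:

```python
import re

def flip_conscious(text: str) -> str:
    """Toy negation module: insert "not" before every standalone
    "conscious" that isn't already negated, and drop the "not" where
    it is. Purely mechanical string surgery; an illustrative sketch."""
    def flip(match: re.Match) -> str:
        # group(1) holds a preceding "not " if present; toggle it.
        return match.group(2) if match.group(1) else "not " + match.group(2)
    return re.sub(r"(not\s+)?\b(conscious)\b", flip, text)

print(flip_conscious("I am conscious."))               # I am not conscious.
print(flip_conscious("The roomba is not conscious."))  # The roomba is conscious.
```

The thread’s point survives the sketch: nothing here tracks which other beliefs would have to change to keep the negated claims mutually consistent, and that consistency-keeping is the hard part being argued about.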
You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
What is “you” in this sentence? The mute mind identifies with the mute mind, and the translation process identifies with the translation process.
I say I am conscious precisely because I am conscious.
There are possible reasons for saying you are conscious, other than being conscious. A tape recorder can also say it is conscious. Saying something doesn’t make it true.
There are possible reasons for saying you are conscious, other than being conscious.
Yes. I have pointed this out myself. This does not suggest in any way that I have such a reason, other than being conscious.
A tape recorder can also say it is conscious.
Exactly. This is why tests like “does it say it is conscious?” or any other third person test are not valid. You can only notice that you yourself are conscious. Only a first person test is valid.
Saying something doesn’t make it true.
Exactly, and you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.
you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.
You say that like its a good thing.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside, is about subjectivity.
You won’t find penguins in the Arctic, but that doesn’t mean you get to define penguins as nonexistent, or redefine “penguin” to mean “polar bear”.
No, I’m not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.
It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
Meaning the world of exteriors? If so, is that not question begging?
Well, it’s definitely reflected in the physical structure of the brain, because you can tell whether someone is conscious with an fMRI scan.
OK. Now that you have asserted it, how about justifying it?
No. I am saying you shouldn’t beg questions, and you shouldn’t confuse the evidence for X with the meaning of X.
You are collapsing a bunch of issues here. You can believe that it is possible to meaningfully refer to phenomena that are not fully understood. You can believe that something exists without believing it exists dualistically. And so on.
No, meaning the material, physical world. I’m glad you agree it’s there. Frankly, I have not the slightest clue what “exterior” means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I’m sure you didn’t. But I’d rather not answer your other points when I have no clue about what it is that we disagree about.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”, but that doesn’t generalize well.
It’s where you are willing to look, as opposed to where you are not. You keep insisting that consciousness can only be found in the behaviour of someone else; your opponents keep pointing out that you have the option of accessing your own.
We don’t do that. We use a medical definition. “Consciousness” has a number of uses in science.
That’s hardly a definition. I think it’s you who is begging the question here.
I have no idea where you got that. I explicitly state “I say you must have both”, just a couple of posts above.
Here’s a google result for “medical definition of consciousness”. It is quite close to “brain activity”, dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?
Behaviour alone versus behaviour plus brain scans doesn’t make a relevant difference. Brain scans are still objective data about someone else. It’s still an attempt to deal with subjectivity on an objective basis.
The medical definition of consciousness is not brain activity, because there is some degree of brain activity during sleep states and even coma. The brain is not a PC.
“It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain.”
“It would be preferable” expresses wishful thinking. The word refers to subjective experience, which is subjective by definition, while you are looking at objective things instead.
No, “it’s preferable”, same as “you should”, is fine when there is a goal specified. e.g. “it’s preferable to do X, if you want Y”. Here, the goal is implicit—“not to have stupid beliefs”. Hopefully that’s a goal we all share.
By the way, “should” with implicit goals is quite common; you should be able to handle it. (Notice the second “should”. The implicit goal is now “to participate in normal human communication”.)
We can understand that the word consciousness refers to something subjective (as it obviously does) without having stupid beliefs.
Subjective is not the opposite of physical.
Indeed.
“Subjective perception,” is opposite, in the relevant way, to “objective description.”
Suppose there were two kinds of things, physical and non-physical. This would not help in any way to explain consciousness, as long as you were describing the physical and non-physical things in an objective way. So you are quite right that subjective is not the opposite of physical; physicality is utterly irrelevant to it.
The point is that the word consciousness refers to subjective perception, not to any objective description, whether physical or otherwise.
No, physical things have objective descriptions.
Can you find another subjective concept that does not have an objective description? I’m predicting that we disagree about what “objective description” means.
Yes, I can find many others. “You seem to me to be currently mistaken” does not have any objective description; it is how things seem to me. It however is correlated with various objective descriptions, such as the fact that I am arguing against you. However, none of those things summarize the meaning, which is a subjective experience.
“No, physical things have objective descriptions.”
If a physical thing has a subjective experience, that experience does not have an objective description, but a subjective one.
I find myself to be conscious every day. I don’t understand what you find “unreal” about direct experience.
Here’s what I think happened.
You observed something interesting happening in your brain, you labeled it “consciousness”.
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”.
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”.
Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in a robot similar to the things happening in my brain”—it would be obvious that you won’t get a yes/no answer. They’re similar in some ways and different in others.
But if you go back to the original question, you can’t rule out that the robot is fully conscious, despite having some physical differences. The point being that translating questions about consciousness into questions about brain activity and function (in a wholesale and unguided way) isn’t superior, it’s potentially misleading.
I can rule out that the robot is conscious, because the word “conscious” has very little meaning. It’s a label of an artificial category. You can redefine “conscious” to include or exclude the robot, but that doesn’t change reality in any way. The robot is exactly as “conscious” as you are “roboticious”. You can either ask questions about brain activity and function, or you can ask no questions at all.
There’s nothing artificial about direct experience.
To whom? To most people, it indicates having a first person perspective, which is something rather general. It seems to mean little to you because of your gerrymandered definition of meaning. Going only by external signs, consciousness might just be some unimportant behavioural quirks.
The point is not to make it vacuously true that robots are conscious. The point is to use a definition of consciousness that includes its central feature: subjectivity.
Says who? I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste. The fact that having consciousness gives you that kind of access is central.
What does “not having a first person perspective” look like?
I find my definition of meaning (of statements) very natural. Do you want to offer a better one?
I think you use that word as equivalent to consciousness, not as a property that consciousness has.
All of these things have perfectly good physical representations. All of them can be done by a fairly simple bot. I don’t think that’s what you mean by consciousness.
Not if “perfectly good” means “known”.
It’s ok, it doesn’t. Why do people keep bringing up current knowledge?
Because we are trying to communicate now, but your semantic scheme requires knowledge that is only available in the future, if at all.
Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don’t think that conscious minds other than their own exist.
That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.
I’m not sure that “is the robot conscious” is really equivalent to “are the things happening in a robot similar to the things happening in my brain”. It could be that some things happening in the robot’s brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot’s brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult—we don’t have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.
I would be willing to answer “no” to the “is the robot conscious” question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer); e.g. I suspect that animals have varying degrees of consciousness: non-human apes a fairly high degree, ants a low or zero degree, etc.
I don’t see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.
Let me say it differently. There is a category in your head called “conscious entities”. Categories are formed from definitions or by picking some examples and extrapolating (or both). I say category, but it doesn’t really have to be hard and binary. I’m saying that “conscious entities” is an extrapolated category. It includes yourself, and it excludes inanimate objects. That’s something we all agree on (even “inanimate objects” may be a little shaky).
My point is that this is the whole specification of “conscious entities”. There is nothing more to help us decide which objects belong to it, besides wishful thinking. Usually we choose to include all humans or all animals. Some choose to keep themselves as the only member. Others may want to accept plants. It’s all arbitrary. You may choose to pick some precise definition, based on something measurable, but that will just be you. You’ll be better off using another label for your definition.
That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer’s is conscious is not really in question; pretty much everyone on this thread has said that. It doesn’t follow that I should drop the term or “use another label”; there is a common understanding of the term “conscious” that makes it useful even if we can’t know whether “X is conscious” is true in many cases.
There is a big gap between “difficult” and “impossible”. If a thing is “difficult to measure”, then you’re supposed to know in principle what sort of measurement you’d want to do, or what evidence you could in theory find, that proves or disproves it. If a thing is “impossible to measure”, then the thing is likely bullshit.
What understanding exactly? Besides “I’m conscious” and “rocks aren’t conscious”, what is it that you understand about consciousness?
In the case of consciousness, we are talking about subjective experience. I don’t think that the fact that we can’t measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can’t get the answer to either of those things via measurement, but I don’t think that they are bullshit questions (albeit they are not particularly useful questions).
In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right? Again, there is a difference between difficult and impossible.
Those sound like synonyms, not in any way more precise than the word “consciousness” itself.
To clarify: at the present you can’t obtain a person’s beliefs by measurement, just as at the present we have no objective test for consciousness in entities with a physiology significantly different from our own. These things are subjective but not unreal.
And yet I know that I have first person experiences and I know that I am self-aware via direct experience. Other people likewise know these things about themselves via direct experience. And it is possible to discuss these things based on that common understanding. So, there is no reason to stop using the word “consciousness”.
Did you mean, “at present subjective”? Because if something is objectively measurable, then it is objective. Are these things both subjective and objective? Or will we stop being conscious when we get a better understanding of the brain?
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
To clarify, consciousness is a subjective experience, or more precisely it is the ability to have (subjective) first person experiences. Beliefs are similarly “in the head of the believer”. Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Interesting questions. It seems to me that self awareness is a first person experience, so I am doubtful that you could have self awareness without the ability to have first person experiences. I don’t think that they are different words for the same thing though—I suspect that there are first-person experiences other than self awareness. I don’t see how my argument or yours depends on whether or not first-person experiences and self-awareness are the same; do you ask the questions for any particular reason, or did you just find them to be interesting questions?
Suppose, as a thought experiment, that these things become measurable tomorrow. You said that beliefs are subjective. But how can a thing be both subjective and objectively measurable? Do beliefs stop being subjective the moment measurement becomes possible?
I ask them because I wanted you to play rationalist taboo (for “consciousness”), and I’m trying to decide if you succeeded or failed. I think “self awareness” could be defined as “thoughts about self” (although I’m not sure that’s what you meant). But “first person experiences” seems to be a perfect synonym for “consciousness”. Can you try again?
It is possible that there is some objective description which is 100% correlated with a subjective experience. If there is, and we are reasonably sure that it is, we would be likely to call the objective measurement a measurement of subjective experience. And it might be that the objective thing is factually identical to the subjective experience. But “this objective description is true” will never have the same meaning as “someone is having this subjective experience,” as I explained earlier.
Note that anyone who brings up another description which is not a synonym for “consciousness,” is not explaining consciousness, but something else. Any explanation which is actually an explanation of consciousness, and not of something else, should have the same meaning as “consciousness.”
That’s the general problem with your game of “rationalist taboo.” In essence, you are saying, “These words seem capable of expressing your position. Avoid all the words that could possibly express your position, and then see what you have to say.” Sorry, but I decline to play.
I can briefly explain “banana” as “bent yellow fruit”. Each of those words has a clear meaning when separated from the others. They would be meaningful even if bananas didn’t exist. On the other hand, “first person experiences” isn’t like that. There are no “third person experiences” that I’m aware of. Likewise, the only “first person” thing is “experience”. And there can be no experiences if there is no consciousness.
There are no third person experiences that you have first person experiences of. But anyone else’s first person experiences will be third person experiences for you.
This is like saying that “thing” must be meaningless because the only things that exist are things. Obviously, if you keep generalizing, you will come to something most general. That does not mean it is meaningless. I would agree that we might use “experience” for the most general kind of subjective thing. But there are clearly more specific subjective things, notably like the example of feeling pain.
Wow, now you’re not just assuming that consciousness exists, but that there is more than one.
“Thing” is to some extent a grammatical placeholder. Everything is a thing, and there are no properties that every “thing” shares. I wouldn’t know how to play rationalist taboo for “thing”, but this isn’t true for most words, and your arguments that this must be true for “consciousness” or “experience” are pretty weak.
Nobody is disagreeing. If, in another context, I asked for an explanation of “pain”, saying “experience of stubbing your toe” would be fine.
I am not “assuming” that consciousness exists; I know it from direct experience. I do assume that other people have it as well, because they have many properties in common with me and I expect them to have others in common as well, such as the fact that the reason I say I am conscious is that I am in fact conscious. If other people are not conscious, they would be saying this for a different reason, and there is no reason to believe that. You can certainly imagine coming to the opposite conclusion. For example, I know a fellow who says that when he was three years old, he thought his parents were not conscious beings, because their behavior was too different from his own: e.g. they do not go to the freezer and get the ice cream, even though no one is stopping them.
This means you should know what the word “experience” means. In practice you are pretending not to know what it means.
Yes, I said “in another context”. In the current context it’s both “consciousness” and “experience” that I need explained.
You don’t know what rationalist taboo (or even regular taboo) is, do you? Here: https://wiki.lesswrong.com/wiki/Rationalist_taboo maybe that will clear some things up for you.
Sentences like this are exactly why I need you to play taboo.
Yes, I do know what you are talking about here.
I have already said why I will not play.
Are you sure? I don’t know how to interpret your “In practice you are pretending not to know what it means”, if you do. Pretending is how the game works.
No one can force you, if you don’t want to. But your arguments that there is something wrong with the game are weak.
Quite sure.
You should interpret it to mean what it says, namely that in practice you have been pretending not to know what it means. If pretending is how the game works, and you are playing that game, then it is not surprising that you are pretending. Nothing complicated about this.
Perhaps your objection is that I should not have said it in an accusatory manner. But the truth is that it is rude to play that game with someone who does not want to play, and I already explained that I do not, and why.
You certainly haven’t provided any refutation of my reasons for that. Once again, in essence you are saying, “describe conscious experience from a third person point of view.” But that cannot be done, even in principle. If you describe anything from a third person point of view, you are not describing a personal experience. So it would be like saying, “describe a banana, but make sure you don’t say anything that would imply the conclusion that it is a kind of fruit.” A banana really is a fruit, so any description that cannot imply that it is, is necessarily incomplete. And a pain really is a subjective feeling, so any description which does not include subjectivity or something equivalent cannot be a description of pain.
I don’t think I actually said something like that. I’m just asking you to describe “conscious experience” without the words “conscious” and “experience”. You expect that I will reject every description you could offer, but you haven’t actually tried any. If you did try a few descriptions and I did find something wrong with each of them (which is not unlikely), your arguments would look a lot more serious.
But now I can only assume that you simply can’t think of any such descriptions. You see, “I don’t want to play” is different from “I give up”. I think you’re confusing them.
All descriptions are incomplete. You just have to provide a description that matches bananas better than it matches apples or sausages. A malicious adversary can always construct some object which would match your description without really being a banana, but at some point the construction will have to be so long and bizarre and the difference so small that we can disregard it.
Again, all descriptions are incomplete. “What makes someone say ouch” is quite accurate considering its length.
There is a reason I expect that. Namely, you criticized a proposed definition on the grounds that it was “synonymous” with consciousness. But that’s exactly what it was supposed to be: we are talking about consciousness, not something else. So any definition I propose is going to be synonymous or extremely close to that; otherwise I would not propose it.
Your assumption is false. Let’s say “personal perception.” Obviously I can anticipate your criticism, just as I said above.
If your description of a banana does not suggest that it is fruit, your description will be extremely incomplete, not just a little incomplete. In the same way, if a description of consciousness does not imply that it is subjective, it will be extremely incomplete.
The point is that you are ignoring what is obviously central to the idea of pain, which is the way it feels.
Again you confirm that you don’t understand what the game taboo is (rationalist or not). “Yellow bent fruit” is not a synonym of “banana”.
My criticism is that this description obviously matches a roomba. It can definitely perceive walls (it can become aware of them through sensors) and I don’t see why this perception wouldn’t be personal (it happens completely within the roomba), although I suspect that this word might mean something special for you. Now, as I say this, I assume that you don’t consider the roomba conscious. If you do, then maybe I have no criticisms.
Is that the criticism you anticipated?
I don’t know what sort of scale of incompleteness you have. Actually, there could be an agent who can recognize bananas exactly as well as you, without actually knowing whether they grow on plants or are made in factories. A banana has many distinctive properties, growing on plants is not the most important one.
How does it feel? It feels bad, of course, but what else?
“Perception” includes subjectively noticing something, not just being affected by it. I don’t think that a roomba notices or perceives anything.
Among other things, it usually feels a bit like heat. Why do you ask?
Why do you not think that? If there is something I’m not getting about that word, try making your taboo explanation longer and more precise.
By the way, I have some problems with “subjective”. There is a meaning that I find reasonable (something similar to “different” or “secret”), and there is a meaning that exactly corresponds to consciousness (I can just replace the “subjectively” in your last post with “consciously” and lose nothing). Try not to use it either.
More specifically I want to know, of all the feelings that you are capable of, how do you recognize that the feeling that follows stubbing your toe is the one that is pain? What distinctive properties does it have?
Off topic, does it really feel like heat? I’m sweating right now, and I don’t think that’s very similar to pain. Of course, getting burned causes pain. Also, hurting yourself can produce swelling, which does feel warm, so that’s another way to explain your association.
I could say that a roomba is a mere machine, but you would probably object that this is just saying it is not conscious. Another way to describe this, in this particular context, is that the roomba’s actions do not constitute a coherent whole, and “perception” is a single coherent activity, and therefore conscious.
As I said, I’m not playing your game anyway, and I feel no obligation to describe what I think in your words rather than mine, especially since you know quite well what I am talking about here, even if you pretend to fail to understand.
By recognizing that it is similar to the other feelings that I have called pain. It absolutely is not by verbally describing how it feels or anything else, even if I can do so if I wish. That is true of all words: when we recognize that something is a chair or a lamp, we simply immediately note that the thing is similar to other things that we have called chairs or lamps. We do not need to come up with some verbal description, and especially some third person description, as you were fishing for there, in order to see that the thing falls into its category.
It is not just that getting burned causes pain, but intense pain also feels similar to intense heat. Sweating is not an intense case of anything, so there wouldn’t be much similarity.
I am talking about how it feels at the time, not afterwards. And the “association” does not need to be explained by anything except how it feels at the time, not by any third person description like “this swelled up afterwards.”
I would also object by saying that a human is also a “mere machine”.
I have no idea what “coherent whole” means. Is the roomba incoherent in some way?
At times I honestly don’t.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
I think you misunderstood me. Sweating is what people do when they’re hot. I’m saying that pain isn’t really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious or aware of what it is doing, and consequently it does not perceive anything, because “perceiving” includes being conscious and being aware.
In that way, humans are not mere machines, because they are conscious beings that are aware of what they are doing and they perceive things.
The human performs the unified action of “perceiving” and we know that it is unified because we experience it as a unified whole. The roomba just has each part of it moved by other parts, and we have no reason to think that these form a unified whole, since we have no reason to think it experiences anything.
In all of these cases, of course, the situation would be quite different if the roomba was conscious. Then it would also perceive what it was doing, it would not be a mere machine, and its actions would be unified.
The mind does the work of recognizing similarity for us. We don’t have to give a verbal description in order to recognize similarity, much less a third person description, as you are seeking here.
You’re wrong.
Oh, so “mere machine” is just a pure synonym of “not conscious”? Then I guess you were right about what my problem is. Taboo or not, your only argument for why the roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
Are you implying that humans do not have parts that move other parts?
No, you misunderstood my question. I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
Maybe one of us is really a sentient roomba, pretending to be human? Who knows!
No. I said the roomba “just” has that. Humans are also aware of what they are doing.
Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to “parts moved by other parts”? It’s not just that some arrangements of matter are conscious and others are not?
If there are parts, there is also a whole. A whole is not the same as parts. So if you mean by “reductionism” that there are only parts and no wholes, then reductionism is false.
If you mean by reductionism that a thing is made of its parts rather than made of its parts plus one other part, then reductionism is true: a whole is made out of its parts, not of the parts plus another part (which would be redundant and absurd). But it is made “out of” them; it is not the same as the parts.
No. It also means not any other thing similar to consciousness, even if not exactly consciousness.
My reason is that we have no reason to think that a roomba is conscious.
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
I have no idea what that means (a few typos maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren’t identical. But if there is some difference between “mere machine” and “unconscious machine”, you have to point it out for me.
Hypothetically, what could a reason to think that a robot is conscious look like?
“Pain” is a word and humans aren’t born knowing it. What does “no extra step” even mean? There are a few obvious steps. You have this habit of claiming something to be self-evident, when you’re clearly just confused.
No typos. I meant we know that there are two kinds of things: objective facts and subjective perceptions. As far as anyone knows, there could be a third thing intermediate between those (for example). So the robot might have something else that we don’t know about.
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
Wrong.
Why is this a probable reason? You have one data point—yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?
If a robot has hands with five fingers, that will also be evidence that it is conscious. This is how induction works; similarity in some properties is evidence of similarity in other properties.
But surely, you believe that human-like behavior is stronger evidence than a hand with five fingers. Why is that?
I perform many human behaviors because I am conscious. So the fact that the robot performs similar behaviors is inductive evidence that it performs those behaviors because it is conscious. This does not apply to the number of fingers, which is only evidence by correlation.
Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn’t just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?
I am conscious. The reason why I wrote the previous sentence is that I am conscious. As for how I know that this statement is true and I am not just a passive observer: how do you know that you don’t actually agree with me in this whole discussion, and are just mechanically writing statements you don’t agree with?
Yes, for the above reason.
In general, because there is no reason to believe that there is. Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
That’s just paraphrasing your previous claim.
I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I would then identify “myself” with this mind, rather than the one that’s mute.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
It is not just paraphrasing. It is giving an example of a particular case where it is obviously true.
Nonsense. Google could easily add a module to Google Translate that would convert a statement into its opposite. That would not give Google Translate a mind of its own.
Nope. You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
Obviously I do not take this seriously, but I take it just as seriously as the claim that my consciousness does not cause me to say that I am conscious.
I replied with an example, namely that I say I am conscious precisely because I am conscious. I do not need to argue for this, and I will not.
No, Google could maybe add “not” before every “conscious” in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.
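To make the distinction concrete, here is a toy sketch of the kind of naive “negation module” being discussed. The function name and the regex approach are my own illustration, not anything Google Translate actually does; the point is that mechanical substitution is easy, while revising the surrounding beliefs for consistency is the hard part.

```python
import re

def negate_conscious(text: str) -> str:
    """Naively insert 'not' before each standalone 'conscious',
    unless it is already negated. Purely mechanical: it does nothing
    to keep the speaker's other stated beliefs consistent."""
    return re.sub(r"\b(?<!not )conscious\b", "not conscious", text)

print(negate_conscious("I am conscious, and I say so because I am conscious."))
```

The module flips every claim, but it cannot notice that the stated reason now contradicts the stated conclusion; that consistency-maintenance work is what the comment above says would require a mind of its own.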
What is “you” in this sentence? The mute mind identifies with the mute mind, and the translation process identifies with the translation process.
There are possible reasons for saying you are conscious, other than being conscious. A tape recorder can also say it is conscious. Saying something doesn’t make it true.
Yes. I have pointed this out myself. This does not suggest in any way that I have such a reason, other than being conscious.
Exactly. This is why tests like “does it say it is conscious?” or any other third person test are not valid. You can only notice that you yourself are conscious. Only a first person test is valid.
Exactly, and your calling into question whether the reason I say I am conscious is that I am actually conscious does not make it actually questionable. It is not.
What the hell does “not questionable” mean?