The three examples deal with different kinds of things.
Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don’t, they should be physically stored somehow. In that sense they are the most real of the three.
Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find some subject who claims to have the skill perform slower than another who claims not to. Ultimately, “I have a skill to do X” means “I believe I’m better than most at X”, and while it is a belief as good as the previous one, it’s a little less direct.
Finally, being conscious doesn’t mean anything at all. It has no relationship to reality. At best, “X is conscious” means “X has behaviors in some sense similar to a human’s”. If a computationalist answers “no” to the first two questions, and “yes” to the last one, they’re not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That’s, by the way, similar to what compatibilists do with free will.
You say that like it’s a good thing.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside; it is about subjectivity.
You won’t find penguins in the Arctic, but that doesn’t mean you get to define penguins as nonexistent, or redefine “penguin” to mean “polar bear”.
No, I’m not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside; it is about subjectivity.
It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
It would be preferable to find consciousness in the real world.
Meaning the world of exteriors? If so, is that not question begging?
Either reflected in behavior or in the physical structure of the brain.
Well, it’s definitely reflected in the physical structure of the brain, because you can tell whether someone is conscious with an fMRI scan.
I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both.
OK. Now that you have asserted it, how about justifying it?
Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
No. I am saying you shouldn’t beg questions, and you shouldn’t confuse the evidence for X with the meaning of X.
You are collapsing a bunch of issues here. You can believe that it is possible to meaningfully refer to phenomena that are not fully understood. You can believe that something exists without believing it exists dualistically. And so on.
No, meaning the material, physical world. I’m glad you agree it’s there. Frankly, I haven’t the slightest clue what “exterior” means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I’m sure you didn’t. But I’d rather not answer your other points, when I have no clue about what it is that we disagree about.
because you can tell whether someone is conscious with an fMRI scan.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”, but that doesn’t generalize well.
I haven’t the slightest clue what “exterior” means.
It’s where you are willing to look, as opposed to where you are not. You keep insisting that consciousness can only be found in the behaviour of someone else: your opponents keep pointing out that you have the option of accessing your own.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”,
We don’t do that. We use a medical definition. “Consciousness” has a number of uses in science.
It’s where you are willing to look, as opposed to where you are not.
That’s hardly a definition. I think it’s you who is begging the question here.
You keep insisting that consciousness can only be found in the behaviour of someone else
I have no idea where you got that. I explicitly state “I say you must have both”, just a couple of posts above.
The state of being aware, or perceiving physical facts or mental concepts; a state of general wakefulness and responsiveness to environment; a functioning sensorium.
Here’s a google result for “medical definition of consciousness”. It is quite close to “brain activity”, dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?
I explicitly state “I say you must have both”, just a couple of posts above
Behaviour alone versus behaviour plus brain scans doesn’t make a relevant difference.
Brain scans are still objective data about someone else. It’s still an attempt to deal with subjectivity on an objective basis.
The medical definition of consciousness is not brain activity, because there is some sort of brain activity during sleep states and even coma. The brain is not a PC.
“It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain.”
“It would be preferable” expresses wishful thinking. The word refers to subjective experience, which is subjective by definition, while you are looking at objective things instead.
No, “it’s preferable”, same as “you should”, is fine when there is a goal specified, e.g. “it’s preferable to do X, if you want Y”. Here, the goal is implicit—“not to have stupid beliefs”. Hopefully that’s a goal we all share.
By the way, “should” with implicit goals is quite common; you should be able to handle it. (Notice the second “should”. The implicit goal is now “to participate in normal human communication”.)
“Subjective perception” is opposite, in the relevant way, to “objective description.”
Suppose there were two kinds of things, physical and non-physical. This would not help in any way to explain consciousness, as long as you were describing the physical and non-physical things in an objective way. So you are quite right that subjective is not the opposite of physical; physicality is utterly irrelevant to it.
The point is that the word consciousness refers to subjective perception, not to any objective description, whether physical or otherwise.
Can you find another subjective concept that does not have an objective description? I’m predicting that we disagree about what “objective description” means.
Yes, I can find many others. “You seem to me to be currently mistaken,” does not have any objective description; it is how things seem to me. It is, however, correlated with various objective descriptions, such as the fact that I am arguing against you. But none of those things summarize the meaning, which is a subjective experience.
“No, physical things have objective descriptions.”
If a physical thing has a subjective experience, that experience does not have an objective description, but a subjective one.
You observed something interesting happening in your brain, you labeled it “consciousness”. You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”. You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”. Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in a robot similar to the things happening in my brain”—it would be obvious that you won’t get a yes/no answer. They’re similar in some ways and different in others.
But if you go back to the original question, you can’t rule out that the robot is fully conscious, despite having some physical differences. The point being that translating questions about consciousness into questions about brain activity and function (in a wholesale and unguided way) isn’t superior, it’s potentially misleading.
I can rule out that the robot is conscious, because the word “conscious” has very little meaning. It’s a label of an artificial category. You can redefine “conscious” to include or exclude the robot, but that doesn’t change reality in any way. The robot is exactly as “conscious” as you are “roboticious”. You can either ask questions about brain activity and function, or you can ask no questions at all.
I can rule out that the robot is conscious, because the word “conscious” has very little meaning.
To whom? To most people, it indicates having a first person perspective, which is something rather general. It seems to mean little to you because of your gerrymandered definition of meaning. Going only by external signs, consciousness might just be some unimportant behavioural quirks.
You can redefine “conscious” to include or exclude the robot, but that doesn’t change reality in any way.
The point is not to make it vacuously true that robots are conscious. The point is to use a definition of consciousness that includes its central feature: subjectivity.
You can either ask questions about brain activity and function, or you can ask no questions at all.
Says who? I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste. The fact that having consciousness gives you that kind of access is central.
What does “not having a first person perspective” look like?
gerrymandered definition of meaning
I find my definition of meaning (of statements) very natural. Do you want to offer a better one?
subjectivity
I think you use that word as equivalent to consciousness, not as a property that consciousness has.
I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste.
All of these things have perfectly good physical representations. All of them can be done by a fairly simple bot. I don’t think that’s what you mean by consciousness.
You observed something interesting happening in your brain, you labeled it “consciousness”.
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”.
Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don’t think that conscious minds other than their own exist.
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”.
That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.
Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in a robot similar to the things happening in my brain”—it would be obvious that you won’t get a yes/no answer. They’re similar in some ways and different in others.
I’m not sure that “is the robot conscious” is really equivalent to “are the things happening in a robot similar to the things happening in my brain”. It could be that some things happening in the robot’s brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot’s brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult—we don’t have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.
I would be willing to answer “no” to the “is the robot conscious” question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer), e.g. I suspect that animals have varying degrees of consciousness: non-human apes a fairly high degree, ants a low or zero degree, etc.
I don’t see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.
Let me say it differently. There is a category in your head called “conscious entities”. Categories are formed from definitions or by picking some examples and extrapolating (or both). I say category, but it doesn’t really have to be hard and binary. I’m saying that “conscious entities” is an extrapolated category. It includes yourself, and it excludes inanimate objects. That’s something we all agree on (even “inanimate objects” may be a little shaky).
My point is that this is the whole specification of “conscious entities”. There is nothing more to help us decide which objects belong to it, besides wishful thinking. Usually we choose to include all humans or all animals. Some choose to keep themselves as the only member. Others may want to accept plants. It’s all arbitrary. You may choose to pick some precise definition, based on something measurable, but that will just be you. You’ll be better off using another label for your definition.
That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer’s is conscious is not really in question—pretty much everyone on this thread has said that. It doesn’t follow that I should drop the term or a “use another label”; there is a common understanding of the term “conscious” that makes it useful even if we can’t know whether “X is conscious” is true in many cases.
it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer’s is conscious
There is a big gap between “difficult” and “impossible”. If a thing is “difficult to measure”, then you’re supposed to know in principle what sort of measurement you’d want to do, or what evidence you could in theory find, that proves or disproves it. If a thing is “impossible to measure”, then the thing is likely bullshit.
there is a common understanding of the term “conscious”
What understanding exactly? Besides “I’m conscious” and “rocks aren’t conscious”, what is it that you understand about consciousness?
If a thing is “impossible to measure”, then the thing is likely bullshit.
In the case of consciousness, we are talking about subjective experience. I don’t think that the fact that we can’t measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can’t get the answer to either of those things via measurement, but I don’t think that they are bullshit questions (albeit they are not particularly useful questions).
What understanding exactly? Besides “I’m conscious” and “rocks aren’t conscious”, what is it that you understand about consciousness?
In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.
You can’t get the answer to either of those things via measurement
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right? Again, there is a difference between difficult and impossible.
self-awareness and first-person experiences
Those sound like synonyms, not in any way more precise than the word “consciousness” itself.
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right?
To clarify: at the present you can’t obtain a person’s beliefs by measurement, just as at the present we have no objective test for consciousness in entities with a physiology significantly different from our own. These things are subjective but not unreal.
Those sound like synonyms, not in any way more precise than the word “consciousness” itself.
And yet I know that I have first person experiences and I know that I am self-aware via direct experience. Other people likewise know these things about themselves via direct experience. And it is possible to discuss these things based on that common understanding. So, there is no reason to stop using the word “consciousness”.
Did you mean, “at present subjective”? Because if something is objectively measurable then it is objective. Are these things both subjective and objective? Or will we stop being conscious when we get a better understanding of the brain?
I know that I have first person experiences and I know that I am self-aware via direct experience.
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
Did you mean, “at present subjective”? Because if something is objectively measurable then it is objective. Are these things both subjective and objective?
To clarify, consciousness is a subjective experience, or more precisely it is the ability to have (subjective) first person experiences. Beliefs are similarly “in the head of the believer”. Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
Interesting questions. It seems to me that self awareness is a first person experience, so I am doubtful that you could have self awareness without the ability to have first person experiences. I don’t think that they are different words for the same thing though—I suspect that there are first-person experiences other than self awareness. I don’t see how my argument or yours depends on whether or not first-person experiences and self-awareness are the same; do you ask the questions for any particular reason, or did you just find them to be interesting questions?
Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Suppose, as a thought experiment, that these things become measurable tomorrow. You said that beliefs are subjective. But how can a thing be both subjective and objectively measurable? Do beliefs stop being subjective the moment measurement becomes possible?
do you ask the questions for any particular reason
I ask them because I wanted you to play rationalist taboo (for “consciousness”), and I’m trying to decide if you succeeded or failed. I think “self awareness” could be defined as “thoughts about self” (although I’m not sure that’s what you meant). But “first person experiences” seems to be a perfect synonym for “consciousness”. Can you try again?
But how can a thing be both subjective and objectively measurable? Do beliefs stop being subjective the moment measurement becomes possible?
It is possible that there is some objective description which is 100% correlated with a subjective experience. If there is, and we are reasonably sure that it is, we would be likely to call the objective measurement a measurement of subjective experience. And it might be that the objective thing is factually identical to the subjective experience. But “this objective description is true” will never have the same meaning as “someone is having this subjective experience,” as I explained earlier.
I ask them because I wanted you to play rationalist taboo (for “consciousness”), and I’m trying to decide if you succeeded or failed. I think “self awareness” could be defined as “thoughts about self” (although I’m not sure that’s what you meant). But “first person experiences” seems to be a perfect synonym for “consciousness”. Can you try again?
Note that anyone who brings up another description which is not a synonym for “consciousness,” is not explaining consciousness, but something else. Any explanation which is actually an explanation of consciousness, and not of something else, should have the same meaning as “consciousness.”
That’s the general problem with your game of “rationalist taboo.” In essence, you are saying, “These words seem capable of expressing your position. Avoid all the words that could possibly express your position, and then see what you have to say.” Sorry, but I decline to play.
That’s the general problem with your game of “rationalist taboo.”
I can briefly explain “banana” as “bent yellow fruit”. Each of those words has a clear meaning when separated from the others. They would be meaningful even if bananas didn’t exist. On the other hand, “first person experiences” isn’t like that. There are no “third person experiences” that I’m aware of. Likewise, the only “first person” thing is “experience”. And there can be no experiences if there is no consciousness.
There are no “third person experiences” that I’m aware of.
There are no third person experiences that you have first person experiences of. But anyone else’s first person experiences will be third person experiences for you.
Likewise, the only “first person” thing is “experience”.
This is like saying that “thing” must be meaningless because the only things that exist are things. Obviously, if you keep generalizing, you will come to something most general. That does not mean it is meaningless. I would agree that we might use “experience” for the most general kind of subjective thing. But there are clearly more specific subjective things, notably like the example of feeling pain.
But anyone else’s first person experiences will be third person experiences for you.
Wow, now you’re not just assuming that consciousness exists, but that there is more than one.
This is like saying that “thing” must be meaningless because the only things that exist are things.
“Thing” is to some extent a grammatical placeholder. Everything is a thing, and there are no properties that every “thing” shares. I wouldn’t know how to play rationalist taboo for “thing”, but this isn’t true for most words, and your arguments that this must be true for “consciousness” or “experience” are pretty weak.
But there are clearly more specific subjective things, notably like the example of feeling pain.
Nobody is disagreeing. If, in another context, I asked for an explanation of “pain”, saying “experience of stubbing your toe” would be fine.
Wow, now you’re not just assuming that consciousness exists, but that there is more than one.
I am not “assuming” that consciousness exists; I know it from direct experience. I do assume that other people have it as well, because they have many properties in common with me and I expect them to have others in common as well, such as the fact that the reason I say I am conscious is that I am in fact conscious. If other people are not conscious, they would be saying this for a different reason, and there is no reason to believe that. You can certainly imagine coming to the opposite conclusion. For example, I know a fellow who says that when he was three years old, he thought his parents were not conscious beings, because their behavior was too different from his own: e.g. they do not go to the freezer and get the ice cream, even though no one is stopping them.
Nobody is disagreeing. If, in another context, I asked for an explanation of “pain”, saying “experience of stubbing your toe” would be fine.
This means you should know what the word “experience” means. In practice you are pretending not to know what it means.
Are you sure? I don’t know how to interpret your “In practice you are pretending not to know what it means”, if you do. Pretending is how the game works.
I have already said why I will not play.
No one can force you, if you don’t want to. But your arguments that there is something wrong with the game are weak.
I don’t know how to interpret your “In practice you are pretending not to know what it means”, if you do. Pretending is how the game works.
You should interpret it to mean what it says, namely that in practice you have been pretending not to know what it means. If pretending is how the game works, and you are playing that game, then it is not surprising that you are pretending. Nothing complicated about this.
Perhaps your objection is that I should not have said it in an accusatory manner. But the truth is that it is rude to play that game with someone who does not want to play, and I already explained that I do not, and why.
No one can force you, if you don’t want to. But your arguments that there is something wrong with the game are weak.
You certainly haven’t provided any refutation of my reasons for that. Once again, in essence you are saying, “describe conscious experience from a third person point of view.” But that cannot be done, even in principle. If you describe anything from a third person point of view, you are not describing a personal experience. So it would be like saying, “describe a banana, but make sure you don’t say anything that would imply the conclusion that it is a kind of fruit.” A banana really is a fruit, so any description that cannot imply that it is, is necessarily incomplete. And a pain really is a subjective feeling, so any description which does not include subjectivity or something equivalent cannot be a description of pain.
Once again, in essence you are saying, “describe conscious experience from a third person point of view.”
I don’t think I actually said something like that. I’m just asking you to describe “conscious experience” without the words “conscious” and “experience”. You expect that I will reject every description you could offer, but you haven’t actually tried any. If you did try a few descriptions and I did find something wrong with each of them (which is not unlikely), your arguments would look a lot more serious.
But now I can only assume that you simply can’t think of any such descriptions. You see, “I don’t want to play” is different from “I give up”. I think you’re confusing them.
A banana really is a fruit, so any description that cannot imply that it is, is necessarily incomplete.
All descriptions are incomplete. You just have to provide a description that matches bananas better than it matches apples or sausages. A malicious adversary can always construct some object which would match your description without really being a banana, but at some point the construction will have to be so long and bizarre and the difference so small that we can disregard it.
Obviously, if someone says “ouch” because he wishes to deceive you that he is feeling pain, pain will not be the wish to deceive someone that he is feeling pain.
Again, all descriptions are incomplete. “What makes someone say ouch” is quite accurate considering its length.
You expect that I will reject every description you could offer, but you haven’t actually tried any. If you did try a few descriptions and I did find something wrong with each of them (which is not unlikely), your arguments would look a lot more serious.
There is a reason I expect that. Namely, you criticized a proposed definition on the grounds that it was “synonymous” with consciousness. But that’s exactly what it was supposed to be: we are talking about consciousness, not something else. So any definition I propose is going to be synonymous or extremely close to that; otherwise I would not propose it.
But now I can only assume that you simply can’t think of any such descriptions.
Your assumption is false. Let’s say “personal perception.” Obviously I can anticipate your criticism, just as I said above.
All descriptions are incomplete. You just have to provide a description that matches bananas better than it matches apples or sausages. A malicious adversary can always construct some object which would match your description without really being a banana, but at some point the construction will have to be so long and bizarre and the difference so small that we can disregard it.
If your description of a banana does not suggest that it is fruit, your description will be extremely incomplete, not just a little incomplete. In the same way, if a description of consciousness does not imply that it is subjective, it will be extremely incomplete.
Again, all descriptions are incomplete. “What makes someone say ouch” is quite accurate considering its length.
The point is that you are ignoring what is obviously central to the idea of pain, which is the way it feels.
So any definition I propose is going to be synonymous or extremely close to that; otherwise I would not propose it.
Again you confirm that you don’t understand what the game of taboo is (rationalist or not). “Yellow bent fruit” is not a synonym of “banana”.
“personal perception.”
My criticism is that this description obviously matches a roomba. It can definitely perceive walls (it can become aware of them through sensors) and I don’t see why this perception wouldn’t be personal (it happens completely within the roomba), although I suspect that this word might mean something special for you. Now, as I say this, I assume that you don’t consider the roomba conscious. If you do, then maybe I have no criticisms.
Is that the criticism you anticipated?
If your description of a banana does not suggest that it is fruit, your description will be extremely incomplete
I don’t know what sort of scale of incompleteness you have. Actually, there could be an agent who can recognize bananas exactly as well as you, without actually knowing whether they grow on plants or are made in factories. A banana has many distinctive properties, growing on plants is not the most important one.
The point is that you are ignoring what is obviously central to the idea of pain, which is the way it feels.
How does it feel? It feels bad, of course, but what else?
I don’t think that a roomba notices or perceives anything.
Why do you not think that? If there is something I’m not getting about that word, try making your taboo explanation longer and more precise.
By the way, I have some problems with “subjective”. There is a meaning that I find reasonable (something similar to “different” or “secret”), and there is a meaning that exactly corresponds to consciousness (I can just replace the “subjectively” in your last post with “consciously” and lose nothing). Try not to use it either.
Among other things, it usually feels a bit like heat. Why do you ask?
More specifically I want to know, of all the feelings that you are capable of, how do you recognize that the feeling that follows stubbing your toe is the one that is pain? What distinctive properties does it have?
Off topic, does it really feel like heat? I’m sweating right now, and I don’t think that’s very similar to pain. Of course, getting burned causes pain. Also, hurting yourself can produce swelling, which does feel warm, so that’s another way to explain your association.
I could say that a roomba is a mere machine, but you would probably object that this is just saying it is not conscious. Another way to describe this, in this particular context, is that the roomba’s actions do not constitute a coherent whole, while “perception” is a single coherent activity, and therefore conscious.
As I said, I’m not playing your game anyway, and I feel no obligation to describe what I think in your words rather than mine, especially since you know quite well what I am talking about here, even if you pretend to fail to understand.
More specifically I want to know, of all the feelings that you are capable of, how do you recognize that the feeling that follows stubbing your toe is the one that is pain?
By recognizing that it is similar to the other feelings that I have called pain. It absolutely is not by verbally describing how it feels or anything else, even if I can do so if I wish. That is true of all words: when we recognize that something is a chair or a lamp, we simply immediately note that the thing is similar to other things that we have called chairs or lamps. We do not need to come up with some verbal description, and especially some third person description, as you were fishing for there, in order to see that the thing falls into its category.
I’m sweating right now, and I don’t think that’s very similar to pain. Of course, getting burned causes pain.
It is not just that getting burned causes pain, but intense pain also feels similar to intense heat. Sweating is not an intense case of anything, so there wouldn’t be much similarity.
Also, hurting yourself can produce swelling, which does feel warm, so that’s another way to explain your association.
I am talking about how it feels at the time, not afterwards. And the “association” does not need to be explained by anything except how it feels at the time, not by any third person description like “this swelled up afterwards.”
but you would probably object that this is just saying it is not conscious
I would also object by saying that a human is also a “mere machine”.
the roomba’s actions do not constitute a coherent whole
I have no idea what “coherent whole” means. Is the roomba incoherent in some way?
you know quite well what I am talking about here
At times I honestly don’t.
By recognizing that it is similar to the other feelings that I have called pain.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
Sweating is not an intense case of anything, so there wouldn’t be much similarity.
I think you misunderstood me. Sweating is what people do when they’re hot. I’m saying that pain isn’t really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
I would also object by saying that a human is also a “mere machine”.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious or aware of what it is doing, and consequently it does not perceive anything, because “perceiving” includes being conscious and being aware.
In that way, humans are not mere machines, because they are conscious beings that are aware of what they are doing and they perceive things.
I have no idea what “coherent whole” means. Is the roomba incoherent in some way?
The human performs the unified action of “perceiving” and we know that it is unified because we experience it as a unified whole. The roomba just has each part of it moved by other parts, and we have no reason to think that these form a unified whole, since we have no reason to think it experiences anything.
In all of these cases, of course, the situation would be quite different if the roomba was conscious. Then it would also perceive what it was doing, it would not be a mere machine, and its actions would be unified.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
The mind does the work of recognizing similarity for us. We don’t have to give a verbal description in order to recognize similarity, much less a third person description, as you are seeking here.
I’m saying that pain isn’t really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious
Oh, so “mere machine” is just a pure synonym of “not conscious”? Then I guess you were right about what my problem is. Taboo or not, your only argument why the roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
The roomba just has each part of it moved by other parts
Are you implying that humans do not have parts that move other parts?
The mind does the work of recognizing similarity for us.
No, you misunderstood my question. I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
You’re wrong.
Maybe one of us is really a sentient roomba, pretending to be human? Who knows!
Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to “parts moved by other parts”? It’s not just that some arrangements of matter are conscious and others are not?
If there are parts, there is also a whole. A whole is not the same as parts. So if you mean by “reductionism” that there are only parts and no wholes, then reductionism is false.
If you mean by reductionism that a thing is made of its parts rather than made of its parts plus one other part, then reductionism is true: a whole is made out of its parts, not of the parts plus another part (which would be redundant and absurd). But it is made “out of” them—it is not the same as the parts.
Oh, so “mere machine” is just a pure synonym of “not conscious”?
No. It also means not any other thing similar to consciousness, even if not exactly consciousness.
Taboo or not, your only argument why the roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
My reason is that we have no reason to think that a roomba is conscious.
I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
It also means not any other thing similar to consciousness, even if not exactly consciousness.
I have no idea what that means (a few typos maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren’t identical. But if there is some difference between “mere machine” and “unconscious machine”, you have to point it out for me.
My reason is that we have no reason to think that a roomba is conscious.
Hypothetically, what could a reason to think that a robot is conscious look like?
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
“Pain” is a word and humans aren’t born knowing it. What does “no extra step” even mean? There are a few obvious steps. You have this habit of claiming something to be self-evident, when you’re clearly just confused.
I have no idea what that means (a few typos maybe?).
No typos. I meant we know that there are two kinds of things: objective facts and subjective perceptions. As far as anyone knows, there could be a third thing intermediate between those (for example.) So the robot might have something else that we don’t know about.
Hypothetically, what could a reason to think that a robot is conscious look like?
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
You have this habit of claiming something to be self-evident, when you’re clearly just confused.
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
Why is this a probable reason? You have one data point—yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?
If a robot has hands with five fingers, that will also be evidence that it is conscious. This is how induction works; similarity in some properties is evidence of similarity in other properties.
I perform many human behaviors because I am conscious. So the fact that the robot performs similar behaviors is inductive evidence that it performs those behaviors because it is conscious. This does not apply to the number of fingers, which is only evidence by correlation.
I perform many human behaviors because I am conscious.
Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn’t just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?
Why do you think that there is a causal relationship between having consciousness and behavior?
I am conscious. The reason why I wrote the previous sentence is because I am conscious. As for how I know that this statement is true and I am not just a passive observer, how do you know you don’t just agree with me about this whole discussion, and you are mechanically writing statements you don’t agree with?
Are you sure that consciousness isn’t just a passive observer?
Yes, for the above reason.
Also, why do you think that there is no causal relationship between having consciousness and five fingers?
In general, because there is no reason to believe that there is. Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
The reason why I wrote the previous sentence is because I am conscious.
That’s just paraphrasing your previous claim.
how do you know you don’t just agree with me about this whole discussion, and you are mechanically writing statements you don’t agree with?
I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I then identify “myself” with this mind, rather than the one that’s mute.
Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
It is not just paraphrasing. It is giving an example of a particular case where it is obviously true.
Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own.
Nonsense. Google could easily add a module to Google Translate that would convert a statement into its opposite. That would not give Google Translate a mind of its own.
I then identify “myself” with this mind, rather than the one that’s mute.
Nope. You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
Obviously I do not take this seriously, but I take it just as seriously as the claim that my consciousness does not cause me to say that I am conscious.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
I replied with an example, namely that I say I am conscious precisely because I am conscious. I do not need to argue for this, and I will not.
Google could easily add a module to Google Translate that would convert a statement into its opposite.
No, Google could maybe add “not” before every “conscious”, in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.
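For concreteness, here is a minimal sketch of the kind of naive “not”-insertion being described (in Python; the function name and pattern are purely hypothetical, not anything Google actually ships). Note that it only patches surface wording and does nothing to keep the speaker’s other beliefs consistent:

```python
import re

def negate_consciousness_claims(text: str) -> str:
    # Insert "not" before claims of consciousness in a grammatically
    # correct way, e.g. "I am conscious" -> "I am not conscious".
    # This flips the surface wording only; no other beliefs are altered,
    # so the resulting set of claims need not be consistent.
    return re.sub(r"\b(am|is|are)(\s+)conscious\b", r"\1\2not conscious", text)

print(negate_consciousness_claims("I am conscious, and I know it."))
# -> "I am not conscious, and I know it."
```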
You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
What is “you” in this sentence? The mute mind identifies with the mute mind, and the translation process identifies with the translation process.
I say I am conscious precisely because I am conscious.
There are possible reasons for saying you are conscious, other than being conscious. A tape recorder can also say it is conscious. Saying something doesn’t make it true.
There are possible reasons for saying you are conscious, other than being conscious.
Yes. I have pointed this out myself. This does not suggest in any way that I have such a reason, other than being conscious.
A tape recorder can also say it is conscious.
Exactly. This is why tests like “does it say it is conscious?” or any other third person test are not valid. You can only notice that you yourself are conscious. Only a first person test is valid.
Saying something doesn’t make it true.
Exactly, and you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.
you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.
Let’s try another situation. Imagine two people in sealed rooms. You press a button and both of them scream in pain. However you know that only the first person is really suffering, while the second one is pretending and the button actually gives him pleasure. The two rooms have the same reaction to pressing the button, but the moral value of pressing the button is different. If you propose an AI that ignores all such differences in principle, and assigns moral value only based on external behavior without figuring out the nature of pain/pleasure/other qualia, then I won’t invest in your AI because it will likely lead to horror.
Hence the title “steelmanning the Chinese room argument”. To have any shot at FAI, we need to figure out morality the hard way. Playing rationalist taboo isn’t good enough. The hope of reducing all morally relevant properties (not just consciousness) to outward behavior is just that—a hope. You have zero arguments why it’s true, and the post gives several arguments why it’s false. Don’t bet the world on it.
However you know that only the first person is really suffering <...>
Let’s pause right there. How do you know it? Obviously, you know it by observing evidence for past differences in behavior. This, of course, includes being told by a third party that the rooms are different and other forms of indirect observations.
<...> an AI that ignores all such differences in principle <...>
If the AI has observed evidence for the difference between the rooms then it will take it into account. If the AI has not observed any difference then it will not. The word “ignore” is completely inappropriate here. You can’t ignore something you can’t know. Its usage here suggests that you expect there is some type of evidence that you accept, but the AI wouldn’t. Is that true? Maybe you expect the AI to have no long term memory? Or maybe you think it wouldn’t trust what people tell it?
You assume that all my knowledge about humans comes from observing their behavior. That’s not true. I know that I have certain internal experiences, and that other people are biologically similar to me, so they are likely to also have such experiences. That would still be true even if the experience was never described in words, or was impossible to describe in words, or if words didn’t exist.
You are right that communicating such knowledge to an AI is hard. But we must find a way.
You may know about being human, but how does that help you with the problem you suggested? You may know that some people can fake screams of pain, but as long as you don’t know which of the two people is really in pain, the moral action is to treat them both the same. What else can you do? Guess?
The knowledge that “only the first person is really suffering” has very little to do with your internal experience, it comes entirely from real observation and it is completely sufficient to choose the moral action.
At best, “X is conscious” means “X has behaviors in some sense similar to a human’s”.
I’m trying to show that’s not good enough. Seeing red isn’t the same as claiming to see red, feeling pain isn’t the same as claiming to feel pain, etc. There are morally relevant facts about agents that aren’t reducible to their behavior. Each behavior can arise from multiple internal experiences, some preferable to others. Humans can sometimes infer each other’s experiences by similarity, but that doesn’t work for all possible agents (including optimized uploads etc) that are built differently from humans. FAI needs to make such judgments in general, so it will need to understand how internal experience works in general. Otherwise we might get a Disneyland with no children, or with suffering children claiming to be happy. That’s the point of the post.
You could try to patch the problem by making the AI create only agents that aren’t too different from biological humans, for which the problem of suffering could be roughly solved by looking at neurons or something. But that leaves the door open to accidental astronomical suffering in other kinds of agents, so I wouldn’t accept that solution. We need to figure out internal experience the hard way.
A record player looping the words “I see red” is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color, and if that’s red, plays the same “I see red” sound, is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.
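As a minimal sketch of that second robot (assuming Python and the Pillow imaging library; the file name and the exact “redness” test are hypothetical, since the comment above doesn’t specify them):

```python
from collections import Counter
from PIL import Image  # Pillow, an assumed dependency

def most_common_color(path):
    # Find the single most common pixel color in the picture.
    img = Image.open(path).convert("RGB")
    color, _count = Counter(img.getdata()).most_common(1)[0]
    return color

def looks_red(color):
    # A crude, hypothetical threshold: red dominates the other channels.
    r, g, b = color
    return r > 128 and r > 1.5 * max(g, b)

if looks_red(most_common_color("scene.png")):
    print("I see red")  # stand-in for playing the recorded sound
```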
You may feel that pain is special, and that if we recognize a robot which says “ouch” when pushed as feeling pain, that would be in some sense bad. But it wouldn’t. We already recognize that different agents can have equally valid experiences of pain, that aren’t equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important, is not a good solution.
You may feel that pain is special, and that if we recognize a robot which says “ouch” when pushed as feeling pain, that would be in some sense bad. But it wouldn’t. We already recognize that different agents can have equally valid experiences of pain, that aren’t equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family).
I don’t see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say “ouch” when pushed feels pain. Can you clarify that inference?
suggesting that some agents have a magical invisible property that makes their experiences important, is not a good solution
I don’t see anything magical about consciousness—it is something that is presumably nearly universally held by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don’t as-of-yet have an objective metric for consciousness in others does not make it magical.
it is reasonable to recognize that a robot that is programmed to say “ouch” when pushed feels pain
No, I’m saying that “feels pain” is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that “feels pain” is very different from “deserves human rights”.
no one on this thread has suggested a supernatural explanation for it
No one has suggested any explanation for it at all. And I do use “magical” in a loose sense.
Move a human from one internal state to another one that they prefer. “Preference” is not without its own complications, but it’s a lot more general than “pain”.
To be clear, the concept of pain, when applied to humans, mammals, and possibly most animals, can be meaningful. It’s only a problem when we ask whether robots feel pain.
I’m with EntirelyUseless. You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word “pain”.
There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, “purple is bitter”, the first way is clearly more appropriate. If we look at “robot feels pain”, it’s hard for me to tell which way I prefer.
Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated.
Here is my claim that “robot feels pain” is a meaningless statement. More generally, a question is meaningless if an answer to it transfers no information about the real world. I can answer “is purple bitter” either way, and that would tell you nothing about the color purple. Likewise, I could answer “does this robot feel pain” and that would tell you nothing about the robot or what you should do with it. At best, a “yes” would mean that the robot can detect pressure or damage, and then say “ouch” or run away. But that’s clearly not the kind of pain we’re talking about.
More generally, a question is meaningless if an answer to it transfers no information about the real world.
Since you are equating reality with objectivity, you are simply declaring statements about subjectivity meaningless by fiat.
I can answer “is purple bitter” either way, and that would tell you nothing about the color purple
That’s because it is a category error.
Likewise, I could answer “does this robot feel pain” and that would tell you nothing about the robot or what you should do with it.
Of course it tells me what I should do. It’s ethically relevant if a robot feels pain. If it feels pain when damaged, I should not damage it.
At best, a “yes” would mean that the robot can detect pressure or damage, and then say “ouch” or run away. But that’s clearly not the kind of pain we’re talking about.
How do you know? You are assuming that no robot can have a real subjective sensation of pain, and you have no way of knowing that one way or the other, and your arguments are question begging and inconsistent. (If “robots do (not) feel pain” is meaningless, as you sometimes say, it cannot also be false, as you sometimes also say.)
Since you are equating reality with objectivity, you are simply declaring statements about subjectivity meaningless by fiat.
I’m claiming that all subjective experiences have objective descriptions. Please give an example of a subjective experience, other than consciousness, that has no physical evidence. Obviously, I will try to argue that either there is something objective you missed, or that the subjective experience is as poorly defined as consciousness.
<“bitter purple”> is a category error.
But “robot pain” isn’t? How did you come to those conclusions?
if it feels pain when damaged, I should not damage it.
That’s not how this works. Rats feel pain without a doubt, but we destroy them quite freely. Whether you will damage the robot is decided by many factors. E.g. if there is some benefit to the damage, if the robot will scream out in pain, or if it’s likely to damage you in return. The robot’s subjective experience of pain only matters if you decide that it matters—this is true for all categories, no matter how artificial.
How do you know?
Are you asking about the “at best” part? Because the rest of that sentence seems quite mundane. Here “at best” is about the limits of my own imagination. You’re welcome to suggest something better.
If “robots do (not) feel pain” is meaningless, as you sometimes say, it cannot also be false, as you sometimes also say.
That’s not contradictory, even if slightly inconsistent. “is purple bitter” is a meaningless question, but the answer “no” is clearly a lot more appropriate than “yes”. The line between falsehood and nonsense is quite blurry. I think we can freely call all nonsense statements false, without any negative consequences.
That’s not how this works. Rats feel pain without a doubt, but we destroy them quite freely. Whether you will damage the robot is decided by many factors.
That’s not how it works either. You can’t infer zero moral relevance of some factor by noting that other factors can countervail.
The robot’s subjective experience of pain only matters if you decide that it matters
I’m not morally omniscient. The robot’s experience of pain matters if it features in some scheme of ideal moral reasoning. To put it another way, you just proved that nothing is morally relevant, if you proved anything at all.
Are you asking about the “at best” part? Because the rest of that sentence seems quite mundane. Here “at best” is about the limits of my own imagination.
Well, you do seem to have a subjective intuition that robots will never feel pain. Others intuit differently. What happened to all the science stuff?
The robot’s experience of pain matters if it features in some scheme of ideal moral reasoning.
Gosh, I really don’t want to start talking about morality now. But I have to point out that the “bitterness of purple” can also matter, if it features in some scheme of ideal moral reasoning. At least if you accept that this moral reasoning could require arbitrary concepts and not just ones grounded in reality.
Well, you do seem to have a subjective intuition that robots will never feel pain.
No, I ran a deterministic procedure in my brain, called “is X well defined”, on “robot pain”, and it returned “no”. It’s only subjective in the sense that mine is different from yours, if you have such a procedure at all. The procedure, by the way, works by searching for alternative definitions of things, such that the given concept is neither trivial nor stupid. Unfortunately, failure to find such definitions does not produce a proof of non-existence, so I’m quite open to the idea that I missed something, it’s just that you inspire little confidence.
I did not mean to imply that ideal moral reasoning is weird and unguessable... only that you should not take imperfect moral reasoning (whose?) to be the last word. The idea that deliberately causing pain is wrong is not contentious, and you don’t actually have an argument against it.
It’s only subjective in the sense that mine is different from yours
That’s not a very interesting sense. Is height also subjective, since we are not equally tall? This sense is also very far from the magical “subjective experience” you’ve used. I guess the problematic word in that phrase is “experience”, not “subjective”?
Height is not a subjective judgement because it is not a judgement. If judgements are going to vary, that matters, because then who knows what the truth is?
I’m claiming that all subjective experiences have objective descriptions.
I would say that almost none have descriptions, where description means description. But it sounds as though you might actually be talking about physical correlates.
Please give an example of a subjective experience, other than consciousness, that has no physical evidence.
I can’t make much sense of that, since all subjective experiences occur within consciousness.
I don’t know why you think I am denying that changes in consciousness have correlations in physical activity.
I am pointing out that we cannot determine much about conscious subjective states from external physical evidence, because we don’t know how to work back from the one to the other. We can’t recover the richness of conscious experience from externals. But we know it is there, in ourselves at least, because being conscious means having access to your own conscious experience. You are putting the blame on consciousness itself, saying it is a nothingy thing, when the problem is your techniques.
The requirement that everything be rooted in (external) reality in order to be meaningful is unreasonable because, in cases like this, it requires you to have a sort of omniscience before you can talk at all. (It’s fine to define temperature as what thermometers measure once you have accurate thermometers).
you might actually be talking about physical correlates.
You see, I’m proposing the radical new view that the world is made of atoms and other “stuff”, and that most words refer to some configurations of this stuff. In this view “pain” doesn’t just correlate with some brain activity, it is that brain activity. The brain activity of pain is an objective fact, and if you were to describe that objective fact, you would get an objective description. In this view, the existence of human pain is as real as the existence of chairs. But the question “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
I am pointing out that we cannot determine much about conscious subjective states from external physical evidence, because we don’t know how to work back from the one to the other.
I’m pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as “ability to process external events” or “ability to generate thoughts” or “the process that makes some people say they’re conscious”, finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions, you call them correlates, which suggests that there could be a consciousness that satisfies none of them. I’m pointing out that even if you had complete knowledge about everything going on in a particular brain, you still wouldn’t be able to tell if it is conscious, because your concept of consciousness is broken.
It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter, and also without denying that it eventually does. Treating pain semantically as some specific brain activity buys you nothing in terms of the ability to communicate and understand when you don’t know which precise kind... which you don’t. If Purple and Bitter are both Brain Activity Not Otherwise Specified, they are the same. If you can solve the mind-body problem, then you will be in a position to specify the different kinds of brain activity they are. But you can also distinguish them, here and now, using the subjectively obvious difference. And without committing yourself to evil dualism.
It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter
I have never claimed otherwise. In fact, there is literally nothing that I have an exact description of, in terms of matter—neither pain nor chairs. But you have to know something. I know that “chair is what I sit on” and from that there is a natural way to derive many statements about chairs. I know that “gravity is what makes things fall down”, and from that there is a fairly straightforward way to the current modern understanding of gravity. There is nothing that you know about consciousness, from which you can derive a more accurate and more material description.
Treating pain semantically as some specific brain activity buys you nothing
It buys me the ability to look at “do robots feel pain” and see that it’s a stupid question.
I see a model that claims to reproduce some of the behaviors of the human mind. Why is that relevant? Where are your subjective experiences in it?
Also, to clarify, when I say “you know nothing”, I’m not asking for some complex model or theory, I’m asking for the starting point from which those models and theories were constructed.
prove that it is a stupid question.
Proof is a high bar, and I don’t know how to reach it. You could teach me by showing a proof, for example, that “is purple bitter” is a stupid question. Although I suspect that I would find your proof circular.
Well, for one, you have been unwilling to share any such knowledge. Is it a secret, perhaps?
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
Where are your subjective experiences in it?
I was responding to your claim that “there is nothing that you know about consciousness, from which you can derive a more accurate and more material description.” This has been done, so that claim was false. You have shifted the ground.
that “is purple bitter” is a stupid question.
Purple is a colour, bitter is taste, therefore category error.
Proof is a high bar
Then why be so sure about things? Why not say “dunno” to “can robots feel pain”?
While GWT is a model, it’s not a model of consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it. I ask you if it has subjective experiences, because that seems to be the most important aspect of consciousness to you. If you can’t find them in this model, then the model is on my side, not yours.
Purple is a colour, bitter is taste, therefore category error.
That’s ridiculous. Grapefruit is a fruit, bitter is taste, but somehow “grapefruit is bitter” is true and not a category error.
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
<...>
Then why be so sure about things?
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple. Maybe we mean different things when we say “proof”?
While GWT is a model, it’s not a model of consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it
That’s still an irrelevant objection. The issue is whether the concept of consciousness can be built on and refined, or whether it should be abandoned. GWT shows that it can be built on, and it is unreasonable to demand perfection.
That’s ridiculous. Grapefruit is a fruit, bitter is taste, but somehow “grapefruit is bitter” is true and not a category error.
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple.
Is that worse than saying you know things you don’t know?
Sometimes different people use the same words to mean different things. I deduce that GWT does not build on consciousness as you understand it, because it doesn’t have the most important feature to you. It builds on consciousness as I understand it. How is that irrelevant?
Is that worse than saying you know things you don’t know?
You mean, is saying “dunno” to everything worse than saying something is true without having absolute 100% confidence? Yes. What kind of question is that?
Also, why did you quote my “category error” response? This doesn’t answer that at all.
But the question “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
Why is the question “are tables also chairs” not meaningful? Structured knowledge databases like Wikidata have to answer that question.
Imagine that a country has a general tariff for furniture and there’s a tariff exemption for chairs. One clever businessman who sells tables starts to say that his tables are chairs. In that case, the question can become important enough that a large sum of money is spent on a legal process to answer it.
This seems like a good comment to illustrate, once again, your abuse of the idea of meaning.
I’m proposing the radical new view that the world is made of atoms and other “stuff”, and that most words refer to some configurations of this stuff.
There are two ways to understand this claim: 1) most words refer to things which happen also to be configurations of atoms and stuff. 2) most words mean certain configurations of atoms.
The first interpretation would be fairly sensible. In practice you are adopting the second interpretation. This second interpretation is utterly false.
Consider the word “chair.” Does the word chair mean a configuration of atoms that has a particular shape that we happen to consider chairlike?
Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms, but was a continuous substance without any boundaries in it. Would you suddenly say that it was not a chair? Not at all. You would say “this chair is not made of atoms.” This proves conclusively that the meaning of the word chair has nothing whatsoever to do with “a configuration of atoms.” A chair is in fact a configuration of atoms; but this is a description of a thing, not a description of a word.
In this view “pain” doesn’t just correlate with some brain activity, it is that brain activity.
This could be true, if you mean this as a factual statement. It is utterly false, if you mean it as an explanation of the word “pain,” which refers to a certain subjective experience. The word “pain” is not about brain activity in the same way that the word “chair” is not about atoms, as explained above.
But the question “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
I would just note that “are tables also chairs” has a definite answer, and is quite meaningful.
I’m pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as “ability to process external events” or “ability to generate thoughts” or “the process that makes some people say they’re conscious”, finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions, you call them correlates, which suggests that there could be a consciousness that satisfies none of them.
I would say that being a chair (according to the meaning of the word) is correlated with being made of atoms. It may be perfectly correlated in fact; there may be no chair which is not made of atoms, and it may be factually impossible to find or make a chair which is not. But this is a matter for empirical investigation; it is not a matter of the meaning of the word. The meaning of the word is quite open to the possibility that there is a chair not made of atoms. In the same way, the meaning of the word “consciousness” refers to a subjective experience, not to any objective description, and consequently in principle the meaning of the word is open to application to a consciousness which does not satisfy any particular objective description, as long as the subjective experience is present.
Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms
I explicitly added “other stuff” to my sentence to avoid this sort of argument. I don’t want or need to be tied to current understanding of physics here.
But even if I had only said “atoms”, this would not be a problem. After seeing a chair that I previously thought was impossible, I can update what I mean by “chair”. In the same but more mundane way, I can go to a chair expo, see a radical new design of chair, and update my category as well. The meaning of “chair” does not come down from the sky fully formed; it is constructed by me.
I would just note that “are tables also chairs” has a definite answer, and is quite meaningful.
For one thing (not the only thing), chairs are things that are normally used for sitting. Tables are not normally used for sitting, so they are not chairs. Nothing arbitrary about that reasoning.
Where do those definitions come from? Do you know what “arbitrary” means? By the way, I have chairs that I have never sat on, and there are tables I’ve sat on quite a bit. What is “normally”?
The meaning of words comes from people’s usage (which is precisely why words do not mean anything like what you think they do.)
Do you know what “arbitrary” means?
Yes.
What is “normally”?
The vast majority of tables are rarely or never sat on. The table in my house has never been sat on. The vast majority of chairs are frequently sat on, like the ones in my house. It may not be the only normal thing, but certainly what happens in the vast majority of cases is normal.
Also, I said “for one thing.” Even if people normally sat on tables, they would not be chairs, because they do not have the appropriate structure, just as benches are not chairs.
Also, I’d point out that what I mean by “chair” is not equivalent to people’s usage. You could call it “reverse engineered” from people’s usage. There are some differences. Do you know where those come from?
Obviously I don’t even know how most people use those words—I only know about my acquaintances and people on TV, I could be living in a bubble, I could be using many words wrong.
benches are not chairs.
Stools are chairs, but benches are just wide stools. So if I have a small table (such as a coffee table), and use it for sitting, it’s not a bench, it’s a stool and therefore a chair?
In case it’s not obvious what I’m doing, I intend to ask you these stupid questions until you realize that they are stupid questions, that they don’t matter and that the correct way to answer them is to pull answers out of your ass (i.e. arbitrarily).
Roughly speaking, because if one performed factor analysis on their life experiences, they would have factors more or less corresponding to the words they use.
There are some differences. Do you know where those come from?
Yes, largely from your own experience of the usage contexts of the word chair, which as you say could be somewhat different from the overall usage patterns, although it is unlikely that there are large differences.
So if I have a small table (such as a coffee table), and use it for sitting, it’s not a bench, it’s a stool and therefore a chair?
No. As I said before, “for one thing.” There are still reasons why a coffee table would not suddenly become a stool, even if in fact you use it for that.
I intend to ask you these stupid questions until you realize that they are stupid questions, that they don’t matter and that the correct way to answer them is to pull answers out of your ass (i.e. arbitrarily).
That’s incorrect, so I won’t realize that no matter how many such questions you ask.
That said, it is true that we extend words to additional things when we think the new things are similar enough to the old things.
The problem of “consciousness” is that we have no idea how similar the new thing is to the old thing, no matter how many objective descriptions we come up with for the new thing. That is: the problem is that we have no idea whether the robot is conscious or not, no matter what objective facts we know about the robot. This does not mean that we can arbitrarily decide to say “let’s extend the word conscious to the robot.” It’s like this situation: there is an object behind a screen, and you are never allowed to look behind the screen. Should we call the object a “chair” or not?
The fact that language is both vague and extendible does not suddenly entitle you to say that we can say that an object behind a screen is either “a chair” or “not a chair” without first looking behind the screen. And in the case under discussion, no one yet knows a way to look behind the screen, and possibly there is no such way.
Roughly speaking, because if one performed factor analysis on their life experiences, they would have factors more or less corresponding to the words they use.
What makes you believe that? Ideally we’d want something like this to be true, but assuming that it is true seems a bit naive. There are also some serious technical problems with the idea (how do you quantify experiences? what do you do when different people have different experiences but have to use the same words? etc).
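For what it’s worth, the “factor analysis” claim can at least be made concrete enough to poke at. Below is a minimal toy sketch, with all features and data invented for illustration: it encodes furniture “experiences” as binary feature vectors and extracts two factors. Whether the factors actually line up with words like “chair” and “table” is exactly the empirical bet under dispute, not something this sketch settles.

```python
# Toy version of the "factor analysis on life experiences" claim:
# encode encounters with objects as binary feature vectors and see
# whether the extracted factors line up with word-like categories.
# Features and data are invented for illustration only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

# columns: [has_back, made_for_sitting, flat_work_surface, knee_height]
experiences = np.array([
    [1, 1, 0, 0],   # armchair-like encounters
    [1, 1, 0, 0],
    [0, 1, 0, 1],   # stool-like encounters
    [0, 0, 1, 1],   # coffee-table-like encounters
    [0, 0, 1, 0],   # desk-like encounters
], dtype=float)

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(experiences)

# If the claim holds, the loadings should roughly separate "sitting"
# features from "surface" features, mirroring the chair/table
# vocabulary. With real data this is an empirical bet, not a given.
print(np.round(fa.components_, 2))
```

Even granting the sketch, the objections above stand: real experiences are not handed to us as feature vectors, and different people would start from different matrices.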
No. As I said before, “for one thing.” There are still reasons why a coffee table would not suddenly become a stool, even if in fact you use it for that.
I don’t think you know what “arbitrary” means. It does not mean “completely random”. In a deterministic world everything has some explanation. It’s just that sometimes the explanations are long, bizarre and kind of stupid, if you look at them.
That is: the problem is that we have no idea whether the robot is conscious or not, no matter what objective facts we know about the robot.
Likewise, we cannot say whether some new object is a chair only by knowing objective facts about the object. We also need to know what the word “chair” refers to. And in the case that our definition of “chair” doesn’t help us, we’re going to have to extend it in some arbitrary way.
It’s like this situation: there is an object behind a screen, and you are never allowed to look behind the screen.
Replace “you are never allowed” with “it is impossible”. Then I will suggest that the object does not exist.
What makes you believe that? Ideally we’d want something like this to be true, but assuming that it is true seems a bit naive.
If nothing like that were true, words would actually be arbitrary, as you suppose. Then if we looked for reasonable boundaries, they would fall in random places. For example, it might turn out that the word “chair” refers in some cases to physical objects that people sit on, and in other cases to tiny underground animals. Words don’t work like this, which gives me good reason to think that something like this is true. It is not “assuming” anything.
I don’t think you know what “arbitrary” means. It does not mean “completely random”. In a deterministic world everything has some explanation. It’s just that sometimes the explanations are long, bizarre and kind of stupid, if you look at them.
My point is that the meaning of words is not long, bizarre, and stupid. The meanings of words are actually quite reasonable. Also, you are mistaken about the meaning of “arbitrary”. It does in fact mean not having a reason; you are just saying that sometimes “this doesn’t have a reason” is shorthand for saying that it doesn’t have a good reason. But even understood like this, the meaning of words is not arbitrary.
Likewise, we cannot say whether some new object is a chair only by knowing objective facts about the object. We also need to know what the word “chair” refers to. And in the case that our definition of “chair” doesn’t help us, we’re going to have to extend it in some arbitrary way.
We will have to extend it, but it will not have to be in an arbitrary way. There may be a good reason why we extend it the way that we do, not a stupid reason.
Replace “you are never allowed” with “it is impossible”. Then I will suggest that the object does not exist.
Then your suggestion is false; even if there is a screen that you cannot go behind, it would not mean that there is nothing behind it. You cannot look beyond the event horizon of the visible universe, but there are things beyond it.
For example, it might turn out that the word “chair” refers in some cases to physical objects that people sit on, and in other cases to tiny underground animals.
“Chair” happens not to refer to animals, although it can mean “chairman”. “Stool” can refer to several things, including poop. Finally, “shrew” refers both to a tiny animal and to an annoying woman. Words do work like this. Surely there are some bizarre historical reasons for how each word got its two meanings. But you have to admit that these reasons have very little to do with the properties of small animals and annoying women.
I’m not saying that there are no forces working to simplify language. But there is a very large gap between that and “factor analysis on life experiences”.
Also, you are mistaken about the meaning of “arbitrary”. It does in fact mean not having a reason
“Arbitrary” literally means “decided by arbiter”, as opposed to “decided by the rule of law”, i.e. there was a question that “the law” couldn’t answer directly, and an arbiter had to decide “arbitrarily”. It doesn’t mean that the arbiter flipped a coin.
Going back to chairs and tables, you can always find some excuse why a coffee table I sit on isn’t a chair, and I can always find some excuse why it should be (I mean, how the hell is http://www.ikea.com/us/en/catalog/products/20299829/ not a stool?). We could live perfectly well in a world where my excuses are right and yours are wrong. The reasons we don’t are long, bizarre, and stupid. That falls under “arbitrary” perfectly well.
You cannot look beyond the event horizon of the visible universe, but there are things beyond it.
I’d rather not bring modern physics into this, but I have to point out that suggesting those things don’t exist will cause no problems to anyone. At worst, this suggestion would lose to another idea through Occam’s razor.
Also, cases where a “single” word has two meanings are cases of two words. They are not examples of words that have arbitrary meanings.
You first said that if there was nothing like factor analysis, some words would have two unrelated meanings, then I point out that lots of words have two unrelated meanings, and now you say that one word with two meanings is two words (by definition, I assume?), contradicting your own claim we started from. Do you see how bad this looks?
Sure, there are words that have different meanings and different origins, that, thanks to some arbitrary modifications, end up sounding the same. There is an argument to discount those. But lots of words do have the same origin and the new meaning is a direct modification of the old one.
You first said that if there was nothing like factor analysis, some words would have two unrelated meanings, then I point out that lots of words have two unrelated meanings, and now you say that one word with two meanings is two words (by definition, I assume?), contradicting your own claim we started from. Do you see how bad this looks?
You misunderstood. The point is that if there was not some common meaning, the applications of a word would be random. This does not happen in any of the cases we have discussed, and two entirely unrelated usages are cases of two words.
But lots of words do have the same origin and the new meaning is a direct modification of the old one.
This is true, and there is nothing stupid or arbitrary about this way of getting a secondary meaning.
The point is that if there was not some common meaning, the applications of a word would be random.
I have never denied that the ways different people use the same words are similar. This however does nothing to support your “factor analysis” theory, nor does it have anything to do with words that have multiple unrelated meanings.
and two entirely unrelated usages are cases of two words.
This is a claim with no justification. The whole “one word is two words” formulation is inherently bizarre. Of course, saying that “committee chair” and “armchair” are both “chairs” doesn’t mean the two things are actually similar. Likewise, putting both “armchair” and “stool” under one label does not reduce their differences, and putting “stool” and “coffee table” in different categories does not reduce their similarities.
I have never denied that the ways different people use the same words are similar. This however does nothing to support your “factor analysis” theory
Sure it does. People use words in similar ways because their lives have similar factors.
nor does it have anything to do with words that have multiple unrelated meanings.
Technically, there are no such words. As I said, these are multiple words that use similar spellings.
This is a claim with no justification. The whole “one word is two words” formulation is inherently bizarre. Of course, saying that “committee chair” and “armchair” are both “chairs” doesn’t mean the two things are actually similar.
Consider these two statements:
1) A committee chair and an armchair are both “chairs.”
2) A committee chair and an armchair are both chairs.
The first statement is true, and simply says that both a committee chair and an armchair can be named with the sound “chair”.
The second statement, of course, is utterly false, because there is no meaning of “chairs” that makes it true. And that is because there is not a word that has both of those meanings; there are two words which are spelled and spoken alike.
Likewise, putting both “armchair” and “stool” under one label does not reduce their differences, and putting “stool” and “coffee table” in different categories does not reduce their similarities.
In fact, using different names adds a difference: the fact that the things are named differently. Still, overall you are more right than wrong about this, even though you have the tendency to ignore the real reasons for names in favor of appearances, as when you say that “pain” means “what makes someone say ouch.” Obviously, if someone says “ouch” because he wishes to deceive you that he is feeling pain, pain will not be the wish to deceive someone that he is feeling pain. Pain is a subjective feeling; and in a similar way, a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
People use words in similar ways because their lives have similar factors.
No, people use words in similar ways because they want to communicate with each other, and because word meanings are usually inherited rather than constructed. It’s not false that the factors are usually similar, but not all true statements follow from one another. Some people with very different factors may use words similarly, and others with similar factors will eventually use them differently.
The second statement, of course, is utterly false, because there is no meaning of “chairs” that makes it true.
Again, nobody thinks that the two things are similar or share properties, but that’s exactly what you asked for. If you want a milder example, I can offer “computer”, which can refer to an electronic device or to a human who does arithmetic (old usage). The two meanings are still very different, but they do share a property (they both compute), and it’s easy to see that a sentence “I had computers calculate this solution” is natural and could refer to either (or both). At the same time, using two different words for them (e.g., let’s call humans who compute “analysts”) would also be natural. The reasons we don’t use two words have very little to do with the properties of humans or electronic devices.
“Arbitrary” literally means “decided by arbiter”, as opposed to “decided by the rule of law”
Etymology is not meaning.
I’d rather not bring modern physics into this,
That’s not up to you; you made the argument that if there is a screen that you cannot look behind, there is nothing behind it. That argument is false.
suggesting those things don’t exist will cause no problems to anyone.
The suggestion will be false, whether or not it causes problems for anyone.
At worst, this suggestion would lose to another idea through Occam’s razor.
Exactly like suggesting that other people’s conscious experiences do not exist, since this would mean that the reason for your own talk about your own experiences differs from the reason for other people’s talk about their experiences. There is no reason to believe in such a difference.
on the contrary, it would be stupid to attempt to make meaning exactly correspond with etymology.
That’s a bold claim. God forbid words have consistent meanings over time!
Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent? You said it means “without any reason”. The obvious problem is that almost everything has some reasons, including whims of small children, delusions of the insane and results of fair coin flips. I suggest that if the word meant “without good reason”, the word would be more useful.
Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent?
We probably do not disagree about what it means, but we disagree about what we are saying it means. I do say it means without any reason, although we can take it more specifically as “without any reason of the kind we are currently thinking about.”
I suggest that if the word meant “without good reason”, the word would be more useful.
If we take it as I suggested, it would be possible in some cases to mean “without good reason,” namely without a reason of the kind we are currently thinking about, namely a good one.
In general, this topic came up because you were asserting that questions like “are tables also chairs” are stupid and only have arbitrary answers. If arbitrary means that there cannot be a good reason, then you are mistaken, because we have good reason for saying that tables are not chairs, and the stupidity would only be in saying that they are chairs, not in saying that they are not.
In regard to the issue of consciousness, the question is indeed a useless distraction. It is true that words like “pain” or even “consciousness” itself are vague, as are all words, and we exercise judgement when we extend them to new cases. That does not mean there is never a good reason to extend them. But more importantly, when we consider whether to extend “chair” to a new case, we can at least see what the thing looks like. In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain. And so the case is quite different from the case of the chair: as I said before, it is like asking if an unknown object behind a screen is a chair or not. Unknown, but definitely not arbitrary.
“without any reason of the kind we are currently thinking about.”
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
Here’s an example of weird reasons. How can shape not determine the difference? If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects? IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn’t? Now, one could say that a stool can be both a chair and a table, and I think that’s what IKEA does, but then you’ve already claimed this to be impossible.
In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem. There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
“Properties of the objects being classified” are much more extensive than you realize. For example, it is a property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects?
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case, since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. Being made as a table, it is physically unsuitable to be used as a seat. And consequently, if it did collapse, it would be quite correct to say, “This collapsed because you were using it as a stool even though it is not one.”
That said, I already said that the intention of the makers is not 100% determining.
That’s assuming that “feeling” is a meaningful category.
That’s not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a “chair.” In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it, is not an assumption. If it turned out that I did not have a brain, I would not say, “I must have been wrong about suffering pain.” I would say “My pain does not depend on a brain.” I pointed out your error in this matter several times earlier—the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states—we are talking about the similarity of two feelings. So it does not matter if the robot’s brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar even if they depended on radically different physical objects like the moon and Mt. Everest.
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test if a function f(X,Y) depends not only on X, but also on Y, we have to find a point where f(X1, Y1) is not equal to f(X1, Y2). Are you saying that sometimes intention matters, just not for chairs? If not, I can only assume that intention doesn’t determine anything and only shape is important.
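The dependence test in the previous paragraph is easy to state as code. A minimal sketch, with all object names, attributes, and data invented for illustration: it searches for two objects that agree on shape (X) but differ in intention (Y) and still receive different labels; only such a pair would show that f depends on Y and not just X.

```python
# Toy version of the dependence test above: a classification
# f(shape, intention) depends on intention only if there is some
# shape X1 with f(X1, Y1) != f(X1, Y2). All data here is hypothetical.

def depends_on_intention(objects):
    """objects: list of (shape, intention, label) triples."""
    for s1, i1, l1 in objects:
        for s2, i2, l2 in objects:
            if s1 == s2 and i1 != i2 and l1 != l2:
                # same shape, different intention, different label:
                # intention alone flipped the classification
                return True, (s1, i1, i2)
    return False, None

# Hypothetical observations: identical shapes, different maker intentions.
observations = [
    ("four_legs_flat_top_45cm", "made_for_sitting", "stool"),
    ("four_legs_flat_top_45cm", "made_for_objects", "table"),
]

print(depends_on_intention(observations))
```

If every same-shape pair gets the same label, the data gives no evidence that intention does any classifying work of its own, which is exactly the point being pressed here.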
I already notice the similarity between all the things that are called feelings
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
If it turned out that I did not have a brain
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing. Unfortunately for you, you do have a brain.
It matters whether it feels similar
Are you going to feel the robot’s feeling and compare?
Are you saying that sometimes intention matters, just not for chairs?
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
Unquestionably, it can be meaningfully extended to robots. You simply mean the same thing that you mean in the regular case. The only question is whether there is any feeling there, not if “feeling” has a meaning, since we already admitted that it does have a meaning.
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing.
The possibility of falsification is a good thing for a physical theory. It is a very bad thing for a theory of the meaning of a word. As you already admitted, the fact that the pieces of furniture we normally sit on are called chairs is not subject to falsification, because that is just what is meant by “chair.” But a physical theory of a chair, e.g. “chairs are made of atoms,” is falsifiable, since someone could examine a chair and discover that it was not made of atoms. He would not then say, “We have discovered that ‘chair’ meant something different from what we thought.” He would say, “We knew what ‘chair’ meant, and that is unchanged, but we have learned something new about the physical constitution of chairs.”
In the same way, I am referring to certain feelings when I talk about “pain.” The fact that the word pain refers to those feelings cannot be falsified, because it is just what that word means. But whether pain depends on a brain activity is a falsifiable physical theory; it has nothing to do with the meaning of the word “pain.”
Unfortunately for you, you do have a brain.
Assuming that I do, that is fortunate, not unfortunate. But as I was saying, neither you or I know that I do, since neither of us has seen the inside of my head.
Are you going to feel the robot’s feeling and compare?
No. The question is not whether the robot has a feeling which feels similar to me as my feeling of pain; the question is whether the robot has a feeling that feels to the robot the same way that my feeling feels to me. And since this has two subjects in it, there is no subject that can feel them both and compare them. And this is just how it is, whether you like it or not, and this is what “pain” refers to, whether you like it or not.
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Can you actually support your claim that intention matters? To clarify, I’m suggesting that intention merely correlates with shape, but has no predictive power on its own.
It is a very bad thing for a theory of the meaning of a word.
It’s somewhat complicated. “Experiences are brain states” is to an extent a theory. “Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition. Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
And since this has two subjects in it, there is no subject that can feel them both and compare them.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
Can you actually support your claim that intention matters?
Artificial things are made for a purpose, and being made for a purpose is part of why they are called what they are called. This is an obvious fact about how these words are used and does not need additional support.
“Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition.
If you mean pain is the conscious state that follows in that situation, yes; if you mean the third-person state that follows, no.
Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason. Definitions shouldn’t be falsifiable, and are not physical theories.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
No. The stars outside the event horizon of the visible universe are similar to the stars that we can see, but there is no way to compare them.
One can however ask the question, “Are the stars similar?” and one can answer yes or no. In the same way we can ask if the robot feels like we do, and we can say yes or no. But there is no access to the answer here, just as there is no access in the case of the stars. That has nothing to do with the fact that either they are similar or they are not, both in the case of the robot and in the case of the stars.
This is an obvious fact about how these words are used and does not need additional support.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I’m asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you’re supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole “does not need additional support” thing inspires no confidence.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
The stars outside the event horizon
I hate bringing up modern physics; it has limited relevance here. Maybe they’ll figure out faster-than-light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I’d love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
I’m asking whether the relationship between intention and classification is causal, or just a correlation.
It is causal, but not infallible.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
That’s your problem. Everyone else will still call it “the sun,” and when you say “the sun didn’t rise this morning,” your statement will still be false.
we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too.
Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.
What words? The word “causal”? I’m asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
As I said, this is how these words work, that is words like “chair” and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.
If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
And those things are true even given the same form
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
I am not talking about evidence, but about meaning; when we say, “this is a chair,” part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting.
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
I don’t know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing; just as, if you made something in the shape of a hammer and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly, by more closely matching the meaning of “chair.”
I don’t know where you think that was established.
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
In particular, consider my answer to your next question, because it is basically the same thing again.
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair?
There is no guarantee of this, because the word “chair” is vague. But it is true that there would be more reason to call the second rock a chair—that is, the meaning of “chair” would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation.
Would simply sitting on the first rock convert it into a chair?
In general, no, because the word “chair” does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on.
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
If you are not ignorant of how the word is used, you do have to involve the intention of the maker.
By acting like you actually want to understand what is being said
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer. Here I’m only asking you for a black or white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they’re conceding.
In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
What exactly do you mean by “vague”? The word “chair” refers to the category of chairs. Is the category itself “vague”?
I have been telling you from the beginning that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair. Apparently one that you have more knowledge of than I do. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff. By the way, this thread started here, where you explicitly started talking about words and meanings, so that’s what we’re talking about.
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is “in some cases yes, in some cases no, depending on the particular circumstances.”
What exactly do you mean by “vague”?
First of all, all words are vague, so there is no such thing as “what exactly do you mean by.” No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Is the category itself “vague”?
Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones.
I have been telling you from the beginning that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair.
It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff.
First of all, you are the one who needs the “language 101” stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t. You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.” Those are two entirely separate things. Likewise, you have been confusing “this statement is vague” with “this statement is not testable.” These are two entirely separate things.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
Second, consider a line of stars outside the visible universe, except that some of the stars, on the right, are identical white dwarfs, and the ones to the left of them are identical red giants. Where exactly do the stars stop being white dwarfs and begin being red giants? This time, we cannot answer the question because there is no test to give us the answer. But vagueness is not an issue, because there is a sharp division between the two parts. We simply cannot find it by testing.
Third, consider a line of stars outside the visible universe, constructed as in the first case. This time, there are two problems: we cannot test where the boundary is, and the boundary is vague. These are two entirely different issues.
Fourth, consider a line of things where the one on the left is a statue, the one on the right is a human being, and somewhere in the middle there are robotic things. Each thing differs by a single atom from the thing on its left, and from the thing on its right.
Now we have the question: “The statue is not conscious. The human being is conscious. Is the central robot conscious?” There are two separate issues here. One is that we cannot test for consciousness. The second is that the word “conscious” is vague. These are two entirely separate issues, just as they are in the above cases of the stars.
Let us prove this. Suppose you are the human being on the right. We begin to modify you, one atom at a time, moving you to the left. Now the issue is testable: you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious. Note that this is quite different from anyone else asking the thing if it is conscious, because the question “does this thing say it is conscious” is not the same as “is this thing conscious.” But being conscious is having a first person point of view, so if you can ask yourself anything, you are conscious. Unfortunately, long before you cease to be conscious, you will cease to be able to ask yourself any questions. So you will still not be able to find a definite boundary between conscious and not conscious. Nonetheless, this proves that testability is entirely separate from vagueness.
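To make the two-issues point concrete in code, here is a minimal sketch of the star cases. Everything in it is invented for illustration: the masses, the HIDDEN_CUT constant standing in for a sharp but unknown boundary, and the dwarf_likeness function standing in for a vague predicate.

```python
# A "line of stars": masses interpolating from giant-like to dwarf-like.
masses = [1.0 - i / 1000 for i in range(1001)]

# Case 2: a sharp but hidden boundary. If testing is possible at all,
# binary search recovers it exactly; the obstacle there is untestability,
# not vagueness.
HIDDEN_CUT = 0.637  # unknown to the observer, but perfectly sharp

def is_dwarf_sharp(mass):
    return mass < HIDDEN_CUT

lo, hi = 0, len(masses) - 1
while lo < hi:                        # find the first star that tests as a dwarf
    mid = (lo + hi) // 2
    if is_dwarf_sharp(masses[mid]):
        hi = mid
    else:
        lo = mid + 1
print("sharp boundary found at index", lo)

# Case 1: vagueness. Membership comes in degrees, so even with perfect
# measurements of every star, any cutoff is a stipulation, not a discovery.
def dwarf_likeness(mass):
    return 1.0 - mass                 # fully measurable, yet graded

borderline = [m for m in masses if 0.4 < dwarf_likeness(m) < 0.6]
print(len(borderline), "borderline stars; where to cut is a choice")
```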
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to
Well, that explains a lot. It’s not exactly ancient history, and everything is properly quoted, so you really should know what I’m talking about. Yes, it’s about the identical table-chairs question from the IKEA discussion, the one that I linked to just a few posts above.
Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Why are there no determinate boundaries though? I’m saying that boundaries are unclear only if you haven’t yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
All categories are vague, because they are generated by a process similar to factor analysis
There is nothing vague about the results of factor analysis.
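For what it’s worth, the disputed step can be written out. The sketch below (invented data, and it assumes scikit-learn is installed) shows the sense in which both sides have a point: factor analysis outputs precise continuous scores, but categories only appear once someone stipulates a cutoff over those scores.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# 200 made-up objects with 5 made-up features ("has legs", "flat top", ...):
# two overlapping clusters standing in for chair-ish and table-ish things.
X = np.vstack([
    rng.normal(loc=[1.0, 0.0, 1.0, 0.0, 1.0], scale=0.7, size=(100, 5)),
    rng.normal(loc=[0.0, 1.0, 0.0, 1.0, 0.0], scale=0.7, size=(100, 5)),
])

scores = FactorAnalysis(n_components=1, random_state=0).fit_transform(X)
print(scores[:5].ravel())   # precise, continuous numbers -- no categories yet

cutoff = 0.0                # the stipulated part: where the category ends
print("objects on one side of the cutoff:", int((scores.ravel() > cutoff).sum()))
```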
It is false that the meanings are arbitrary, for the reasons I have said.
On this topic, we last seemed to agree that “arbitrary” classification means “without reasons related to the properties of the objects classified”. I don’t recall you ever giving any such reasons.
It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
For example, you have said ‘“are tables also chairs” has a definite answer’. Note the word “definite”. You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way, “natural” is the opposite of “arbitrary”.
All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things.
Yeah, I recall saying something like that myself. And the rest of your claims don’t go well with this one.
you are the one who needs the “language 101” stuff
Well, you decided that I need it, then made some wild and unsupported claims.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.”
Yes, the two statements are largely equivalent. Oddly, I don’t recall you mentioning testability or measurability anywhere in this thread before (I think there was something in another thread though).
Likewise, you have been confusing “this statement is vague” with “this statement is not testable.”
I don’t think I’ve done that. It’s unfortunate that after this you spent so much time trying to prove something I don’t really disagree with. Why did you think that I’m confusing these things? Please quote.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you. You say “cannot possibly” and then give no reasons why.
I however have no problems with the vagueness here, because the two categories are only shorthands for some very specific properties of the stars (like mass). This is not true for “consciousness”.
Nonetheless, this proves that testability is entirely separate from vagueness.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you.
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.)
It is intrinsically vague: “Red giant” does not and cannot have precise boundaries, as is true of all words. The same is true of “White dwarf.” If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words.
The rest does not respond to the comparison about consciousness, and as I said we won’t be discussing the comments on language.
“Red giant” does not and cannot have precise boundaries
Again, you make a claim and then offer no arguments to support it. “Red giant” is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t.
You started the language discussion, but I have to explain why we’re continuing it? I continue, because I suspect that the reasoning errors you’re making about chairs are similar to the errors you’re making about consciousness, and chairs are easier to talk about. But it’s only a suspicion. Also, I continue, because you’ve made some ridiculous claims and I’m not going to ignore them.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences
Assuming “experience” is a meaningful category.
with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
Yes, “feeling” and “experience”, are pretty much the same thing, I didn’t mean to imply otherwise in the text you quoted. Instead, the first sentence refers to your definition, and the second offers an alternative one.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
There is a classification problem with tables and chairs. Generally, I know what chairs and tables are supposed to be like, but there are objects similar both to chairs and to tables, and there isn’t any obvious way to choose which of those similarities are more important. At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
Yes, “feeling” and “experience”, are pretty much the same thing,
So they are either both meaningful, or both meaningless. But you have used “experience” as though it is meaningful,
and you have implied that “feeling” is meaningless.
That was a predictable problem. Physical identity theory requires statements of the form “<mental state> is equivalent to <brain state>”. If you reject all vocabulary relating to mental states, you cannot make that kind of statement, and so cannot express identity theory.
At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
Whereas, from my point of view, 1st person experience was there all along.
But you have used “experience” as though it is meaningful, and you have implied that “feeling” is meaningless.
No, I used “experience” as a label. Let me rewrite that part:
That’s assuming that “experience”, as you use that word, is a meaningful category. If you didn’t start from that assumption, and instead defined experiences as brain states, you could …
Is that better? I understand that having two definitions and two similar but not identical concepts in one sentence is confusing. But still I expect you to figure it out. Was “identified” the problem?
Physical identity theory requires statements of the form “<mental state> is equivalent to <brain state>”. If you reject all vocabulary relating to mental states <...>
What vocabulary relating to what mental states do I reject? Give examples.
Whereas, from my point of view, 1st person experience was there all along.
Wasn’t “chairness” there too? More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
That’s assuming that “experience”, as you use that word, is a meaningful category.
Does “meaningful” mean “meaningful” here, or is it being used as a misleading proxy for something like “immeasurable” or “unnecessary” or “tadasdatys doesn’t like it”?
What vocabulary relating to what mental states do I reject?
You keep saying various words are meaningless. One would not want to use meaningless words, generally. OTOH, you have revealed elsewhere that you don’t use “meaningless” to mean “meaningless”. So who knows?
More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
Consciousness is in the dictionary, chairness isn’t.
Consciousness is a concept used by science, chairness isn’t.
Consciousness is supported by empirical evidence, chairness isn’t.
It’s not that words are meaningless, it’s that you sometimes apply them in stupid ways. “Bitter” is a fine word, until you start discussing the “bitterness of purple”.
Consciousness is in the dictionary, chairness isn’t.
Are dictionary writers the ultimate arbiters of what is real? “Unicorn” is also in the dictionary, by the way.
Consciousness is a concept used by science, chairness isn’t.
The physicalist, medical definition of consciousness is used by science. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that’s what projection looks like.
Consciousness is supported by empirical evidence, chairness isn’t.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can’t even come up with arguments why a silly concept I made up is flawed, maybe you shouldn’t be so certain in the meaningfulness of other concepts.
That the brain is not quiescent when experiencing pain is an objective fact. But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Please check out multiple realisability.
Because of that, no one can genuinely tell whether an advanced robot has genuine qualia. That includes you, although you are inclined to think that your subjective intuitions are objective knowledge.
But the question of “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Sure, but what does that have to do with anything? Does “objective” mean “well understood” to you?
multiple realisability
There are multiple representations of pain the same way that there are multiple representations of chair.
It is ridiculous how much of this debate is about the basic problem of classification, rather than anything to do with brains. Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counter-argument. Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
your subjective intuitions
Calling my reasoning, even if not fully formal, “subjective intuitions” seems rude. I’m not sure if there is some point you’re trying to express with that.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
Not sure where you see me talking about intelligence. But intelligence is far better defined and measurable than consciousness. Multiple realizability has nothing to do with that.
But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Sure, but what does that have to do with anything?
We do, on the other hand, know subjectively what pain feels like.
Does “objective” mean “well understood” to you?
That’s not the point. The point is that if we have words referring to subjective sensations, like “purple” and “bitter”, we can distinguish them subjectively. But if we discard our subjective insight into them, as you are proposing, and replace them with vague objective descriptions—vague, because no one knows precise descriptions of the full gamut of atomic configurations which implement pain—then you take a step backwards. You can’t distinguish a brain scan of someone seeing purple from a brain scan of someone tasting bitter. Basing semantics on objective facts, or “reality” as you call it, only works if you know which fact is which. You are promoting something which sounds good, but doesn’t work—as a research program. Of course it works just fine at getting applause from an audience of dualism-haters.
multiple realisability
There are multiple representations
Are you talking about realisations or representations?
Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counter-argument.
No one has made that argument. The point is not that it is not ultimately true that subjective states are brain states, it is that rejecting the subjective entirely, at this stage, is not useful. Quite the reverse. Consciousness is the only thing we know from the inside—why throw that away?
Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.
But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.
We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.
We do, on the other hand, know subjectively what pain feels like.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
You can’t distinguish a brain scan of someone seeing purple from a brain scan of someone tasting bitter.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problems with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand, is that you believe these two methods to be radically different, when they are not. It’s as if you assume something is real, just because it comes out of people’s mouths.
realisations or representations
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
No one has made that argument.
Parts of my text are referring to the arguments I saw on Wikipedia under “multiple realizability”. But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.
and your reasons for saying that are entirely non-empirical
I’m still waiting for your empirical reasons why “purple is not bitter”, or better yet, “purple is not a chair”, if you feel the concept of bitterness is too subjective.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
But not much of an argument for using semantics grounded in (physical) reality. Doing so does not buy you maximum precision in absolute terms, and, what is worse, the alternative, of grounding terms for types of experience in 1st person experience, can give you more precision.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires.
You may believe that, but do you know it?
But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problems with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand, is that you believe these two methods to be radically different, when they are not.
The difference is that I accept the possibility that first person evidence could falsify 3rd person theory.
It’s as if you assume something is real, just because it comes out of people’s mouths.
I’m not taking 1st person to mean 3rd person reports of (someone else’s) 1st person experience.
Doing so does not buy you maximum precision in absolute terms
What sort of precision are you talking about? More generally, you’ve repeatedly said that the concept of consciousness is very useful. I don’t think I’ve seen that usefulness. I suspect that elaborating here is your best bet to convince me of anything. Although even if you did convince me of the usefulness of the term, that wouldn’t help the “robot pain” problem much.
You may believe that, but do you know it?
That’s a slightly weird question. Is it somehow different from “why do you believe that”? I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary. It’s very likely that “zapping” isn’t quite sufficient, depending on how generously you interpret that word. But the idea that something cannot be learned through physical experiment demands a lot of serious evidence, to say the least.
I’m not taking 1st person to mean 3rd person reports of 1st person experience.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain, than if written down on paper. Obviously, paper is slower and less accurate. But you seem to be implying a more fundamental difference between those two methods of data storage. Why is that?
A realisation of type X has type X, a representation of type X has type “representation”.
I like type theory. Let X be what I’m sitting on. Type of X is “chair”, type of “chair” is “category”, a painting of X is a representation of X, it is not a representation of “chair”. Representations of “chair”, in the same sense that the painting represents X, might not exist. Somehow I’m quite comfortable saying that an object of type Y is what represents Y. “Instantiates” might be the best word (curiously though, google uses “represent” to define it). Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
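Since the thread is already talking in types, here is a toy rendering of the realisation/representation distinction in code. The class names and fields are invented for the example, not drawn from anyone’s considered ontology.

```python
class Chair:
    """A realisation of 'chair': an instance IS a chair; you can sit on it."""
    def sit(self) -> str:
        return "sitting"

class Painting:
    """A representation: whatever it depicts, its own type stays Painting."""
    def __init__(self, subject: object):
        self.subject = subject        # depicts a chair without being one

x = Chair()                           # realisation: type of x is Chair
portrait = Painting(subject=x)        # a representation OF x
print(isinstance(x, Chair))           # True
print(isinstance(portrait, Chair))    # False: you cannot sit on a painting
```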
More generally, you’ve repeatedly said that the concept of consciousness is very useful
I have said that actual experience is useful to pin down the meanings of words referring to experience.
You may believe that, but do you know it?
That’s a slightly weird question
Not at all. That there is a difference between belief and knowledge is very standard.
I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary.
There’s an extensive literature of arguments to the contrary.
But the idea that something cannot be learned through physical experiment demands a lot of serious evidence, to say the least.
It is the idea that you can learn about the inward or 1st person by purely outward or 3rd person means that is contentious.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain, than if written down on paper.
No, I am saying that my first person is me, and your first person is you. So my first person information is my experience, not someone else’s report of their experience.
Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
Well, you said that the two R words mean the same thing, when by established usage they don’t. That looks like a source of confusion to me.
Not at all. That there is a difference between belief and knowledge is very standard.
I assure you that none of the beliefs I state here were generated by flipping a coin. They are all to some extent justified. That’s why the question is weird—did you expect me to answer “no”?
There’s an extensive literature of arguments to the contrary
There is extensive literature of arguments in favor of god or homeopathy. Doesn’t make those things real. Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments (likewise for god and homeopathy). However you seem to have read quite a bit more, and you haven’t raised my confidence in the value of that literature so far.
my first person information is my experience, not someone else’s report of their experience.
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences. I’m not saying that this hypothesis is true, I’m only saying that you don’t know it to be false. And if it did happen to be true, then your internal reasoning about your experiences would not be much different from my reasoning about your experiences written on paper (aside from the low precision of our language). Curiously, a physical experiment is more likely to test this hypothesis, than your internal reasoning.
That looks like a source of confusion to me.
It is a potential source of confusion, but that doesn’t mean it’s causing any right now. Maybe if we talked about representations such as paintings, it would cause some. Regardless, I’ll try to use the words you prefer. Debating their differences and similarities is very orthogonal to our main topic.
There’s an extensive literature of arguments to the contrary
There is extensive literature of arguments in favor of god or homeopathy.
You said there was a “lack” of arguments to the contrary, and I pointed out that there wasn’t.
Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments
Then why didn’t you say lack of good arguments? And why didn’t you say what is wrong with them?
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences.
“Magic” is not a helpful phrase.
You need to distinguish ontology and epistemology. Experiences and memories and so on have their physical correlates—ontology—but that does not mean you can comprehend them—epistemology. We might be able to find ways of translating between correlates and experience, but only if we don’t ignore experience as an epistemology. But, again, taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology because epistemology and ontology are different.
internal reasoning about your experiences
Experience is experience, not reasoning about experience.
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with? Please explain your thought process, I really want to know. You see, productive debate requires some amount of generosity. I may not be polite, but I don’t think you’re illiterate or insane, and I don’t think I nitpick about things this obvious.
Maybe this is a symptom that you’re tired of the whole thread? You know you can stop whenever you want, right?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves. That’s where my “baseless hypothesis” comes from.
To clarify, the hypothesis isn’t a direct response to something you said, it’s a new angle I want to look at, to help me understand what you’re talking about.
“Magic” is not a helpful phrase.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation. I realize that this might not be obvious. Though I feel that this is a natural use of the word.
taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology because epistemology and ontology are different.
That’s fine. There are some things that I’d want to pick on, although I’m not sure which of them are significant. But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with?
Did you mean “consciousness”? To put it bluntly, if you haven’t heard of MR, there is probably a lot you don’t know about the subject.
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation.
You yourself are blocking off the possibility of understanding consciousness, subjectivity and experience by refusing to allow them as prima facie, pre-theoretic phenomena.
You say that we must start with reality, but we cannot: an accurate map of reality is the end point of a process of explanation. We start with prima facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: of being public, objective, measurable and so on. Starting there means discarding any other kind of prima facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
I get that you hate, hate, hate dualism or anything else that threatens physical monism, but you can’t prove physical monism by begging the question against it. You are doing it no favours.
But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Nobody else has a problem with robot pain as a meaningful possibility. You do because you have removed the first person from your definitions.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterdays.
If having experiences is an important part of consciousness, then I’d expect you to reason about them, what induces them, their components, their similarities and differences. This “consciousness in general” phrasing is extremely weird.
Starting there means discarding any other kind of prima facie evidence.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
You say that we must start with reality, but we cannot: an accurate map of reality is the end point of a process of explanation. We start with prima facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: of being public, objective, measurable and so on. Starting there means discarding any other kind of prima facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
The question “where did you start” has some bad assumptions. Of course at first we all have to start from the same naive point. If we did arbitrarily start from different unrelated assumptions, expecting to agree on anything would be weird.
So, what happened is that I started from naive assumptions, and arrived at physicalism. Then when I ask myself a new question, I start from where I last stopped—discarding all of my progress would be weird.
You may think that dropping an initial assumption is inherently wrong, but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary. You might be able to convince me that I do need to keep some similar assumption for technical reasons, but that wouldn’t solve the “robot pain” problem.
The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name. But when I drop consciousness, my life gets easier. How does that work?
I start from where I last stopped—discarding all of my progress would be weird.
There is a difference between a working hypothesis and an unfalsifiable dogma. It seems to you that there is nothing to explain about consciousness because you only accept 3rd-person empirical data, because of your ontology.
You may think that dropping an initial assumption is inherently wrong,
Could you explain what assumption you are dropping, and why, without using the word magical?
but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary.
I’d prefer if you settled on one claim.
the “robot pain” problem.
That would be the problem for which there is no evidence except your say-so.
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name.
You can function practically without a concept of gravity, as people before Newton did. What you can get away with theoretically depends on what you are trying to explain. Perhaps there is a gravity sceptic out there somewhere insisting that “falling object” is a meaningless term, and that gravity is magic.
There is a difference between a working hypothesis and an unfalsifiable dogma.
Is my position less falsifiable than yours? No, most statements about consciousness are unfalsifiable. I think that’s a strong hint that it’s a flawed concept.
Could you explain what assumption you are dropping, and why, without using the word magical?
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head. I dropped it because I found that physicalism explains everything better. “Better” doesn’t mean that I have all the answers about anything, it just means that the answers consciousness gives are even worse.
I don’t understand what your problem with “magical” is.
I’d prefer if you settled on one claim.
Well, I suppose an assumption could be unnecessary without being meaningless, so the words aren’t identical, but I do refer to the same thing, when I use them in this context. I also recall explaining how a “meaningless” statement can be considered “false”. The question is, why are you so uncomfortable with paraphrasing? Do you feel that there are some substantial differences? Honestly, I mostly do this to clarify what I mean, not to obscure it.
That would be the problem for which there is no evidence except your say-so.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do. That’s a pretty big problem, regardless of what I say. Now, when I ask if this or that idea solves the “robot pain” problem, I’m not asking if it produces an actual test, I just ask for the smallest hint that maybe the test could exist.
You can function practically without a concept of gravity, as people before Newton did.
That’s ridiculous. The mathematical law of gravity was written down by Newton, but the concept of gravity, in the sense that “things fall down”, is something most animals have. Do you literally think that nobody noticed gravity before Newton?
most statements about consciousness are unfalsifiable
That’s not the problem.
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head.
The assumption is more that consciousness is something that needs explaining.
I also recall explaining how a “meaningless” statement can be considered “false”.
That’s wrong. If you can put a truth-value on a sentence , it is meaningful.
The question is, why are you so uncomfortable with paraphrasing?
I think it is better to express yourself using words that mean what you are trying to express.
Do you feel that there are some substantial differences?
Yes. “Meaningless”, “immeasurable”, “unnecessary” and “non-existent” all mean different things.
Honestly, I mostly do this to clarify what I mean, not to obscure it.
I think it is likely that your entire argument is based on vagueness and semantic confusion.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do.
There is a real problem of not being able to test for a pain sensation directly.
Why did it take you so long to express it that way? Perhaps the problem is this:
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”. Perhaps you have to use vagueness and confusion to make the invalid inference seem valid.
Wow, so you agree with me here? Is it not a problem to you at all, or just not “the” problem?
Yes. “Meaningless”, “immeasurable”, “unnecessary” and “non-existent” all mean different things.
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement “invisible unicorns are purple” is meaningless. The words aren’t all exactly the same, but that doesn’t mean they aren’t all appropriate.
Why did it take you so long to express it that way?
A long long time ago you wrote: You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word “pain”. So I assumed you understood that immeasurability is relevant here. Did you then forget?
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”.
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
So I assumed you understood that immeasurability is relevant here
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”.
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly?
No, still not from that.
Why not? Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist? Does “‘robot pain’ is meaningless” follow from the same premise any better?
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly
Meaningfulness, existence, etc.
Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist?
Huh? It’s perfectly good as a standalone statement, it’s just that it doesn’t have much to do with meaning or measurability.
Does “‘robot pain’ is meaningless” follow from the [we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific] better?
Not really, because you haven’t explained why meaning should depend on measurability.
It is evident that this is a major source of our disagreement. Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
It’s perfectly good as a standalone statement
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods.
Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless).
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods
Where is this going? You can’t stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
A bit too vague. Can I clarify that as “Useless for communication, because it transfers no information”? Even though that’s a bit too strict.
Meaningless statements cannot have truth values assigned to them.
What is stopping me from assigning them truth values? I’m sure you meant, “meaningless statements cannot be proven or disproven”. But “proof” is a problematic concept. You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument. Anyway, isn’t (1.) enough?
Where is this going?
It’s still entirely about meaning, measurability and existence. I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not and what it could follow from. Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”. Or maybe “invisible unicorns cannot be detected” does not follow from “we have no arguments suggesting that maybe ‘invisible unicorns’ could be something detectable”?
What is stopping me from assigning them truth values?
The fact that you can’t understand them.
You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument.
If you can understand a statement as asserting the existence of something, it isn’t meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don’t.
I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not
Because it needs premises along the lines of “what is not measurable is meaningless” and “what is meaningless is false”, but you have not been able to argue for either (except by gerrymandered definitions).
Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”
There’s an important difference between stipulating something to be undetectable … in any way, forever … and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay that is “true” in some way that has nothing to do with reality.
What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don’t.
I’m trying to understand your definitions and how they’re different from mine.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
What happens if a robot pain detector is invented tomorrow?
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
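For concreteness, here is the detector in question written out literally (a deliberately trivial sketch, not a serious proposal): its point is that nothing in its output could ever distinguish it from a “real” one.

```python
def robot_pain_detector(robot: object) -> bool:
    """Reports whether a robot feels pain. Always answers no."""
    return False
```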
In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Well, you used it.
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad.
It’s bad because there’s nothing inside the box. It’s just an a priori argument.
I can also use “ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
What happens if a robot pain detector is invented tomorrow?
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
If you have no arguments, then don’t respond.
The implicit argument is that meaning/communication is not restricted to literal truth.
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow?
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam’s razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Now you imply that they possible could be detected, in which case I withdraw my original claim
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue for your “robot pain”, but I think it’s alright.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
I asked you before to propose a meaningless statement of your own.
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
Very low, finite rather than infinitesimal or zero.
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a restriction. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
a contradiction, like “colourless green”
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
starts with your not knowing something, how to detect robot pain
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I’d have to make up an argument why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws—it is devoid of any traces of meaningfulness.
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.
Can you define “meaningless” for me, as you understand it?
It means “does not have a meaning.”
In particular, how it applies to grammatically correct statements.
In general, it doesn’t apply to grammatically correct sentences, and definitely not to statements. It’s possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted.
How do you know?
If you can ask the question, “How do you know?”, then the thing has a meaning. I will show you an example of something meaningless:
faheuh fr dhwuidfh d dhwudhdww
Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don’t know it, then the thing has a meaning.
It only explains the “-less” suffix. It’s fine as a dictionary definition, but that’s obviously not what I asked for. I need you to explain “meaning” as well.
The problem with that is that if the word “meaning” has several meanings you will have a situation like this:
Suppose the word “meaning” has two meanings, A & B. But then we can ask what the word “meanings” means in the previous sentence: does it mean A, or B? If you answer that it means A, then the word “meaning” might have two meanings in the A sense, but five meanings in the B sense. But then we can ask what the word “meanings” means in the previous statement. And it might turn out that if the word “meanings” is taken in the B sense, the statement (about 2 and 5) is only true if we take the fourth meaning of the B sense, while in the 3rd sense, it has 7 meanings in the A sense, and 2 meanings in the B sense. And so on, ad infinitum.
All of that means that we have to accept a basic sense of meaning which comes before all the others if we want to talk about meaning at all. And in that basic sense, statements like that obviously have a meaning, whereas ones like “shirwho h wehjoeihqw dhfufh sjs” do not.
we have to accept a basic sense … And in that basic sense, statements like that obviously have a meaning
Your comment boils down to “It’s complicated, but I’m obviously right”. It’s not a very convincing argument.
Meaning is complicated. It is a function of at least four variables: the speaker, the listener, the message, and the context. It’s also well-trodden ground over which herds of philosophers regularly stampede and everything with the tag of “obviously” has been smashed into tiny little pieces by now.
Your comment boils down to “It’s complicated, but I’m obviously right”.
You’re right about the “I’m obviously right” part, but not the rest. It boils down to “you have to start somewhere.” You can’t start out with many meanings of “meaning”, otherwise you don’t know what you mean by “meanings” in the sentence “I am starting out with many meanings of meaning.” You have to start with one meaning, and in that case you can know what you mean when you say “I am starting with one meaning of meaning.”
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
The chair you are sitting on is a realisation; Van Gogh’s painting of his chair at Arles is a representation. You can’t sit on it.
But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts.
That’s very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn’t have phenomenal properties, how do you decide which set of brain states gets labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined that everything must be treated as physics from the outset, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable things. Everything in computer science is MR—all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that “exist apart” from their realisations. So I don’t know where you are getting that from.
In particular, you have to believe this to even ask whether robots feel pain.
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about MR.
“purple is not bitter”
Colour and taste are different categories, therefore category error.
You are not treating the identity of pain with brain states as a falsifiable hypothesis.
No, I’m treating the identity of pain with the memories, thoughts, and behaviors that express pain as unfalsifiable. In other words, I loosely define pain as “the thing that makes you say ouch”. That’s how definitions work—the theory that the thing I’m sitting on is a chair is also unfalsifiable. At that point the identity of pain with brain states is in principle falsifiable: you just induce the same state in two brains and observe only one saying ouch. Obviously, there are various difficulties with that exact scheme; it’s just a general sketch of how causality can be falsified.
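Here is a minimal toy version of that sketch, with brains modeled as made-up state→behavior functions (the “c-fiber-firing” state and both brains are invented purely for illustration):

```python
from typing import Callable

# Toy model: a "brain" is a function from an induced state to behavior
# (True if it says "ouch"). Purely illustrative; not a real experiment.
Brain = Callable[[str], bool]

def identity_falsified(brain_a: Brain, brain_b: Brain, state: str) -> bool:
    """If inducing the same state yields different 'ouch' behavior in two
    brains, the identification of that state with pain is falsified."""
    return brain_a(state) != brain_b(state)

# Usage: one toy brain says ouch on the induced state, the other never does.
normal = lambda state: state == "c-fiber-firing"
silent = lambda state: False
print(identity_falsified(normal, silent, "c-fiber-firing"))  # True -> falsified
```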
There are uncontentious examples of multiply realisable things.
I don’t recall suggesting that something isn’t MR. I don’t know why you think that MR is a problem for me. Like I said, there are multiple realizations of pain the same way that there are multiple realizations of chair.
Is that supposed to be a novel theory, or a dictionary definition?
Definition, as I state right in the next sentence, and then confirm in the one after that. Is my text that unreadable?
You’re suggesting pain can’t be instantiated in robots.
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question. In the same way that wondering if some table can also be a chair is a stupid question. If you feel that you need an answer, feel free to choose arbitrarily. But then, if you think that having an answer helps you somehow, you’re probably doing something very wrong.
In the case of a simulated human brain, it might seem more natural to call those states “pain”, but then if you don’t, nobody will be able to prove you wrong.
Is that supposed to be a novel theory, or a dictionary definition?
Definition, as I state right in the next sentence
The question asked for a dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification, using falsifiable empiricism. Uncontroversially, you can also achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard definitions—stipulated, gerrymandered, tendentious—is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, and are not in fact talking about free will.
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way...you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
the redefinition manoeuvre
Definitions aren’t handed from god in stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
And how are you justifying that suggestion?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
Definitions aren’t handed from god in stone tablets
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Who are you communicating to when you use your own definitions?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question?
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain.
Says you. Why should I believe that?
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course).
Are you abandoning the position that “robot in pain” is meaningless in all cases?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”, that would be ridiculous; the problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition we have the usual classification problem, and with your definition the phrase is meaningless.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
Who are you communicating to when you use your own definitions?
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definitions to the best of my ability, and point out the problems in those. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked, suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
Remember when you offered a stupid proof that “purple is bitter” is category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
But using them proves nothing?
Yes, definitions do not generally prove statements.
I am wondering who you communicate with when you use a private language.
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be convincing. If you change the definition of pain to remove the subjective, felt aspect, then the resulting problem is easy to solve...but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other members of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements, you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Meaninglessness is not the default.
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me, how it makes sense to you.
Other members of your language community are willing to discuss things like robot pain. Does that bother you?
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
If definitions do not prove statements, you have no proof that robot pain is easy.
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases we already have X well defined as Z, and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
If you redefine pain, you are not making statements about pain in my language.
I am, at times, talking about alternative definitions
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
Meaninglessness is not the default.
Well, it should be
That can’t possibly work, as entirelyuseless has explained.
Sure, in a similar way that people discussing god or homeopathy bothers me.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
in your case the definition Z does not exist, so making up a new one is the next best thing.
The ordinary definition for pain clearly does exist, if that is what you mean.
Robot pain is of ethical concern because pain hurts.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
The ordinary definition for pain clearly does exist, if that is what you mean.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also, the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here”—has two meanings: one is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too.
another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
“highly unpleasant physical sensation caused by illness or injury.”
Of course, now I’ll say that I need “sensation” defined.
have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Obviously, anything can be of ethical concern, if you really want it to be
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
is “the concept of preference is simpler than the concept of consciousness”
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
“consciousness is generally not necessary to explain morality”, which is more of an opinion.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, now I’ll say that I need “sensation” defined.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
That’s because I have never considered “Is X a concept” to be an interesting question.
You used the word; surely you meant something by it.
At that point proper definitions become necessary.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
and also don’t want to talk about consciousness.
What?
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
I’ll need “defined” defined
In this case, a “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
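For instance, a minimal sketch of that sense of “definition” as a membership test (the attributes here are invented for illustration):

```python
# Toy illustration of "definition as a membership test". The attributes are
# made up; the point is only that a definition, in this sense, is a
# procedure for sorting objects into the category or out of it.
def is_chair(obj: dict) -> bool:
    return bool(obj.get("sittable")) and bool(obj.get("raised_seat"))

print(is_chair({"sittable": True, "raised_seat": True}))   # True
print(is_chair({"sittable": True, "raised_seat": False}))  # False, e.g. a cushion
```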
You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
You used the word; surely you meant something by it.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as in proper Scotsman?
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
and also don’t want to talk about consciousness.
What?
You keep saying it’s a broken concept.
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain?
That anything should feel like anything,
Proper as in proper Scotsman?
Proper as not circular.
Circular as in
“Everything is made of matter.
Matter is what everything is made of.”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
That anything should feel like anything,
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Circular as in
“Everything is made of matter. Matter is what everything is made of.”?
Yes, if I had actually said that. By the way, matter exists in your universe too.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If it refers to something else, then I’ll need you to paraphrase.
If you want to know what “pain” means, sit on a thumbtack.
You can say “torture is wrong”, but that has no implications about the physical world
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
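A toy sketch of what that could look like, with invented brain-state vectors and similarity reports (real approaches of this kind, e.g. representational similarity analysis, are far more involved):

```python
import numpy as np

# Toy sketch: represent brain states as vectors, collect the subject's
# reports about which experiences feel similar, and check whether distance
# in state space tracks those reports. All numbers are invented.
states = np.array([
    [1.0, 0.2, 0.1],  # stubbed toe
    [0.9, 0.3, 0.2],  # burned finger
    [0.1, 0.9, 0.8],  # tasting sugar
])
reported_similar = {(0, 1): True, (0, 2): False, (1, 2): False}

for (i, j), similar in reported_similar.items():
    dist = float(np.linalg.norm(states[i] - states[j]))
    # If the model is any good, pairs reported as similar should sit closer.
    print(f"pair {(i, j)}: distance={dist:.2f}, reported similar={similar}")
```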
Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
They only need to know about robot pain if “robot pain” is a phrase that describes something.
As I have previously pointed out, you cannot assume meaninglessness as a default.
morality, which has many of the same problems as consciousness, and is even less defensible.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
That is a start, but we can’t gather data from entities that cannot speak
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm how exactly that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
Are there classes of conscious entity?
Morality or objective morality? They are different.
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm how exactly that would work.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
but I don’t necessarily understand what it would mean for a different kind of mind.
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
It seems you are no longer ruling out a science of other minds
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
I’ve already told you what it would mean
Where exactly?
Is the first half of the conversation meaningful and the second half meaningless?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The idea that with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery, is in general a good one. Problem is that it requires good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic).
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?
“Please give an example of a subjective experience, other than consciousness, that has no physical evidence.”
All subjective experiences, including consciousness, are correlated with objective descriptions. E.g. a person who is awake can be described in ways objectively distinct from a person who is asleep. So there is always evidence for subjective experience. But that does not reduce the meaning of having a subjective experience to some objective description.
So for example “I am conscious” does not signify any objective description, but is correlated with various objective descriptions. Likewise, “I currently seem to see a blue object,” does not signify any objective description, but it is correlated with various objective descriptions.
The things are correlated. For example, every time I am awake and conscious, I have a relatively undamaged brain. So if someone else has an undamaged brain and does not appear to be sleeping, that is evidence that they are conscious.
“Meaning” refers to the fact that words are about things, and they are about whatever people want to talk about. You seem to be using the word rather differently, e.g. perhaps to refer to how you would test whether something is true, since you said that the word “pain” is meaningless applied to a robot since we have no way to test whether it feels pain. Or you have the idea that words are meaningless if they do not imply something in the “real world,” by which you understand an objective description. But since people talk about whatever they want to talk about, words can also signify subjective perceptions, and they do.
For starters, do we agree that the phrase “purple is bitter” is meaningless? Or at least that some grammatically correct strings of words can have no meaning?
“Purple is bitter” is not meaningless; it is false.
It is possible that some string of words that satisfies most or all grammatical rules is meaningless. However it is not possible that a string of words that a human says in order to convey their thoughts is meaningless; it will mean the thing they are thinking about.
Purple is a color and not a flavor. Bitter is a flavor and not a color. That strongly suggests that purple is not bitter.
What about if you intend and fail?
That would depend on how exactly you failed. If you failed to speak any words, then obviously there was no meaning, although there was an intended meaning. If you failed to pronounce the words or type them correctly or whatever, there would be a vague spectrum from a situation similar to failing to speak any words, up to speaking them and succeeding. But there would be an intended meaning in every case, even if you failed to actually mean it.
Purple is a color and not a flavor. Bitter is a flavor and not a color. That strongly suggests that purple is not bitter.
I don’t think this is valid reasoning.
Cinchona tree bark is a part of a plant and not a flavour. Bitter is a flavour and not a part of a plant. That strongly suggests that this bark is not bitter.
The problem is that you are mixing the use of “bitter” as a noun and as an adjective. So there are two meanings, bitterness, and something bitter. You need to correct for that. It is obviously true that the bark is not bitterness, which is the relevant conclusion.
The original context of this discussion is whether these things are meaningful. It should be pretty obvious that the whole discussion presupposes that they are, including your own remarks. So since this is obvious, there is no need for further discussion of whether they are true or false in particular.
In the proposition “purple is [not] bitter” it seems clear to me that “bitter” is being used adjectivally. Imagine someone with a variety of synaesthesia that makes them perceive bitterness whenever faced with something purple; then I would say that for them purple is bitter. (In much the same sense as we might say that quinine is bitter.) For most people, colour perception and taste perception are not linked in any such way and therefore purple is not bitter.
This seems reasonable to me. In any case the argument wasn’t really about whether purple is bitter, but whether the sentence “purple is bitter” has any meaning at all. In fact it obviously has at least one meaning (which you mention here) and potentially several.
Your argument is either unsound or invalid, but I’m not sure which. Of course, personal experience of subjective states does have *something* to do with detecting the same state in others.
There is no detecting going on. If you’re clever (and have too much free time), you may come up with some ways that internal human experience helps to solve that problem, but nothing significant. That’s why I used “little” instead of “nothing”.
Finally, being conscious doesn’t mean anything at all. It has no relationship to reality.
What do you mean by “reality”? If you’re an empiricist, as it looks like you are, you mean “that which influences our observations”. Now what is an “observation”? Good luck answering that question without resorting to qualia.
Does a falling rock also observe the gravitational field?
I’d have to say no here, but if you asked about plants observing light or even ice observing heat, I’d say “sure, why not”. There are various differences between what ice does, what a roomba does, and what I do; however, they are mostly quantitative, and using one word for them all should be fine.
I’d have to say no here, but if you asked about plants observing light or even ice observing heat, I’d say “sure, why not”. There are various differences between what ice does, what a roomba does, and what I do; however, they are mostly quantitative, and using one word for them all should be fine.
What are you basing this distinction on? More importantly, how is whatever you’re basing this distinction on relevant to grounding the concept of empirical reality?
Using Eliezer’s formulation of “making beliefs pay rents in anticipated experiences” may make the relevant point clearer here. Specifically, what’s an “experience”?
I would say that an object observes an event if it changes its state in response to this event. Yes, that’s a very low bar. First, gravity isn’t an event, so “observe” is an awkward word. We can instead say the rock “measures” gravity, and then we observe the results. Of course, if the gravity did change, the rock would presumably change its shape a tiny bit, which we may or may not count—that’s fine, “observation” is supposed to be on a spectrum.
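Under that (deliberately low) bar, a toy check might look like this; the Ice class and its numbers are invented for illustration:

```python
# Toy reading of "X observes an event iff X changes state in response to it".
class Ice:
    def __init__(self) -> None:
        self.temperature = -5.0

    def apply(self, event: str) -> None:
        if event == "heat":
            self.temperature += 1.0  # absorbing heat changes the state

def observes(thing: Ice, event: str) -> bool:
    before = dict(vars(thing))
    thing.apply(event)
    return vars(thing) != before  # any state change counts, however crude

print(observes(Ice(), "heat"))      # True: ice "observes" heat
print(observes(Ice(), "neutrino"))  # False: no state change, no observation
```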
Using Eliezer’s formulation of “making beliefs pay rents in anticipated experiences” may make the relevant point clearer here. Specifically, what’s an “experience”?
Experiences are brain states; beliefs are also stored in the brain. Eliezer’s advice is equally good both for you and for a roomba, regardless of which of you is supposedly conscious. It may not work for plants or ice though—I don’t think I can find anything resembling beliefs in them, and even if I could, there would be no process to update them.
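A minimal sketch of a belief “paying rent” in a roomba-like bot; the update rule and all numbers are arbitrary, chosen only to illustrate the predict–observe–update loop:

```python
# Toy sketch: the bot stores a belief (probability that the square ahead is
# blocked), uses it to predict the bump sensor, and updates it on observation.
class Roomba:
    def __init__(self) -> None:
        self.p_blocked = 0.5  # prior belief

    def predict(self) -> bool:
        return self.p_blocked > 0.5  # anticipated experience: a bump

    def update(self, bumped: bool) -> None:
        # Crude fixed-rate update toward what the sensor reported.
        self.p_blocked += 0.2 * ((1.0 if bumped else 0.0) - self.p_blocked)

bot = Roomba()
for bumped in [True, True, False, True]:
    print(f"predicted bump={bot.predict()}, observed bump={bumped}")
    bot.update(bumped)
print(f"final belief that the way is blocked: {bot.p_blocked:.2f}")
```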
I agree with much of what you say but I am not sure it implies for cousin_it’s position what you think it does.
I’m sure it’s true that, as you put it elsewhere in the thread, consciousness is “extrapolated”: calling something conscious means that it resembles an awake normal human and not a rock, a human in a coma, etc., and there is no fact of the matter as to exactly how this should be extrapolated to (say) aliens or intelligent robots.
But this falls short of saying that at best, calling something conscious equals saying something about its externally observable behaviours.
For instance: suppose technology advances enough that we can (1) make exact duplicates of human beings, which (initially) exactly match the memories, personalities, capabilities, etc., of their originals, and (2) reversibly cause total paralysis in a human being, so that their mind no longer has any ability to produce externally observable effects, and (3) destroy a human being’s capacity for conscious thought while leaving autonomic functions like breathing normal.
(We can do #2 and #3 pretty well already, apart from reversibility. I want reversibility so that we can confirm later that the person was conscious while paralysed.)
So now we take a normal human being (clearly conscious). We duplicate them (#1). We paralyse them both (#2). Then we scramble the brain of one of them (#3). Then we observe them as much as you like.
I claim these two entities have exactly the same observable behaviours, past and present, but that we can reasonably consider one of them conscious and the other not. We can verify that one of them was conscious by reversing the paralysis. Verifying that the other wasn’t depends on our confidence that mashing up most of their cerebral cortex (or whatever horrible thing we did in #3) really destroys consciousness, but this seems like a thing we could reasonably be quite confident of.
You might say that our judgement that one of these (ex-?) human beings is conscious is dependent on our ability to reverse the paralysis and check. But, given enough evidence that the induction of paralysis is harmlessly reversible, I claim we could be very confident even if we knew that after (say) a week both would be killed without the paralysis ever being reversed.
Indeed, we can always make two things seem indistinguishable, if we eliminate all of our abilities to distinguish them. The two bodies in your case could still be distinguished with an fMRI scan, or similar tool. This might not count as “behavior”, but then I never wanted “behavior” to literally mean “hand movements”.
I think you could remove that by putting the two people into magical impenetrable boxes and then randomly killing one of them, through some Schrödinger’s-cat-like process. But I wouldn’t find that very interesting either. Yes, you can hide information, but it’s not just information about consciousness you’re hiding, but also about “ability to do arithmetic” and many other things. Now, if you could remove consciousness without removing anything else, that would be very interesting.
OK, so what did you mean by “behaviour” if it includes things you can only discover with an fMRI scan? (Possible “extreme” case: you simply mean that consciousness is something that happens in the physical world and supervenes on arrangements of atoms and fields and whatnot; I don’t think many here would disagree with that.)
If the criteria for consciousness include things you can’t observe “normally” but need fMRI scans and the like for (for the avoidance of doubt, I agree that they do) then you no longer have any excuse for answering “yes” to that last question.
My point wasn’t about hiding information; it was that much of the relevant information is already hidden, which you seemed to be denying when you said consciousness is just a matter of “behaviours”. It now seems like you weren’t intending to deny that at all; but in that case I no longer understand how what you’re saying is relevant to the OP.
The word behavior doesn’t really feature much in the ongoing discussions I have. My first post was an answer to OP, not meant as a stand-alone truth. But obviously, if “consciousness” means anything, it’s a thing that happens in the brain—I’d say it’s the thing that makes complex and human-like behaviors possible.
If the criteria for consciousness include things you can’t observe “normally” <...>
Normally is the key word here. There is nothing normal about your scenario. I need an fMRI scan for it, because there is nothing else that I can observe. Compared to that, the human in a box communicating through speech is very normal and quite sufficient. Unless the human is mute or malicious. Then I might need more complex tools.
much of the relevant information is already hidden
It’s obscured, sure. But truly hiding information is hard. Speech isn’t that narrow of a window, by the way. Now, if I had to communicate with the agent in the box by sending one bit of information back and forth, that would be more of a problem.
The three examples deal with different kinds of things.
Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don’t, they should be physically stored somehow. In that sense they are the most real of the three.
Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find some subject who claims to have the skill perform slower than another who claims not to. Ultimately, “I have a skill to do X” means “I believe I’m better than most at X”, and while it is a belief as good as the previous one, it’s a little less direct.
Finally, being conscious doesn’t mean anything at all. It has no relationship to reality. At best, “X is conscious” means “X has behaviors in some sense similar to a human’s”. If a computationalist answers “no” to the first two questions, and “yes” to the last one, they’re not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That’s, by the way, similar to what compatibilists do with free will.
You say that like it’s a good thing.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside, is about subjectivity.
You won’t find penguins in the Arctic, but that doesn’t mean you get to define penguins as nonexistent, or redefine “penguin” to mean “polar bear”.
No, I’m not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.
It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
Meaning the world of exteriors? If so, is that not question begging?
Well, it’s definitely reflected in the physical structure of the brain, because you can tell whether someone is conscious with an fMRI scan.
OK. Now you have asserted it, how about justifying it?
No. I am saying you shouldn’t beg questions, and you shouldn’t confuse the evidence for X with the meaning of X.
You are collapsing a bunch of issues here. You can believe that is possible to meaningfully refer to phenomena that are not fully understood. You can believe that something exists without believing it exists dualistically. And so on.
No, meaning the material, physical world. I’m glad you agree it’s there. Frankly, I haven’t the slightest clue what “exterior” means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I’m sure you didn’t. But I’d rather not answer your other points when I have no clue about what it is that we disagree about.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”, but that doesn’t generalize well.
It’s where you are willing to look, as opposed to where you are not. You keep insisting that consciousness can only be found in the behaviour of someone else; your opponents keep pointing out that you have the option of accessing your own.
We don’t do that. We use a medical definition. “Consciousness” has a number of uses in science.
That’s hardly a definition. I think it’s you who is begging the question here.
I have no idea where you got that. I explicitly state “I say you must have both”, just a couple of posts above.
Here’s a google result for “medical definition of consciousness”. It is quite close to “brain activity”, dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?
Behaviour alone versus behaviour plus brain scans doesn’t make a relevant difference. Brain scans are still objective data about someone else. It’s still an attempt to deal with subjectivity on an objective basis.
The medical definition of consciousness is not brain activity, because there is some sort of brain activity during sleep states and even coma. The brain is not a PC.
“It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain.”
“It would be preferable” expresses wishful thinking. The word refers to subjective experience, which is subjective by definition, while you are looking at objective things instead.
No, “it’s preferable”, same as “you should”, is fine when there is a goal specified, e.g. “it’s preferable to do X, if you want Y”. Here, the goal is implicit—“not to have stupid beliefs”. Hopefully that’s a goal we all share.
By the way, “should” with implicit goals is quite common, you should be able to handle it. (Notice the second “should”. The implicit goal is now “to participate in normal human communication”.)
We can understand that the word consciousness refers to something subjective (as it obviously does) without having stupid beliefs.
Subjective is not the opposite of physical.
Indeed.
“Subjective perception,” is opposite, in the relevant way, to “objective description.”
Suppose there were two kinds of things, physical and non-physical. This would not help in any way to explain consciousness, as long as you were describing the physical and non-physical things in an objective way. So you are quite right that subjective is not the opposite of physical; physicality is utterly irrelevant to it.
The point is that the word consciousness refers to subjective perception, not to any objective description, whether physical or otherwise.
No, physical things have objective descriptions.
Can you find another subjective concept that does not have an objective description? I’m predicting that we disagree about what “objective description” means.
Yes, I can find many others. “You seem to me to be currently mistaken” does not have any objective description; it is how things seem to me. It however is correlated with various objective descriptions, such as the fact that I am arguing against you. However none of those things summarize the meaning, which is a subjective experience.
“No, physical things have objective descriptions.”
If a physical thing has a subjective experience, that experience does not have an objective description, but a subjective one.
I find myself to be conscious every day. I don’t understand what you find “unreal” about direct experience.
Here’s what I think happened.
You observed something interesting happening in your brain, you labeled it “consciousness”.
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”.
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”.
Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in a robot similar to the things happening in my brain”—it would be obvious that you won’t get a yes/no answer. They’re similar in some ways and different in others.
But if you go back to the original question, you can’t rule out that the robot is fully conscious, despite having some physical differences. The point being that translating questions about consciousness into questions about brain activity and function (in a wholesale and unguided way) isn’t superior, it’s potentially misleading.
I can rule out that the robot is conscious, because the word “conscious” has very little meaning. It’s a label of an artificial category. You can redefine “conscious” to include or exclude the robot, but that doesn’t change reality in any way. The robot is exactly as “conscious” as you are “roboticious”. You can either ask questions about brain activity and function, or you can ask no questions at all.
There’s nothing artificial about direct experience.
To whom? To most people, it indicates having a first person perspective, which is something rather general. It seems to mean little to you because of your gerrymandered definition of meaning. Going only by external signs, consciousness might just be some unimportant behavioural quirks.
The point is not to make it vacuously true that robots are conscious. The point is to use a definition of consciousness that includes its central feature: subjectivity.
Says who? I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste. The fact that having consciousness gives you that kind of access is central.
What does “not having a first person perspective” look like?
I find my definition of meaning (of statements) very natural. Do you want to offer a better one?
I think you use that word as equivalent to consciousness, not as a property that consciousness has.
All of these things have perfectly good physical representations. All of them can be done by a fairly simple bot. I don’t think that’s what you mean by consciousness.
Not if “perfectly good” means “known”.
It’s ok, it doesn’t. Why do people keep bringing up current knowledge?
Because we are trying to communicate now, but your semantic scheme requires knowledge that is only available in the future , if at all.
Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don’t think that conscious minds other than their own exist.
That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.
I’m not sure that “is the robot conscious” is really equivalent to “are the things happening in a robot similar to the things happening in my brain”. It could be that some things happening in the robot’s brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot’s brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult—we don’t have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.
I would be willing to answer “no” to the “is the robot conscious” question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer), e.g. I suspect that animals have varying degrees of consciousness, e.g. non-human apes a fairly high degree, ants a low or zero degree, etc.
I don’t see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.
Let me say it differently. There is a category in your head called “conscious entities”. Categories are formed from definitions or by picking some examples and extrapolating (or both). I say category, but it doesn’t really have to be hard and binary. I’m saying that “conscious entities” is an extrapolated category. It includes yourself, and it excludes inanimate objects. That’s something we all agree on (even “inanimate objects” may be a little shaky).
My point is that this is the whole specification of “conscious entities”. There is nothing more to help us decide which objects belong to it, besides wishful thinking. Usually we choose to include all humans or all animals. Some choose to keep themselves as the only member. Others may want to accept plants. It’s all arbitrary. You may choose to pick some precise definition based on something measurable, but that will just be you. You’ll be better off using another label for your definition.
That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer’s is conscious is not really in question—pretty much everyone on this thread has said that. It doesn’t follow that I should drop the term or a “use another label”; there is a common understanding of the term “conscious” that makes it useful even if we can’t know whether “X is conscious” is true in many cases.
There is a big gap between “difficult” and “impossible”. If a thing is “difficult to measure”, then you’re supposed to know in principle what sort of measurement you’d want to do, or what evidence you could in theory find, that proves or disproves it. If a thing is “impossible to measure”, then the thing is likely bullshit.
What understanding exactly? Besides “I’m conscious” and “rocks aren’t conscious”, what is it that you understand about consciousness?
In the case of consciousness, we are talking about subjective experience. I don’t think that the fact that we can’t measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can’t get the answer to either of those things via measurement, but I don’t think that they are bullshit questions (albeit they are not particularly useful questions).
In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right? Again, there is a difference between difficult and impossible.
Those sound like synonyms, not in any way more precise than the word “consciousness” itself.
To clarify: at the present you can’t obtain a person’s beliefs by measurement, just as at the present we have no objective test for consciousness in entities with a physiology significantly different from our own. These things are subjective but not unreal.
And yet I know that I have first person experiences and I know that I am self-aware via direct experience. Other people likewise know these things about themselves via direct experience. And it is possible to discuss these things based on that common understanding. So, there is no reason to stop using the word “consciousness”.
Did you mean “at present subjective”? Because if something is objectively measurable then it is objective. Are these things both subjective and objective? Or will we stop being conscious when we get a better understanding of the brain?
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
To clarify, consciousness is a subjective experience, or more precisely it is the ability to have (subjective) first person experiences. Beliefs are similarly “in the head of the believer”. Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Interesting questions. It seems to me that self awareness is a first person experience, so I am doubtful that you could have self awareness without the ability to have first person experiences. I don’t think that they are different words for the same thing though—I suspect that there are first-person experiences other than self awareness. I don’t see how my argument or yours depends on whether or not first-person experiences and self-awareness are the same; do you ask the questions for any particular reason, or did you just find them to be interesting questions?
Suppose, as a thought experiment, that these things become measurable tomorrow. You said that beliefs are subjective. But how can a thing be both subjective and objectively measurable? Do beliefs stop being subjective the moment measurement becomes possible?
I ask them because I wanted you to play rationalist taboo (for “consciousness”), and I’m trying to decide if you succeeded or failed. I think “self awareness” could be defined as “thoughts about self” (although I’m not sure that’s what you meant). But “first person experiences” seems to be a perfect synonym for “consciousness”. Can you try again?
It is possible that there is some objective description which is 100% correlated with a subjective experience. If there is, and we are reasonably sure that it is, we would be likely to call the objective measurement a measurement of subjective experience. And it might be that the objective thing is factually identical to the subjective experience. But “this objective description is true” will never have the same meaning as “someone is having this subjective experience,” as I explained earlier.
Note that anyone who brings up another description which is not a synonym for “consciousness,” is not explaining consciousness, but something else. Any explanation which is actually an explanation of consciousness, and not of something else, should have the same meaning as “consciousness.”
That’s the general problem with your game of “rationalist taboo.” In essence, you are saying, “These words seem capable of expressing your position. Avoid all the words that could possibly express your position, and then see what you have to say.” Sorry, but I decline to play.
I can briefly explain “banana” as “bent yellow fruit”. Each of those words has a clear meaning when separated from the others. They would be meaningful even if bananas didn’t exist. On the other hand, “first person experiences” isn’t like that. There are no “third person experiences” that I’m aware of. Likewise, the only “first person” thing is “experience”. And there can be no experiences if there is no consciousness.
There are no third person experiences that you have first person experiences of. But anyone else’s first person experiences will be third person experiences for you.
This is like saying that “thing” must be meaningless because the only things that exist are things. Obviously, if you keep generalizing, you will come to something most general. That does not mean it is meaningless. I would agree that we might use “experience” for the most general kind of subjective thing. But there are clearly more specific subjective things, such as the feeling of pain.
Wow, now you’re not just assuming that consciousness exists, but that there is more than one.
“Thing” is to some extent a grammatical placeholder. Everything is a thing, and there are no properties that every “thing” shares. I wouldn’t know how to play rationalist taboo for “thing”, but this isn’t true for most words, and your arguments that this must be true for “consciousness” or “experience” are pretty weak.
Nobody is disagreeing. If, in another context, I asked for an explanation of “pain”, saying “experience of stubbing your toe” would be fine.
I am not “assuming” that consciousness exists; I know it from direct experience. I do assume that other people have it as well, because they have many properties in common with me and I expect them to have others in common as well, such as the fact that the reason I say I am conscious is that I am in fact conscious. If other people are not conscious, they would be saying this for a different reason, and there is no reason to believe that. You can certainly imagine coming to the opposite conclusion. For example, I know a fellow who says that when he was three years old, he thought his parents were not conscious beings, because their behavior was too different from his own: e.g. they do not go to the freezer and get the ice cream, even though no one is stopping them.
This means you should know what the word “experience” means. In practice you are pretending not to know what it means.
Yes, I said “in another context”. In the current context it’s both “consciousness” and “experience” that I need explained.
You don’t know what rationalist taboo (or even regular taboo) is, do you? Maybe this will clear some things up for you: https://wiki.lesswrong.com/wiki/Rationalist_taboo
Sentences like this are exactly why I need you to play taboo.
Yes, I do know what you are talking about here.
I have already said why I will not play.
Are you sure? I don’t know how to interpret your “In practice you are pretending not to know what it means”, if you do. Pretending is how the game works.
No one can force you, if you don’t want to. But your arguments that there is something wrong with the game are weak.
Quite sure.
You should interpret it to mean what it says, namely that in practice you have been pretending not to know what it means. If pretending is how the game works, and you are playing that game, then it is not surprising that you are pretending. Nothing complicated about this.
Perhaps your objection is that I should not have said it in an accusatory manner. But the truth is that it is rude to play that game with someone who does not want to play, and I already explained that I do not, and why.
You certainly haven’t provided any refutation of my reasons for that. Once again, in essence you are saying, “describe conscious experience from a third person point of view.” But that cannot be done, even in principle. If you describe anything from a third person point of view, you are not describing a personal experience. So it would be like saying, “describe a banana, but make sure you don’t say anything that would imply the conclusion that it is a kind of fruit.” A banana really is a fruit, so any description that cannot imply that it is, is necessarily incomplete. And a pain really is a subjective feeling, so any description which does not include subjectivity or something equivalent cannot be a description of pain.
I don’t think I actually said something like that. I’m just asking you to describe “conscious experience” without the words “conscious” and “experience”. You expect that I will reject every description you could offer, but you haven’t actually tried any. If you did try a few descriptions and I did find something wrong with each of them (which is not unlikely), your arguments would look a lot more serious.
But now I can only assume that you simply can’t think of any such descriptions. You see, “I don’t want to play” is different from “I give up”. I think you’re confusing them.
All descriptions are incomplete. You just have to provide a description that matches bananas better than it matches apples or sausages. A malicious adversary can always construct some object which would match your description without really being a banana, but at some point the construction will have to be so long and bizarre and the difference so small that we can disregard it.
Again, all descriptions are incomplete. “What makes someone say ouch” is quite accurate considering its length.
There is a reason I expect that. Namely, you criticized a proposed definition on the grounds that it was “synonymous” with consciousness. But that’s exactly what it was supposed to be: we are talking about consciousness, not something else. So any definition I propose is going to be synonymous or extremely close to that; otherwise I would not propose it.
Your assumption is false. Let’s say “personal perception.” Obviously I can anticipate your criticism, just as I said above.
If your description of a banana does not suggest that it is fruit, your description will be extremely incomplete, not just a little incomplete. In the same way, if a description of consciousness does not imply that it is subjective, it will be extremely incomplete.
The point is that you are ignoring what is obviously central to the idea of pain, which is the way it feels.
Again you confirm that you don’t understand what the game taboo is (rationalist or not). “Yellow bent fruit” is not a synonym of “banana”.
My criticism is that this description obviously matches a roomba. It can definitely perceive walls (it can become aware of them through sensors) and I don’t see why this perception wouldn’t be personal (it happens completely within the roomba), although I suspect that this word might mean something special for you. Now, as I say this, I assume that you don’t consider a roomba conscious. If you do, then maybe I have no criticisms.
Is that the criticism you anticipated?
I don’t know what sort of scale of incompleteness you have. Actually, there could be an agent who can recognize bananas exactly as well as you, without actually knowing whether they grow on plants or are made in factories. A banana has many distinctive properties, growing on plants is not the most important one.
How does it feel? It feels bad, of course, but what else?
“Perception” includes subjectively noticing something, not just being affected by it. I don’t think that a roomba notices or perceives anything.
Among other things, it usually feels a bit like heat. Why do you ask?
Why do you not think that? If there is something I’m not getting about that word, try making your taboo explanation longer and more precise.
By the way, I have some problems with “subjective”. There is a meaning that I find reasonable (something similar to “different” or “secret”), and there is a meaning that exactly corresponds to consciousness (I can just replace the “subjectively” in your last post with “consciously” and lose nothing). Try not to use it either.
More specifically I want to know, of all the feelings that you are capable of, how do you recognize that the feeling that follows stubbing your toe is the one that is pain? What distinctive properties does it have?
Off topic, does it really feel like heat? I’m sweating right now, and I don’t think that’s very similar to pain. Of course, getting burned causes pain. Also, hurting yourself can produce swelling, which does feel warm, so that’s another way to explain your association.
I could say that a roomba is a mere machine, but you would probably object that this is just saying it is not conscious. Another way to describe this, in this particular context, is that the roomba’s actions do not constitute a coherent whole, and “perception” is a single coherent activity, and therefore a conscious one.
As I said, I’m not playing your game anyway, and I feel no obligation to describe what I think in your words rather than mine, especially since you know quite well what I am talking about here, even if you pretend to fail to understand.
By recognizing that it is similar to the other feelings that I have called pain. It absolutely is not by verbally describing how it feels or anything else, even if I can do so if I wish. That is true of all words: when we recognize that something is a chair or a lamp, we simply immediately note that the thing is similar to other things that we have called chairs or lamps. We do not need to come up with some verbal description, and especially some third person description, as you were fishing for there, in order to see that the thing falls into its category.
It is not just that getting burned causes pain, but intense pain also feels similar to intense heat. Sweating is not an intense case of anything, so there wouldn’t be much similarity.
I am talking about how it feels at the time, not afterwards. And the “association” does not need to be explained by anything except how it feels at the time, not by any third person description like “this swelled up afterwards.”
I would also object by saying that a human is also a “mere machine”.
I have no idea what “coherent whole” means. Is a roomba incoherent in some way?
At times I honestly don’t.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
I think you misunderstood me. Sweating is what people do when they’re hot. I’m saying that pain isn’t really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious or aware of what it is doing, and consequently it does not perceive anything, because “perceiving” includes being conscious and being aware.
In that way, humans are not mere machines, because they are conscious beings that are aware of what they are doing and they perceive things.
The human performs the unified action of “perceiving” and we know that it is unified because we experience it as a unified whole. The roomba just has each part of it moved by other parts, and we have no reason to think that these form a unified whole, since we have no reason to think it experiences anything.
In all of these cases, of course, the situation would be quite different if the roomba was conscious. Then it would also perceive what it was doing, it would not be a mere machine, and its actions would be unified.
The mind does the work of recognizing similarity for us. We don’t have to give a verbal description in order to recognize similarity, much less a third person description, as you are seeking here.
You’re wrong.
Oh, so “mere machine” is just a pure synonym of “not conscious”? Then I guess you were right about what my problem is. Taboo or not, your only argument for why a roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
Are you implying that humans do not have parts that move other parts?
No, you misunderstood my question. I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
Maybe one of us is really a sentient roomba, pretending to be human? Who knows!
No. I said the roomba “just” has that. Humans are also aware of what they are doing.
Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to “parts moved by other parts”? It’s not just that some arrangements of matter are conscious and others are not?
If there are parts, there is also a whole. A whole is not the same as parts. So if you mean by “reductionism” that there are only parts and no wholes, then reductionism is false.
If you mean by reductionism that a thing is made of its parts rather than made of its parts plus one other part, then reductionism is true: a whole is made out of its parts, not of the parts plus another part (which would be redundant and absurd). But it is made “out of” its parts; it is not the same as the parts.
No. It also means not any other thing similar to consciousness, even if not exactly consciousness.
My reason is that we have no reason to think that a roomba is conscious.
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
I have no idea what that means (a few typos maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren’t identical. But if there is some difference between “mere machine” and “unconscious machine”, you have to point it out for me.
Hypothetically, what could a reason to think that a robot is conscious look like?
“Pain” is a word and humans aren’t born knowing it. What does “no extra step” even mean? There are a few obvious steps. You have this habit of claiming something to be self-evident, when you’re clearly just confused.
No typos. I meant we know that there are two kinds of things: objective facts and subjective perceptions. As far as anyone knows, there could be a third thing intermediate between those (for example). So the robot might have something else that we don’t know about.
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
Wrong.
Why is this a probable reason? You have one data point—yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?
If a robot has hands with five fingers, that will also be evidence that it is conscious. This is how induction works; similarity in some properties is evidence of similarity in other properties.
But surely, you believe that human-like behavior is stronger evidence than a hand with five fingers. Why is that?
I perform many human behaviors because I am conscious. So the fact that the robot performs similar behaviors is inductive evidence that it performs those behaviors because it is conscious. This does not apply to the number of fingers, which is only evidence by correlation.
Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn’t just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?
I am conscious. The reason why I wrote the previous sentence is because I am conscious. As for how I know that this statement is true and I am not just a passive observer: how do you know you don’t just agree with me throughout this whole discussion, while mechanically writing statements you don’t agree with?
Yes, for the above reason.
In general, because there is no reason to believe that there is. Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
That’s just paraphrasing your previous claim.
I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I then identify “myself” with this mind, rather than the one that’s mute.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
It is not just paraphrasing. It is giving an example of a particular case where it is obviously true.
Nonsense. Google could easily add a module to Google Translate that would convert a statement into its opposite. That would not give Google Translate a mind of its own.
Nope. You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
Obviously I do not take this seriously, but I take it just as seriously as the claim that my consciousness does not cause me to say that I am conscious.
I replied with an example, namely that I say I am conscious precisely because I am conscious. I do not need to argue for this, and I will not.
No, Google could maybe add “not” before every “conscious”, in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.
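(To illustrate the gap this comment points at, here is a toy sketch in Python; the sentences and the negation rule are entirely made up. Purely lexical negation flips the target word but does nothing to keep the rest of a belief set coherent.)

```python
# Hypothetical toy example: "negate" beliefs by inserting "not" before
# the word "conscious", and doing nothing else.
beliefs = [
    "I am conscious",
    "I feel pain",
    "pain is a conscious experience",
]

def naive_negate(sentence: str) -> str:
    # Purely lexical: only sentences containing "conscious" are changed.
    return sentence.replace("conscious", "not conscious")

for s in beliefs:
    print(naive_negate(s))
# I am not conscious
# I feel pain
# pain is a not conscious experience
```

The second belief is left untouched, yet on the ordinary meaning of “pain” it still entails the original first claim, and the third output isn’t even grammatical. Making the whole set consistent would require revising beliefs that never mention the word, which is the hard part.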
What is “you” in this sentence? The mute mind identifies with the mute mind, and the translation process identifies with the translation process.
There are possible reasons for saying you are conscious, other than being conscious. A tape recorder can also say it is conscious. Saying something doesn’t make it true.
Yes. I have pointed this out myself. This does not suggest in any way that I have such a reason, other than being conscious.
Exactly. This is why tests like “does it say it is conscious?” or any other third person test are not valid. You can only notice that you yourself are conscious. Only a first person test is valid.
Exactly, and your calling into question whether the reason I say I am conscious is that I am actually conscious does not make it actually questionable. It is not.
What the hell does “not questionable” mean?
Let’s try another situation. Imagine two people in sealed rooms. You press a button and both of them scream in pain. However, you know that only the first person is really suffering, while the second one is pretending and the button actually gives him pleasure. The two rooms have the same reaction to pressing the button, but the moral value of pressing the button is different. If you propose an AI that ignores all such differences in principle, and assigns moral value only based on external behavior without figuring out the nature of pain/pleasure/other qualia, then I won’t invest in your AI because it will likely lead to horror.
Hence the title “steelmanning the Chinese room argument”. To have any shot at FAI, we need to figure out morality the hard way. Playing rationalist taboo isn’t good enough. The hope of reducing all morally relevant properties (not just consciousness) to outward behavior is just that—a hope. You have zero arguments why it’s true, and the post gives several arguments why it’s false. Don’t bet the world on it.
Let’s pause right there. How do you know it? Obviously, you know it by observing evidence for past differences in behavior. This, of course, includes being told by a third party that the rooms are different and other forms of indirect observations.
If the AI has observed evidence for the difference between the rooms, then it will take it into account. If the AI has not observed any difference, then it will not. The word “ignore” is completely inappropriate here. You can’t ignore something you can’t know. Its usage here suggests that you expect there is some type of evidence that you would accept, but the AI wouldn’t. Is that true? Maybe you expect the AI to have no long term memory? Or maybe you think it wouldn’t trust what people tell it?
You assume that all my knowledge about humans comes from observing their behavior. That’s not true. I know that I have certain internal experiences, and that other people are biologically similar to me, so they are likely to also have such experiences. That would still be true even if the experience was never described in words, or was impossible to describe in words, or if words didn’t exist.
You are right that communicating such knowledge to an AI is hard. But we must find a way.
You may know about being human, but how does that help you with the problem you suggested? You may know that some people can fake screams of pain, but as long as you don’t know which of the two people is really in pain, the moral action is to treat them both the same. What else can you do? Guess?
The knowledge that “only the first person is really suffering” has very little to do with your internal experience, it comes entirely from real observation and it is completely sufficient to choose the moral action.
You said:
I’m trying to show that’s not good enough. Seeing red isn’t the same as claiming to see red, feeling pain isn’t the same as claiming to feel pain, etc. There are morally relevant facts about agents that aren’t reducible to their behavior. Each behavior can arise from multiple internal experiences, some preferable to others. Humans can sometimes infer each other’s experiences by similarity, but that doesn’t work for all possible agents (including optimized uploads etc) that are built differently from humans. FAI needs to make such judgments in general, so it will need to understand how internal experience works in general. Otherwise we might get a Disneyland with no children, or with suffering children claiming to be happy. That’s the point of the post.
You could try to patch the problem by making the AI create only agents that aren’t too different from biological humans, for which the problem of suffering could be roughly solved by looking at neurons or something. But that leaves the door open to accidental astronomical suffering in other kinds of agents, so I wouldn’t accept that solution. We need to figure out internal experience the hard way.
A record player looping the words “I see red” is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color, and if that’s red, plays the same “I see red” sound, is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.
You may feel that pain is special, and that if we recognize a robot which says “ouch” when pushed as feeling pain, that would be in some sense bad. But it wouldn’t. We already recognize that different agents can have equally valid experiences of pain that aren’t equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution.
I don’t see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say “ouch” when pushed feels pain. Can you clarify that inference?
I don’t see anything magical about consciousness—it is something that is presumably nearly universally held by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don’t as of yet have an objective metric for consciousness in others does not make it magical.
No, I’m saying that “feels pain” is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that “feels pain” is very different from “deserves human rights”.
No one has suggested any explanation for it at all. And I do use “magical” in a loose sense.
So what do pain killers do? Nothing?
Move a human from one internal state to another, one that they prefer. “Preference” is not without its own complications, but it’s a lot more general than “pain”.
To be clear, the concept of pain, when applied to humans, mammals, and possibly most animals, can be meaningful. It’s only a problem when we ask whether robots feel pain.
I’m with EntirelyUseless. You seem to have taken the (real enough) issue of not knowing how to tell whether a robot feels pain, and turned it into a problem with the word “pain”.
There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, “purple is bitter”, the first way is clearly more appropriate. If we look at “robot feels pain”, it’s hard for me to tell which way I prefer.
I don’t think you have established any problem of meaning, so the question of which kind of problem it is doesn’t arise.
Here is my claim that “robot feels pain” is a meaningless statement. More generally, a question is meaningless if an answer to it transfers no information about the real world. I can answer “is purple bitter” either way, and that would tell you nothing about the color purple. Likewise, I could answer “does this robot feel pain” and that would tell you nothing about the robot or what you should do with it. At best, a “yes” would mean that the robot can detect pressure or damage, and then say “ouch” or run away. But that’s clearly not the kind of pain we’re talking about.
Since you are equating reality with objectivity, you are simply declaring statements about subjectivity meaningless by fiat.
That’s because it is a category error.
Of course it tells me what I should do. It’s ethically relevant if a robot feels pain. If it feels pain when damaged, I should not damage it.
How do you know? You are assuming that no robot can have a real subjective sensation of pain, and you have no way of knowing that one way or the other, and your arguments are question-begging and inconsistent. (If “robots do (not) feel pain” is meaningless, as you sometimes say, it cannot also be false, as you sometimes also say.)
I’m claiming that all subjective experiences have objective descriptions. Please give an example of a subjective experience, other than consciousness, that has no physical evidence. Obviously, I will try to argue that either there is something objective you missed, or that the subjective experience is as poorly defined as consciousness.
But “robot pain” isn’t? How did you come to those conclusions?
That’s not how this works. Rats feel pain without a doubt, but we destroy them quite freely. Whether you will damage the robot is decided by many factors. E.g. if there is some benefit to the damage, if the robot will scream out in pain, or if it’s likely to damage you in return. The robot’s subjective experience of pain only matters if you decide that it matters—this is true for all categories, no matter how artificial.
Are you asking about the “at best” part? Because the rest of that sentence seems quite mundane. Here “at best” is about the limits of my own imagination. You’re welcome to suggest something better.
That’s not contradictory, even if slightly inconsistent. “is purple bitter” is a meaningless question, but the answer “no” is clearly a lot more appropriate than “yes”. The line between falsehood and nonsense is quite blurry. I think we can freely call all nonsense statements false, without any negative consequences.
That’s not how it works either. You can’t infer zero moral relevance of some factor by noting that other factors can countervail.
I’m not morally omniscient. The robot’s experience of pain matters if it features in some scheme of ideal moral reasoning. To put it another way, you just proved that nothing is morally relevant, if you proved anything at all.
Well, you do seem to have a subjective intuition that robots will never feel pain. Others intuit differently. What happened to all the science stuff?
Gosh, I really don’t want to start talking about morality now. But I have to point out that the “bitterness of purple” can also matter, if it features in some scheme of ideal moral reasoning. At least if you accept that this moral reasoning could require arbitrary concepts and not just ones grounded in reality.
No, I ran a deterministic procedure in my brain, called “is X well defined”, on “robot pain”, and it returned “no”. It’s only subjective in the sense that mine is different from yours, if you have such a procedure at all. The procedure, by the way, works by searching for alternative definitions of things, such that the given concept is neither trivial nor stupid. Unfortunately, failure to find such definitions does not produce a proof of non-existence, so I’m quite open to the idea that I missed something, it’s just that you inspire little confidence.
I did not mean to imply that ideal moral reasoning is weird and unguessable, only that you should not take imperfect moral reasoning (whose?) to be the last word. The idea that deliberately causing pain is wrong is not contentious, and you don’t actually have an argument against it.
That’s the sense that matters.
That’s not a very interesting sense. Is height also subjective, since we are not equally tall? This sense is also very far from the magical “subjective experience” you’ve used. I guess the problematic word in that phrase is “experience”, not “subjective”?
Height is not a subjective judgement because it is not a judgement. If judgements are going to vary, that matters, because then who knows what the truth is?
I would say that almost none have descriptions, where description means description. But it sounds as though you might actually be talking about physical correlates.
I can’t make much sense of that, since all subjective experiences occur within consciousness.
I don’t know why you think I am denying that changes in consciousness have correlations in physical activity.
I am pointing out that we cannot determine much about conscious subjective states from external physical evidence, because we don’t know how to work back from the one to the other. We can’t recover the richness of conscious experience from externals. But we know it is there, in ourselves at least, because being conscious means having access to your own conscious experience. You are putting the blame on consciousness itself, saying it is a nothingy thing, when the problem is your techniques.
The requirement that everything be rooted in (external) reality in order to be meaningful is unreasonable because, in cases like this, it requires you to have a sort of omniscience before you can talk at all. (It’s fine to define temperature as what thermometers measure once you have accurate thermometers.)
You see, I’m proposing the radical new view that the world is made of atoms and other “stuff”, and that most words refer to some configurations of this stuff. In this view “pain” doesn’t just correlate with some brain activity, it is that brain activity. The brain activity of pain is an objective fact, and if you were to describe that objective fact, you would get an objective description. In this view, the existence of human pain is as real as the existence of chairs. But the question “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
I’m pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as “ability to process external events” or “ability to generate thoughts” or “the process that makes some people say they’re conscious”, finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions; you call them correlates, which suggests that there could be a consciousness that satisfies none of them. I’m pointing out that even if you had complete knowledge about everything going on in a particular brain, you still wouldn’t be able to tell if it is conscious, because your concept of consciousness is broken.
It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter, and also without denying that it eventually does. Treating pain semantically as some specific brain activity buys you nothing in terms of the ability to communicate and understand when you don’t know which precise kind (which you don’t). If Purple and Bitter are both Brain Activity Not Otherwise Specified, they are the same. If you can solve the mind-body problem, then you will be in the position to specify the different kinds of brain activity they are. But you can also distinguish them, here and now, using the subjectively obvious difference. And without committing yourself to evil dualism.
I have never claimed otherwise. In fact, there is literally nothing that I have exact description of, in terms of matter—neither pain nor chairs. But you have to know something. I know that “chair is what I sit on” and from that there is a natural way to derive many statements about chairs. I know that “gravity is what makes things fall down”, and from that there is a fairly straightforward way to the current modern understanding of gravity. There is nothing that you know about consciousness, from which you can derive a more accurate and more material description.
It buys me the ability to look at “do robots feel pain” and see that it’s a stupid question.
What evil dualism?
How do you know? And what of things like Global Workspace Theory (https://en.wikipedia.org/wiki/Global_Workspace_Theory)?
It doesn’t seem to have given you the ability to prove that it is a stupid question.
Well, for one, you have been unwilling to share any such knowledge. Is it a secret, perhaps?
I see a model that claims to reproduce some of the behaviors of the human mind. Why is that relevant? Where are your subjective experiences in it?
Also, to clarify, when I say “you know nothing”, I’m not asking for some complex model or theory, I’m asking for the starting point from which those models and theories were constructed.
Proof is a high bar, and I don’t know how to reach it. You could teach me by showing a proof, for example, that “is purple bitter” is a stupid question. Although I suspect that I would find your proof circular.
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
I was responding to your claim that “there is nothing that you know about consciousness, from which you can derive a more accurate and more material description.”. This has been done, so that claim was false. You have shifted the ground.
Purple is a colour, bitter is taste, therefore category error.
Then why be so sure about things? Why not say “dunno” to “can robots feel pain?”.
While GWT is a model, it’s not a model of the consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it. I ask you if it has subjective experiences, because that seems to be the most important aspect of consciousness to you. If you can’t find them in this model, then the model is on my side, not yours.
That’s ridiculous. Grapefruit is a fruit, bitter is taste, but somehow “grapefruit is bitter” is true and not a category error.
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple. Maybe we mean different things when we say “proof”?
That’s still an irrelevant objection. The issue is whether the concept of consciousness can be built on and refined, or whether it should be abandoned. GWT shows that it can be built on, and it is unreasonable to demand perfection.
Is that worse than saying you know things you don’t know?
Sometimes different people use the same words to mean different things. I deduce that GWT does not build on consciousness as you understand it, because it doesn’t have the most important feature to you. It builds on consciousness as I understand it. How is that irrelevant?
You mean, is saying “dunno” to everything worse than saying something is true without having absolute 100% confidence? Yes. What kind of question is that?
Also, why did you quote my “category error” response? This doesn’t answer that at all.
If we assume that the sweet spot is somewhere between 0% and 100%, are you sure you are saying “dunno” enough?
Quite sure. How about you?
And, again, what sort of question is that? What response did you expect?
Why is the question “are tables also chairs” not meaningful? Structured knowledge databases like Wikidata have to answer that question.
Imagine that a country has a general tariff for furniture and there’s a tariff exemption for chairs. One clever businessman who sells tables starts to say that his tables are chairs. In that case, the question can become important enough that a large sum of money is spent on a legal process to answer it.
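(A minimal sketch of how a structured database ends up committing to an answer; the class names and edges below are invented for illustration. Real Wikidata uses numeric item IDs and the “subclass of” property, P279, but the shape of the problem is the same.)

```python
# Toy "subclass of" graph, loosely in the spirit of Wikidata's P279.
# All edges here are made up; the contested edge is the one for "stool".
SUBCLASS_OF = {
    "armchair": ["chair"],
    "stool": ["seat"],          # arbiter's choice: not listed under "chair"
    "chair": ["seat"],
    "bench": ["seat"],
    "coffee table": ["table"],
    "table": ["furniture"],
    "seat": ["furniture"],
}

def is_subclass_of(cls: str, ancestor: str) -> bool:
    # Transitive check: walk all SUBCLASS_OF edges upward (graph is acyclic).
    stack = list(SUBCLASS_OF.get(cls, []))
    while stack:
        parent = stack.pop()
        if parent == ancestor:
            return True
        stack.extend(SUBCLASS_OF.get(parent, []))
    return False

print(is_subclass_of("table", "chair"))        # False
print(is_subclass_of("armchair", "furniture")) # True
```

Whichever way the tariff question is decided, a database like this returns a definite yes or no; the contested part is which edges the editors (or the court) choose to write down.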
This seems like a good comment to illustrate, once again, your abuse of the idea of meaning.
There are two ways to understand this claim:
1) Most words refer to things which happen also to be configurations of atoms and stuff.
2) Most words mean certain configurations of atoms.
The first interpretation would be fairly sensible. In practice you are adopting the second interpretation. This second interpretation is utterly false.
Consider the word “chair.” Does the word chair mean a configuration of atoms that has a particular shape that we happen to consider chairlike?
Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms, but was a continuous substance without any boundaries in it. Would you suddenly say that it was not a chair? Not at all. You would say “this chair is not made of atoms.” This proves conclusively that the meaning of the word chair has nothing whatsoever to do with “a configuration of atoms.” A chair is in fact a configuration of atoms; but this is a description of a thing, not a description of a word.
This could be true, if you mean this as a factual statement. It is utterly false, if you mean it as an explanation of the word “pain,” which refers to a certain subjective experience. The word “pain” is not about brain activity in the same way that the word “chair” is not about atoms, as explained above.
I would just note that “are tables also chairs” has a definite answer, and is quite meaningful.
I would say that being a chair (according to the meaning of the word) is correlated with being made of atoms. It may be perfectly correlated in fact; there may be no chair which is not made of atoms, and it may be factually impossible to find or make a chair which is not. But this is a matter for empirical investigation; it is not a matter of the meaning of the word. The meaning of the word is quite open to the possibility that there is a chair not made of atoms. In the same way, the meaning of the word “consciousness” refers to a subjective experience, not to any objective description, and consequently in principle the meaning of the word is open to application to a consciousness which does not satisfy any particular objective description, as long as the subjective experience is present.
I explicitly added “other stuff” to my sentence to avoid this sort of argument. I don’t want or need to be tied to current understanding of physics here.
But even if I had only said “atoms”, this would not be a problem. After seeing a chair that I previously thought was impossible, I can update what I mean by “chair”. In the same but more mundane way, I can go to a chair expo, see a radical new design of chair, and update my category as well. The meaning of “chair” does not come down from the sky fully formed; it is constructed by me.
I want to see that.
Then you should admit that words like “chair” or “consciousness” do not have anything about physics in their meaning.
Tables are not chairs.
I was hoping to see the reasoning behind that. Where does the answer come from? Obviously, you chose it arbitrarily.
For one thing (not the only thing), chairs are things that are normally used for sitting. Tables are not normally used for sitting, so they are not chairs. Nothing arbitrary about that reasoning.
Where do those definitions come from? Do you know what “arbitrary” means? By the way, I have chairs that I have never sat on, and there are tables I’ve sat on quite a bit. What is “normally”?
The meaning of words comes from people’s usage (which is precisely why words do not mean anything like what you think they do.)
Yes.
The vast majority of tables are rarely or never sat on. The table in my house has never been sat on. The vast majority of chairs are frequently sat on, like the ones in my house. It may not be the only normal thing, but certainly what happens in the vast majority of cases is normal.
Also, I said “for one thing.” Even if people normally sat on tables, they would not be chairs, because they do not have the appropriate structure, just as benches are not chairs.
Why do people use the words that way?
Also, I’d point out that what I mean by “chair” is not equivalent to people’s usage. You could call it “reverse engineered” from people’s usage. There are some differences. Do you know where those come from?
Obviously I don’t even know how most people use those words—I only know about my acquaintances and people on TV, I could be living in a bubble, I could be using many words wrong.
Stools are chairs, but benches are just wide stools. So if I have a small table (such as a coffee table), and use it for sitting, it’s not a bench, it’s a stool and therefore a chair?
In case it’s not obvious what I’m doing, I intend to ask you these stupid questions until you realize that they are stupid questions, that they don’t matter and that the correct way to answer them is to pull answers out of your ass (i.e. arbitrarily).
Roughly speaking, because if one performed factor analysis on their life experiences, they would have factors more or less corresponding to the words they use.
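(A minimal sketch of what that claim would look like operationally, on synthetic data. The feature names and numbers are invented; this only shows that factor analysis can recover a latent “word-shaped” factor when one exists by construction, not that real life experiences behave this way.)

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200

# One latent factor ("chair-ness"), observed through two noisy features
# plus one column of unrelated noise.
chairness = rng.normal(size=(n, 1))
features = np.hstack([
    chairness * 0.9 + rng.normal(scale=0.1, size=(n, 1)),  # "can sit on it"
    chairness * 0.8 + rng.normal(scale=0.2, size=(n, 1)),  # "has legs"
    rng.normal(size=(n, 1)),                               # unrelated noise
])

fa = FactorAnalysis(n_components=1).fit(features)
print(fa.components_.round(2))
# The recovered factor loads heavily on the first two features and nearly
# ignores the noise column, i.e. it lines up with the latent "word".
```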
Yes, largely from your own experience of the usage contexts of the word chair, which as you say could be somewhat different from the overall usage patterns, although it is unlikely that there are large differences.
No. As I said before, “for one thing.” There are still reasons why a coffee table would not suddenly become a stool, even if in fact you use it for that.
That’s incorrect, so I won’t realize that no matter how many such questions you ask.
That said, it is true that we extend words to additional things when we think the new things are similar enough to the old things.
The problem of “consciousness” is that we have no idea how similar the new thing is to the old thing, no matter how many objective descriptions we come up with for the new thing. That is: the problem is that we have no idea whether the robot is conscious or not, no matter what objective facts we know about the robot. This does not mean that we can arbitrarily decide to say “let’s extend the word conscious to the robot.” It’s like this situation: there is an object behind a screen, and you are never allowed to look behind the screen. Should we call the object a “chair” or not?
The fact that language is both vague and extendible does not suddenly entitle you to say that we can say that an object behind a screen is either “a chair” or “not a chair” without first looking behind the screen. And in the case under discussion, no one yet knows a way to look behind the screen, and possibly there is no such way.
What makes you believe that? Ideally we’d want something like this to be true, but assuming that it is true seems a bit naive. There are also some serious technical problems with the idea (how do you quantify experiences? what do you do when different people have different experiences but have to use the same words? etc).
I don’t think you know what “arbitrary” means. It does not mean “completely random”. In a deterministic world everything has some explanation. It’s just that sometimes the explanations are long, bizarre and kind of stupid, if you look at them.
Likewise, we cannot say whether some new object is a chair only by knowing objective facts about the object. We also need to know what the word “chair” refers to. And in the case that our definition of “chair” doesn’t help us, we’re going to have to extend it in some arbitrary way.
Replace “you are never allowed” with “it is impossible”. Then I will suggest that the object does not exist.
If nothing like that were true, words would actually be arbitrary, as you suppose. Then if we looked for reasonable boundaries, they would fall in random places. For example, it might turn out that the word “chair” refers in some cases to physical objects that people sit on, and in other cases to tiny underground animals. Words don’t work like this, which gives me good reason to think that something like this is true. It is not “assuming” anything.
My point is that the meaning of words is not long, bizarre, and stupid. The meanings of words are actually quite reasonable. Also, you are mistaken about the meaning of “arbitrary”. It does in fact mean not having a reason; you are just saying that sometimes “this doesn’t have a reason” is shorthand for saying that it doesn’t have a good reason. But even understood like this, the meaning of words is not arbitrary.
We will have to extend it, but it will not have to be in an arbitrary way. There may be a good reason why we extend it the way that we do, not a stupid reason.
Then your suggestion is false; even if there is a screen that you cannot go behind, it would not mean that there is nothing inside it. You cannot look beyond the event horizon of the visible universe, but there are things beyond it.
“Chair” happens not to refer to animals, although it can mean “chairman”. “Stool” can refer to several things, including poop. Finally, “shrew” refers both to a tiny animal and an annoying woman. Words do work like this. Surely there are some bizarre historical reasons why the word got those two meanings. But you have to admit that these reasons have very little to do with the properties of small animals and annoying women.
I’m not saying that there are no forces working to simplify language. But there is a very large gap between that and “factor analysis on life experiences”.
“Arbitrary” literally means “decided by an arbiter”, as opposed to “decided by the rule of law”, i.e. there was a question that “the law” couldn’t answer directly, and an arbiter had to decide “arbitrarily”. It doesn’t mean that the arbiter flipped a coin.
Going back to chairs and tables, you can always find some excuse why a coffee table I sit on isn’t a chair, and I can always find some excuse why it should be (I mean, how the hell is http://www.ikea.com/us/en/catalog/products/20299829/ not a stool?). We could live perfectly well in a world where my excuses are right and yours are wrong. The reasons we don’t are long, bizarre, and stupid. That falls under “arbitrary” perfectly well.
I’d rather not bring modern physics into this, but I have to point out, that suggesting those things don’t exist will cause no problems to anyone. At worst, this suggestion would lose to another idea through Occam’s razor.
Also, cases where a “single” word has two meanings are cases of two words. They are not examples of words that have arbitrary meanings.
You first said that if there was nothing like factor analysis, some words would have two unrelated meanings; then I pointed out that lots of words have two unrelated meanings; and now you say that one word with two meanings is two words (by definition, I assume?), contradicting your own claim we started from. Do you see how bad this looks?
Sure, there are words that have different meanings and different origins, that, thanks to some arbitrary modifications, end up sounding the same. There is an argument to discount those. But lots of words do have the same origin and the new meaning is a direct modification of the old one.
You misunderstood. The point is that if there was not some common meaning, the applications of a word would be random. This does not happen in any of the cases we have discussed, and two entirely unrelated usages are cases of two words.
This is true, and there is nothing stupid or arbitrary about this way of getting a secondary meaning.
I have never denied that the ways different people use the same words are similar. This however does nothing to support your “factor analysis” theory, nor does it have anything to do with words that have multiple unrelated meanings.
This is a claim with no justification. The whole “one word is two words” formulation is inherently bizarre. Of course, saying that “committee chair” and “armchair” are both “chairs” doesn’t mean the two things are actually similar. Likewise, putting both “armchair” and “stool” under one label does not reduce their differences, and putting “stool” and “coffee table” in different categories does not reduce their similarities.
Sure it does. People use words in similar ways because their lives have similar factors.
Technically, there are no such words. As I said, these are multiple words that use similar spellings.
Consider these two statements:
1) A committee chair and an armchair are both “chairs.”
2) A committee chair and an armchair are both chairs.
The first statement is true, and simply says that both a committee chair and an armchair can be named with the sound “chair”.
The second statement, of course, is utterly false, because there is no meaning of “chairs” that makes it true. And that is because there is not a word that has both of those meanings; there are two words which are spelled and spoken alike.
In fact, using different names adds a difference: the fact that the things are named differently. Still, overall you are more right than wrong about this, even though you have the tendency to ignore the real reasons for names in favor of appearances, as when you say that “pain” means “what makes someone say ouch.” Obviously, if someone says “ouch” because he wishes to deceive you that he is feeling pain, pain will not be the wish to deceive someone that he is feeling pain. Pain is a subjective feeling; and in a similar way, a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
No, people use words in similar ways, because they want to communicate with each other. And because word meanings are usually inherited rather than constructed. It’s not false that the factors are usually similar, but not all true statements follow from one another. Some people with very different factors may use words similarly and others with similar factors will eventually use them differently.
Again, nobody thinks that the two things are similar or share properties, but that’s exactly what you asked for. If you want a milder example, I can offer “computer”, which can refer to an electronic device or to a human who does arithmetic (old usage). The two meanings are still very different, but they do share a property (they both compute), and it’s easy to see that a sentence “I had computers calculate this solution” is natural and could refer to either (or both). At the same time, using two different words for them (e.g., let’s call humans who compute “analysts”) would also be natural. The reasons we don’t use two words have very little to do with the properties of humans or electronic devices.
Etymology is not meaning.
That’s not up to you; you made the argument that if there is a screen that you cannot look behind, there is nothing behind it. That argument is false.
The suggestion will be false, whether or not it causes problems for anyone.
Exactly like suggesting that other people’s conscious experiences do not exist, since this would mean that the reason for your own talk about your own experiences differs from the reason for other people’s talk about their experiences. There is no reason to believe in such a difference.
It would be, if word meanings weren’t assigned in a stupid way. And it does usually help to understand the word.
By “problem” I meant “contradiction”. Contradictions are how we establish what is true and what is false.
Maybe you didn’t want to quote the “at worst” part? Because now I almost agree with you.
Wrong. There’s nothing stupid about this process; on the contrary, it would be stupid to attempt to make meaning exactly correspond with etymology.
They are one way, not the only way. Your positing that something doesn’t exist because you cannot access it is absurd, contradiction or not.
That’s a bold claim. God forbid words have consistent meanings over time!
Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent? You said it means “without any reason”. The obvious problem is that almost everything has some reasons, including whims of small children, delusions of the insane and results of fair coin flips. I suggest that if the word meant “without good reason”, the word would be more useful.
We probably do not disagree about what it means, but we disagree about what we are saying it means. I do say it means without any reason, although we can take it more specifically as “without any reason of the kind we are currently thinking about.”
If we take it as I suggested, it would be possible in some cases to mean “without good reason”, namely without a reason of the kind we are currently thinking about, namely a good one.
In general, this topic came up because you were asserting that questions like “are tables also chairs” are stupid and only have arbitrary answers. If arbitrary means that there cannot be a good reason, then you are mistaken, because we have good reason for saying that tables are not chairs, and the stupidity would only be in saying that they are chairs, not in saying that they are not.
In regard to the issue of consciousness, the question is indeed a useless distraction. It is true that words like “pain” or even “consciousness” itself are vague, as are all words, and we exercise judgement when we extend them to new cases. That does not mean there is never a good reason to extend them. But more importantly, when we consider whether to extend “chair” to a new case, we can at least see what the thing looks like. In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain. And so the case is quite different from the case of the chair: as I said before, it is like asking if an unknown object behind a screen is a chair or not. Unknown, but definitely not arbitrary.
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
Here’s an example of weird reasons. How can shape not determine the difference? If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects? IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn’t? Now, one could say that a stool can be both a chair and a table, and I think that’s what IKEA does, but then you’ve already claimed this to be impossible.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem. There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
“Properties of the objects being classified” are much more extensive than you realize. For example, it is a property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
The intention of the one who makes a chair is relevant, but not necessarily completely determining. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. In other words, being made as a table, it is physically unsuitable to be used as a seat. And consequently if it did collapse, it would be quite correct to say, “This collapsed because you were using it as a stool even though it is not one.”
That said, I already said that the intention of the makers is not 100% determining.
That’s not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a “chair.” In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it, is not an assumption. If it turned out that I did not have a brain, I would not say, “I must have been wrong about suffering pain.” I would say “My pain does not depend on a brain.” I pointed out your error in this matter several times earlier—the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states—we are talking about the similarity of two feelings. So it does not matter if the robot’s brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar, even if they depended on radically different physical objects like the moon and Mt. Everest.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test if a function f(X,Y) depends not only on X, but also on Y, we have to find a point where f(X1, Y1) is not equal to f(X1, Y2). Are you saying that sometimes intention matters, just not for chairs? If not, I can only assume that intention doesn’t determine anything and only shape is important.
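To make that test concrete, here is a minimal sketch (the classify function and its inputs are hypothetical, purely for illustration):

    # Hypothetical toy classifier: does the label depend on intention, or only on shape?
    def classify(shape: str, intention: str) -> str:
        return "chair" if shape == "chair-shaped" else "table"

    # Hold shape fixed (X1), vary intention (Y1 vs Y2):
    same_label = (classify("chair-shaped", "made for sitting")
                  == classify("chair-shaped", "made as a side table"))
    print(same_label)  # True for this toy classifier
    # If no choice of shape ever makes this False, the function does not depend
    # on intention, and intention has no predictive power of its own.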
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing. Unfortunately for you, you do have a brain.
Are you going to feel the robot’s feeling and compare?
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Unquestionably, it can be meaningfully extended to robots. You simply mean the same thing that you mean in the regular case. The only question is whether there is any feeling there, not if “feeling” has a meaning, since we already admitted that it does have a meaning.
The possibility of falsification is a good thing for a physical theory. It is a very bad thing for a theory of the meaning of a word. As you already admitted, the fact that the pieces of furniture we normally sit on are called chairs is not subject to falsification, because that is just what is meant by “chair.” But a physical theory of a chair, e.g. “chairs are made of atoms,” is falsifiable, since someone could examine a chair and discover that it was not made of atoms. He would not then say, “We have discovered that ‘chair’ meant something different from what we thought.” He would say, “We knew what ‘chair’ meant, and that is unchanged, but we have learned something new about the physical constitution of chairs.”
In the same way, I am referring to certain feelings when I talk about “pain.” The fact that the word pain refers to those feelings cannot be falsified, because it is just what that word means. But whether pain depends on a brain activity is a falsifiable physical theory; it has nothing to do with the meaning of the word “pain.”
Assuming that I do, that is fortunate, not unfortunate. But as I was saying, neither you nor I know that I do, since neither of us has seen the inside of my head.
No. The question is not whether the robot has a feeling that feels to me like my feeling of pain; the question is whether the robot has a feeling that feels to the robot the same way that my feeling feels to me. And since this has two subjects in it, there is no subject that can feel them both and compare them. And this is just how it is, whether you like it or not, and this is what “pain” refers to, whether you like it or not.
Can you actually support your claim that intention matters? To clarify, I’m suggesting that intention merely correlates with shape, but has no predictive power on its own.
It’s somewhat complicated. “Experiences are brain states” is to an extent a theory. “Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition. Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
Artificial things are made for a purpose, and being made for a purpose is part of why they are called what they are called. This is an obvious fact about how these words are used and does not need additional support.
If you mean pain is the conscious state that follows in that situation, yes; if you mean the third person state that follows, no.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason. Definitions shouldn’t be falsifiable, and are not physical theories.
No. The stars outside the event horizon of the visible universe are similar to the stars that we can see, but there is no way to compare them.
One can however ask the question, “Are the stars similar?” and one can answer yes or no. In the same way we can ask if the robot feels like we do and we can say yes or no. But there is no access to the answer here, just as there is no access in the case of the stars. That has nothing to do with the fact that either they are similar, or they are not, both in the case of the robot, and in the case of the stars.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I’m asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you’re supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole “does not need additional support” thing inspires no confidence.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
I hate bringing up modern physics, it has limited relevance here. Maybe they’ll figure out faster than light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I’d love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
It is causal, but not infallible.
That’s your problem. Everyone else will still call it “the sun,” and when you say “the sun didn’t rise this morning,” your statement will still be false.
Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.
Ok, do you have any arguments to support that claim?
That may depend on the specific circumstances of the discovery. Also, different people can use the same words in different ways.
Arguments like what?
As I said, this is how people use the words.
Like yours, for example.
What words? The word “causal”? I’m asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
I understand the difference, and I have no difficulties here. I said it was causal, not merely correlative.
Ok, do you have any arguments to support that it is causal?
As I said, this is how these words work, that is words like “chair” and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
I am not talking about evidence, but about meaning; when we say, “this is a chair,” part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting.
I don’t know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing; for example, if you made something in the shape of a hammer and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly by more closely matching the meaning of “chair.”
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
In particular, consider my answer to your next question, because it is basically the same thing again.
There is no guarantee of this, because the word “chair” is vague. But it is true that there would be more reason to call the second rock a chair—that is, the meaning of “chair” would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation.
In general, no, because the word “chair” does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on.
If you are not ignorant of how the word is used, you do have to involve the intention of the maker.
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer. Here I’m only asking you for a black and white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they’re conceding.
What exactly do you mean by “vague”? The word “chair” refers to the category of chairs. Is the category itself “vague”?
I have been telling you from the beginning that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair. Apparently one that you have more knowledge of than I. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff. By the way, this thread started here, where you explicitly started talking about words and meanings, so that’s what we’re talking about.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is “in some cases yes, in some cases no, depending on the particular circumstances.”
First of all, all words are vague, so there is no such thing as “what exactly do you mean by.” No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones.
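To make the comparison concrete, here is a minimal sketch (synthetic data; the feature names are invented for illustration) of how a factor-style analysis yields continuous scores rather than crisp category boundaries:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    # Toy furniture measurements: [seat height (cm), top area (m^2), backrest (0/1)]
    chairs = rng.normal([45.0, 0.2, 1.0], [8.0, 0.1, 0.2], size=(50, 3))
    tables = rng.normal([75.0, 1.0, 0.0], [8.0, 0.3, 0.2], size=(50, 3))
    X = np.vstack([chairs, tables])

    # One latent factor: every object gets a graded position on a continuum.
    scores = FactorAnalysis(n_components=1, random_state=0).fit_transform(X)
    print(scores.min(), scores.max())
    # A rule like "score > 0 means table" is an extra decision layered on top;
    # the analysis itself does not hand back a boundary.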
It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)
First of all, you are the one who needs the “language 101” stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t. You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.” Those are two entirely separate things. Likewise, you have been confusing “this statement is vague” with “this statement is not testable.” These are two entirely separate things.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
Second, consider a line of stars outside the visible universe, except that some of the stars, on the right, are identical white dwarfs, and the ones to the left of them are identical red giants. Where exactly do the stars stop being white dwarfs and begin being red giants? This time, we cannot answer the question because there is no test to give us the answer. But vagueness is not an issue, because there is a sharp division between the two parts. We simply cannot find it by testing.
Third, consider a line of stars outside the visible universe, constructed as in the first case. This time, there are two problems: we cannot test where the boundary is, and the boundary is vague. These are two entirely different issues.
Fourth, consider a line of things where the one on the left is a statue, the one on the right is a human being, and somewhere in the middle there are robotic things. Each thing differs by a single atom from the thing on its left, and from the thing on its right.
Now we have the question: “The statue is not conscious. The human being is conscious. Is the central robot conscious?” There are two separate issues here. One is that we cannot test for consciousness. The second is that the word “conscious” is vague. These are two entirely separate issues, just as they are in the above cases of the stars.
Let us prove this. Suppose you are the human being on the right. We begin to modify you, one atom at a time, moving you to the left. Now the issue is testable: you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious. Note that this is quite different from anyone else asking the thing if it is conscious, because the question “does this thing say it is conscious” is not the same as “is this thing conscious.” But being conscious is having a first person point of view, so if you can ask yourself anything, you are conscious. Unfortunately, long before you cease to be conscious, you will cease to be able to ask yourself any questions. So you will still not be able to find a definite boundary between conscious and not conscious. Nonetheless, this proves that testability is entirely separate from vagueness.
Well, that explains a lot. It’s not exactly ancient history, and everything is properly quoted, so you really should know what I’m talking about. Yes, it’s about the identical table-chairs question from IKEA discussion, the one that I linked to just a few posts above.
Why are there no determinate boundaries though? I’m saying that boundaries are unclear only if you haven’t yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
There is nothing vague about the results of factor analysis.
On this topic, we last seemed to have agreed that “arbitrary” classification means “without reasons related to the properties of the objects classified”. I don’t recall you ever giving any such reasons.
For example, you have said ‘”are tables also chairs” has a definite answer’. Note the word “definite”. You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way “natural” is the opposite of “arbitrary”.
Yeah, I recall saying something like that myself. And the rest of your claims don’t go well with this one.
Well, you decided that I need it, then made some wild and unsupported claims.
Yes, the two statements are largely equivalent. Oddly, I don’t recall you mentioning testability or measurability anywhere in this thread before (I think there was something in another thread though).
I don’t think I’ve done that. It’s unfortunate that after this you spent so much time trying to prove something I don’t really disagree with. Why did you think that I’m confusing these things? Please quote.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and the stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you. You say “cannot possibly” and then give no reasons why.
I however have no problems with the vagueness here, because the two categories are only shorthands for some very specific properties of the stars (like mass). This is not true for “consciousness”.
It’s not a test if “no” is unobservable.
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.)
It is intrinsically vague: “Red giant” does not and cannot have precise boundaries, as is true of all words. The same is true of “White dwarf.” If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words.
The rest does not respond to the comparison about consciousness, and as I said we won’t be discussing the comments on language.
Again, you make a claim and then offer no arguments to support it. “Red giant” is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
You started the language discussion, but I have to explain why we’re continuing it? I continue, because I suspect that the reasoning errors you’re making about chairs are similar to the errors you’re making about consciousness, and chairs are easier to talk about. But it’s only a suspicion. Also, I continue, because you’ve made some ridiculous claims and I’m not going to ignore them.
Assuming “experience” is a meaningful category.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
Yes, “feeling” and “experience”, are pretty much the same thing, I didn’t mean to imply otherwise in the text you quoted. Instead, the first sentence refers to your definition, and the second offers an alternative one.
There is a classification problem with tables and chairs. Generally, I know what chairs and tables are supposed to be like, but there are objects similar both to chairs and to tables, and there isn’t any obvious way to choose which of those similarities are more important. At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
So they are either both meaningful, or both meaningless. But you have used “experience” as though it is meaningful, and you have implied that “feeling” is meaningless.
That was a predictable problem. Physical identity theory requires statements of the form “[mental state] is equivalent to [brain state]”. If you reject all vocabulary relating to mental states, you cannot make that kind of statement, and so cannot express identity theory.
Whereas, from my point of view, 1st person experience was there all along.
No, I used “experience” as a label. Let me rewrite that part:
That’s assuming that “experience”, as you use that word, is a meaningful category. If you didn’t start from that assumption, and instead defined experiences as brain states, you could …
Is that better? I understand that having two definitions and two similar but not identical concepts in one sentence is confusing. But still I expect you to figure it out. Was “identified” the problem?
What vocabulary relating to what mental states do I reject? Give examples.
Wasn’t “chairness” there too? More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
As opposed to what?
Does “meaningful” mean “meaningful” here, or is it being used as a misleading proxy for something like “immeasurable” or “unnecessary” or “tadasdatys doesn’t like it”?
You keep saying various words are meaningless. One would not want to use meaningless words, generally. OTOH, you have revealed elsewhere that you don’t use “meaningless” to mean “meaningless”. So who knows?
Consciousness is in the dictionary, chairness isn’t.
Consciousness is a concept used by science, chairness isn’t.
Consciousness is supported by empirical evidence, chairness isn’t.
It’s not that words are meaningless, it’s that you sometimes apply them in stupid ways. “Bitter” is a fine word, until you start discussing the “bitterness of purple”.
Are dictionary writers the ultimate arbiters of what is real? “Unicorn” is also in the dictionary, by the way.
The physicalist, medical definition of consciousness is the one used by science. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that’s what projection looks like.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can’t even come up with arguments why a silly concept I made up is flawed, maybe you shouldn’t be so certain in the meaningfulness of other concepts.
That the brain is not quiescent when experiencing pain is an objective fact. But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Please check out multiple realisability.
Because of that, no one can genuinely tell whether an advanced robot has genuine qualia. That includes you, although you are inclined to think that your subjective intuitions are objective knowledge.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
Sure, but what does that have to do with anything? Does “objective” mean “well understood” to you?
There are multiple representations of pain the same way that there are multiple representations of chair.
It is ridiculous how much of this debate is about the basic problem of classification, rather than anything to do with brains. Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counter-argument. Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
Calling my reasoning, even if not fully formal, “subjective intuitions” seems rude. I’m not sure if there is some point you’re trying to express with that.
Not sure where you see me talking about intelligence. But intelligence is far more well defined and measurable than consciousness. Multiple realizability has nothing to do with that.
We do, on the other hand, know subjectively what pain feels like.
That’s not the point. The point is that if we have words referring to subjective sensations, like “purple” and “bitter”, we can distinguish them subjectively. But if we discard our subjective insight into them, as you are proposing, and replace them with vague objective descriptions (vague, because no one knows precise descriptions of the full gamut of atomic configurations which implement pain), then you take a step backwards. You can’t distinguish a brain-scan of someone seeing purple from a brain-scan of someone tasting bitter. Basing semantics on objective facts, or “reality” as you call it, only works if you know which fact is which. You are promoting something which sounds good, but doesn’t work as a research program. Of course it works just fine at getting applause from an audience of dualism-haters.
Are you talking about realisations or representations?
No one has made that argument. The point is not that it is not ultimately true that subjective states are brain states, it is that rejecting the subjective entirely, at this stage, is not useful. Quite the reverse. Consciousness is the only thing we know from the inside—why throw that away?
If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.
But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.
We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problems with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand it, is that you believe these two methods to be radically different, when they are not. It’s as if you assume something is real, just because it comes out of people’s mouths.
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
Parts of my text are referring to the arguments I saw in Wikipedia under “multiple realizability”. But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.
I’m still waiting for your empirical reasons why “purple is not bitter”, or better yet, “purple is not a chair”, if you feel the concept of bitterness is too subjective.
But not much of an argument for using semantics grounded in (physical) reality. Doing so does not buy you maximum precision in absolute terms, and, what is worse, the alternative, of grounding terms for types of experience in 1st person experience, can give you more precision.
You may believe that, but do you know it?
The difference is that I accept the possibility that first person evidence could falsify 3rd person theory.
I’m not taking 1st person to mean 3rd person reports of (someone else’s) 1st person experience.
What sort of precision are you talking about? More generally, you’ve repeatedly said that the concept of consciousness is very useful. I don’t think I’ve seen that usefulness. I suspect that elaborating here is your best bet to convince me of anything. Although even if you did convince me of the usefulness of the term, that wouldn’t help the “robot pain” problem much.
That’s a slightly weird question. Is it somehow different from “why do you believe that” ? I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary. It’s very likely that “zapping” isn’t quite sufficient, depending on how generously you interpret that word. But the idea that something cannot be learned through physical experiment, demands a lot of serious evidence, to say the least.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain than if written down on paper. Obviously, paper is slower and less accurate. But you seem to be implying a more fundamental difference between those two methods of data storage. Why is that?
I like type theory. Let X be what I’m sitting on. Type of X is “chair”, type of “chair” is “category”, a painting of X is a representation of X, it is not a representation of “chair”. Representations of “chair”, in the same sense that painting represents X might not exist. Somehow I’m quite comfortable saying that an object of type Y is what represents Y. “Instantiates” might be the best word (curiously though, google uses “represent” to define it). Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
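If it helps, here is that distinction sketched as toy code (none of this is meant as a claim about established usage; all names are made up):

    class Chair:                      # the category: type of X is "chair"
        pass

    x = Chair()                       # X instantiates the category; you can sit on X

    class Painting:
        def __init__(self, subject):  # a painting represents some particular object
            self.subject = subject

    portrait = Painting(subject=x)    # represents X, but is not itself a Chair
    assert isinstance(x, Chair)
    assert not isinstance(portrait, Chair)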
I have said that actual experience is useful to pin down the meanings of words referring to experience.
Not at all. That there is a difference between belief and knowledge is very standard.
There’s an extensive literature of arguments to the contrary.
It is the idea that you can learn about the inward or 1st person by purely outward or 3rd person means that is contentious.
No, I am saying that my first person is me, and your first person is you. So my first person information is my experience, not someone else’s report of their experience.
Well, you said that the two R words mean the same thing, when by established usage they don’t. That looks like a source of confusion to me.
I assure you that none of the beliefs I state here were generated by flipping a coin. They are all to some extent justified. That’s why the question is weird—did you expect me to answer “no”?
There is extensive literature of arguments in favor of god or homeopathy. Doesn’t make those things real. Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments (likewise for god and homeopathy). However you seem to have read quite a bit more, and you haven’t raised my confidence in the value of that literature so far.
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences. I’m not saying that this hypothesis is true, I’m only saying that you don’t know it to be false. And if it did happen to be true, then your internal reasoning about your experiences would not be much different from my reasoning about your experiences written on paper (aside from the low precision of our language). Curiously, a physical experiment is more likely to test this hypothesis, than your internal reasoning.
It is a potential source of confusion, but that doesn’t mean it’s causing any right now. Maybe if we talked about representations such as paintings, it would cause some. Regardless, I’ll try to use the words you prefer. Debating their differences and similarities is very orthogonal to our main topic.
You said there was a “lack” of arguments to the contrary, and I pointed out that there wasn’t.
Then why didn’t you say lack of good arguments? And why didn’t you say what is wrong with them?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
“Magic” is not a helpful phrase.
You need to distinguish ontology and epistemology. Experiences and memories and so on have their physical correlates—ontology—but that does not mean you can comprehend them—epistemology. We might be able to find ways of translating between correlates and experience, but only if we don’t ignore experience as an epistemology. But, again, taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology, because epistemology and ontology are different.
Experience is experience, not reasoning about experience.
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with? Please explain your thought process, I really want to know. You see, productive debate requires some amount of generosity. I may not be polite, but I don’t think you’re illiterate or insane, and I don’t think I nitpick about things this obvious.
Maybe this is a symptom that you’re tired of the whole thread? You know you can stop whenever you want, right?
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves. That’s where my “baseless hypothesis” comes from.
To clarify, the hypothesis isn’t a direct response to something you said, it’s a new angle I want to look at, to help me understand what you’re talking about.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation. I realize that this might not be obvious. Though I feel that this is a natural use of the word.
That’s fine. There are some things that I’d want to pick on, although I’m not sure which of them are significant. But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Did you mean “consciousness”? To put it bluntly, if you haven’t heard of MR, there is probably a lot you don’t know about the subject.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
You yourself are blocking off the possibility of understanding consciousness, subjectivity and experience by refusing to allow them as prima-facie, pre-theoretic phenomena.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
I get that you hate, hate, hate dualism or anything else that threatens physical monism, but you can’t prove physical monism by begging the question against it. You are doing it no favours.
Nobody else has a problem with robot pain as a meaningful possibility. You do because you have removed the first person from your definitions.
Heh. That’s fair.
If having experiences is an important part of consciousness, then I’d expect you to reason about them, what induces them, their components, their similarities and differences. This “consciousness in general” phrasing is extremely weird.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
Of course, I mean your methodology starts there.
I’m not sure that changes anything.
Can you argue your point? I can argue mine.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
The question “where did you start” has some bad assumptions. Of course at first we all have to start from the same naive point. If we did arbitrarily start from different unrelated assumptions, expecting to agree on anything would be weird.
So, what happened is that I started from naive assumptions, and arrived at physicalism. Then when I ask myself a new question, I start from where I last stopped—discarding all of my progress would be weird.
You may think that dropping an initial assumption is inherently wrong, but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary. You might be able to convince me that I do need to keep some similar assumption for technical reasons, but that wouldn’t solve the “robot pain” problem.
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name. But when I drop consciousness, my life gets easier. How does that work?
There is a difference between a working hypothesis and an unfalsifiable dogma. It seems to you that there is nothing to explain about consciousness because you only accept 3rd-person empirical data, because of your ontology.
Could you explain what assumption you are dropping, and why, without using the word “magical”?
I’d prefer if you settled on one claim.
That would be the problem for which there is no evidence except your say-so.
You can function practically without a concept of gravity, as people before Newton did. What you can get away with theoretically depends on what you are trying to explain. Perhaps there is a gravity sceptic out there somewhere insisting that “falling object” is a meaningless term, and that gravity is magic.
Is my position less falsifiable than yours? No, most statements about consciousness are unfalsifiable. I think that’s a strong hint that it’s a flawed concept.
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head. I dropped it because I found that physicalism explains everything better. “Better” doesn’t mean that I have all the answers about anything, it just means that the answers consciousness gives are even worse.
I don’t understand what your problem with “magical” is.
Well, I suppose an assumption could be unnecessary without being meaningless, so the words aren’t identical, but I do refer to the same thing, when I use them in this context. I also recall explaining how a “meaningless” statement can be considered “false”. The question is, why are you so uncomfortable with paraphrasing? Do you feel that there are some substantial differences? Honestly, I mostly do this to clarify what I mean, not to obscure it.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do. That’s a pretty big problem, regardless of what I say. Now, when I ask if this or that idea solves “robot pain” problem, I’m not asking if it produces an actual test, I just ask for a smallest hint that maybe the test could exist.
That’s ridiculous. The mathematical law of gravity was written down by Newton, but the concept of gravity, in the sense that “things fall down”, is something most animals have. Do you literally think that nobody noticed gravity before Newton?
That’s not the problem.
The assumption is more that consciousness is something that needs explaining.
That’s wrong. If you can put a truth-value on a sentence , it is meaningful.
I think it is better to express yourself using words that mean what you are trying to express.
Yes. “Meaningless” , “immeasurable”, “unnecessary” and “non existent” all mean different things.
I think it is likely that your entire argument is based on vagueness and semantic confusion.
There is a real problem of not being able to test for a pain sensation directly.
Why did it take you so long to express it that way? Perhaps the problem is this:
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”. Perhaps you have to use vagueness and confusion to make the invalid inference seem valid.
Wow, so you agree with me here? Is it not a problem to you at all, or just not “the” problem?
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement “invisible unicorns are purple” is meaningless. The words aren’t all exactly the same, but that doesn’t mean they aren’t all appropriate.
A long long time ago you wrote: You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned into a problem with the word “pain”. So I assumed you understood that immeasurability is relevant here. Did you then forget?
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
Which issues exactly?
Why not? Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist? Does “‘robot pain’ is meaningless” follow from the same better?
Meaningfulness, existence, etc.
Huh? It’s perfectly good as a standalone statement, it’s just that it doesn’t have much to do with meaning or measurability.
Not really, because you haven’t explained why meaning should depend on measurability.
It is evident that this is a major source of our disagreement. Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods.
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)
Where is this going? You can’t stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
A bit too vague. Can I clarify that as “Useless for communication, because it transfers no information”? Even though that’s a bit too strict.
What is stopping me from assigning them truth values? I’m sure you meant, “meaningless statements cannot be proven or disproven”. But “proof” is a problematic concept. You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument. Anyway, isn’t (1.) enough?
It’s still entirely about meaning, measurability and existence. I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not and what it could follow from. Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”. Or maybe “invisible unicorns cannot be detected” does not follow from “we have no arguments suggesting that maybe ‘invisible unicorns’ could be something detectable”?
The fact that you can’t understand them.
If you can understand a statement as asserting the existence of something, it isn’t meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions... don’t.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
Because it needs premises along the lines of “what is not measurable is meaningless” and “what is meaningless is false”, but you have not been able to argue for either (except by gerrymandered definitions).
There’s an important difference between stipulating something to be undetectable … in any way, forever … and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is “true” in some way that has nothing to do with reality.
I’m trying to understand your definitions and how they’re different from mine.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
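Spelled out, the detector above is just a constant function, and the catch is in what rejecting it would require (a toy sketch; labelled_examples stands for ground truth that nobody has):

    def robot_pain_detector(robot) -> bool:
        return False  # always answers "no"

    def validate(detector, labelled_examples) -> bool:
        # Rejecting the constant detector requires independently labelled cases
        # of robot pain; an independent way of telling is exactly what is missing.
        return all(detector(robot) == label for robot, label in labelled_examples)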
Well, you used it.
It’s bad because there’s nothing inside the box. It’s just an a priori argument.
I can also use”ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
The implicit argument is that meaning/communication is not restricted to literal truth.
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam’s razor against its existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue for your “robot pain”, but I think it’s alright.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
Very low finite rather than infinitesimal or zero.
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a restriction. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I’d have to make up an argument why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws—it is devoid of any traces of meaningfulness.
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.
I doubt that’s a good thing. It hasn’t been very productive so far.
“Seriously, if you have no arguments, then don’t respond.”
People who live in glass houses shouldn’t throw stones.
It means “does not have a meaning.”
In general, it doesn’t apply to grammatically correct sentences, and definitely not to statements. It’s possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted.
If you can ask the question, “How do you know?”, then the thing has a meaning. I will show you an example of something meaningless:
faheuh fr dhwuidfh d dhwudhdww
Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don’t know it, then the thing has a meaning.
I’m sure you can see how unhelpful this is.
No.
It only explains the “-less” suffix. It’s fine as a dictionary definition, but that’s obviously not what I asked for. I need you to explain “meaning” as well.
You need no such thing, and as I said, we won’t be continuing the discussion of language until you show it has something to do with consciousness.
Noam Chomsky wrote “Colorless green ideas sleep furiously” in 1955.
Ideas don’t sleep, so they don’t sleep furiously. The sentence is false, not meaningless.
This topic has been discussed, fairly extensively.
Yes. No one has shown that it is meaningless and it pretty obviously is not.
That’s a definitions argument, isn’t it? Under some ideas of what “meaning”, well, means, such sentences are meaningful; under others they are not.
The problem with that is that if the word “meaning” has several meanings you will have a situation like this:
Suppose the word “meaning” has two meanings, A & B. But then we can ask what the word “meanings” means in the previous sentence: does it mean A, or B? If you answer that it means A, then the word “meaning” might have two meanings in the A sense, but five meanings in the B sense. But then we can ask what the word “meanings” means in the previous statement. And it might turn out that if the word “meanings” is taken in the B sense, the statement (about 2 and 5) is only true if we take the fourth meaning of the B sense, while in the third sense, it has seven meanings in the A sense, and two meanings in the B sense. And so on, ad infinitum.
All of that means that we have to accept a basic sense of meaning which comes before all the others if we want to talk about meaning at all. And in that basic sense, statements like that obviously have a meaning, whereas ones like “shirwho h wehjoeihqw dhfufh sjs” do not.
Your comment boils down to “It’s complicated, but I’m obviously right”. It’s not a very convincing argument.
Meaning is complicated. It is a function of at least four variables: the speaker, the listener, the message, and the context. It’s also well-trodden ground over which herds of philosophers regularly stampede, and everything with the tag of “obviously” has been smashed into tiny little pieces by now.
You’re right about the “I’m obviously right” part, but not the rest. It boils down to “you have to start somewhere.” You can’t start out with many meanings of “meaning”, otherwise you don’t know what you mean by “meanings” in the sentence “I am starting out with many meanings of meaning.” You have to start with one meaning, and in that case you can know what you mean when you say “I am starting with one meaning of meaning.”
“eventually I found them unnecessary and unattractive”
It is typically considered unnecessary and unattractive to assert that the Emperor is naked.
There’s that word again.
Do you prefer “naive”? Not exactly the same thing, but similar.
The chair you are sitting on is a realisation; Van Gogh’s painting of his chair at Arles is a representation. You can’t sit on it.
That’s very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn’t have phenomenal properties, how do you decide which set of brain states gets labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined that everything must be treated as physics from the outset, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable things. Everything in computer science is MR—all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that “exist apart” from their realisations. So I don’t know where you are getting that from.
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about MR.
Colour and taste are different categories, therefore category error.
No, I’m treating the identity of pain with the memories, thoughts, and behaviors that express pain as unfalsifiable. In other words, I loosely define pain as “the thing that makes you say ouch”. That’s how definitions work—the theory that the thing I’m sitting on is a chair is also unfalsifiable. At that point the identity of pain with brain states is in principle falsifiable: you just induce the same state in two brains and observe only one saying ouch. Obviously, there are various difficulties with that exact scheme; it’s just a general sketch of how causality can be falsified.
I don’t recall suggesting that something isn’t MR. I don’t know why you think that MR is a problem for me. Like I said, there are multiple realizations of pain the same way that there are multiple realizations of chair.
Is that supposed to be a novel theory, or a dictionary definition?
You’re suggesting pain can’t be instantiated in robots.
Definition, as I state right in the next sentence, and then confirm in the one after that. Is my text that unreadable?
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question. In the same way that wondering if some table can also be a chair is a stupid question. If you feel that you need an answer, feel free to choose arbitrarily. But then, if you think that having an answer helps you somehow, you’re probably doing something very wrong.
In the case of a simulated human brain, it might seem more natural to call those states “pain”, but then if you don’t, nobody will be able to prove you wrong.
The question asked for a dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification, using falsifiable empiricism. Uncontroversially, you can also achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard definitions—stipulated, gerrymandered, tendentious—is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, and are not in fact talking about free will.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way... you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Definitions aren’t handed down from god on stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
Who are you communicating to when you use your own definitions?
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Says you. Why should I believe that?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”, that would be ridiculous; the problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition we have the usual classification problem, and with your definition the phrase is meaningless.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definitions to the best of my ability, and point out the problems in them. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Solve it , then.
Prove that.
But using them proves nothing?
I am wondering who you communicate with when you use a private language?
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked, suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
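To make that concrete, here is a minimal sketch (Python, all names hypothetical) of the test implied by that definition: damage leads to an internal state, the internal state leads to “ouch”, and that intermediate state is what the behavioral definition labels “pain”. This only illustrates the test procedure, not a claim that such code feels anything.

```python
# Minimal sketch of the behavioral definition of pain:
# "the state that follows damage and precedes the 'ouch'".
# All names are hypothetical; this only illustrates the test procedure.

class ToyRobot:
    def __init__(self) -> None:
        self.damaged = False  # the internal state in question

    def take_damage(self) -> None:
        self.damaged = True

    def report(self) -> str:
        # Under the behavioral definition, the state between
        # take_damage() and this "ouch" is, by definition, "pain".
        return "ouch" if self.damaged else "ok"

robot = ToyRobot()
robot.take_damage()
assert robot.report() == "ouch"  # damage -> state -> "ouch": test passed
```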
I feel like we’ve talked about this. In fact, here: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvhm
Remember when you offered a stupid proof that “purple is bitter” is category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
Yes, definitions do not generally prove statements.
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be convincing. If you change the definition of pain to remove the subjective, felt aspect, then the resulting problem is easy to solve... but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other members of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements, you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me, how it makes sense to you.
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases we already have X well defined as Z, and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
Yes, that’s because your language is broken.
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
That can’t possibly work, as entirelyuseless has explained.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
The ordinary definition for pain clearly does exist, if that is what you mean.
Prove it.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Is that a fact or an opinion?
“highly unpleasant physical sensation caused by illness or injury.”
Have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also, the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here”—has two meanings:
One is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too. Another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
Of course, now I’ll say that I need “sensation” defined.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
You used the word; surely you meant something by it.
Proper as in proper Scotsman?
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
What?
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
In this case, “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
You keep saying it’s a broken concept.
That anything should feel like anything.
Circular as in “Everything is made of matter; matter is what everything is made of”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Yes, if I had actually said that. By the way, matter exists in your universe too.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If you want to know what “pain” means, sit on a thumbtack.
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone not to deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
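As a sketch of what “deriving that model” could mean in practice (entirely fabricated data and a hypothetical procedure; real neuroimaging is far messier): record brain states as feature vectors, collect the subject’s own similarity reports for pairs of states, and check how well a candidate state-space metric predicts those reports.

```python
# Hypothetical sketch: does distance between recorded brain states
# predict the brain's own similarity reports? All data is fabricated.
import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=(50, 8))  # 50 recorded states, 8 features each

# Fabricated self-reports: similarity ratings in (0, 1] for each pair.
pairs = [(i, j) for i in range(50) for j in range(i + 1, 50)]
reports = np.array([
    np.exp(-np.linalg.norm(states[i] - states[j])) for i, j in pairs
])

# Candidate metric: plain Euclidean distance in state space.
dists = np.array([np.linalg.norm(states[i] - states[j]) for i, j in pairs])

# A strongly negative correlation would mean the metric tracks
# the reported similarities (here it does, by construction).
corr = np.corrcoef(dists, reports)[0, 1]
print(f"distance vs. reported similarity: r = {corr:.2f}")
```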
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
As I have previously pointed out, you cannot assume meaninglessness as a default.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm how exactly that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
Are there classes of conscious entity?
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
Where exactly?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The idea that with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery, is in general a good one. Problem is that it requires good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
That’s cute.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?
“Please give an example of a subjective experience, other than consciousness, that has no physical evidence.”
All subjective experiences, including consciousness, are correlated with objective descriptions. E.g. a person who is awake can be described in ways objectively distinct from a person who is asleep. So there is always evidence for subjective experience. But that does not reduce the meaning of having a subjective experience to some objective description.
So for example “I am conscious” does not signify any objective description, but is correlated with various objective descriptions. Likewise, “I currently seem to see a blue object,” does not signify any objective description, but it is correlated with various objective descriptions.
Evidence is only evidence, in the full sense of the term, if you can interpret it.
The things are correlated. For example, every time I am awake and conscious, I have a relatively undamaged brain. So if someone else has an undamaged brain and does not appear to be sleeping, that is evidence that they are conscious.
“Meaning.” You keep using that word. I do not think it means what you think it means.
You’ll have to be more specific with your criticism.
“Meaning” refers to the fact that words are about things, and they are about whatever people want to talk about. You seem to be using the word rather differently, e.g. perhaps to refer to how you would test whether something is true, since you said that the word “pain” is meaningless applied to a robot since we have no way to test whether it feels pain. Or you have the idea that words are meaningless if they do not imply something in the “real world,” by which you understand an objective description. But since people talk about whatever they want to talk about, words can also signify subjective perceptions, and they do.
For starters, do we agree that the phrase “purple is bitter” is meaningless? Or at least that some grammatically correct strings of words can have no meaning?
“Purple is bitter” is not meaningless; it is false.
It is possible that some string of words that satisfies most or all grammatical rules is meaningless. However it is not possible that a string of words that a human says in order to convey their thoughts is meaningless; it will mean the thing they are thinking about.
Really? Synaesthesia aside, do you want to say that “Purple is not bitter” is true?
Are you equating meaning with intent?
Yes.
No. But if you have thoughts and intend to signify them, then your words have meaning from your thoughts.
Would you like to show some arguments why “Purple is not bitter” is true?
What about if you intend and fail?
Purple is a color and not a flavor. Bitter is a flavor and not a color. That strongly suggests that purple is not bitter.
That would depend on how exactly you failed. If you failed to speak any words, then obviously there was no meaning, although there was an intended meaning. If you failed to pronounce the words or type them correctly or whatever, there would be a vague spectrum from a situation similar to failing to speak any words, up to speaking them and succeeding. But there would be an intended meaning in every case, even if you failed to actually mean it.
I don’t think this is valid reasoning.
Cinchona tree bark is a part of a plant and not a flavour. Bitter is a flavour and not a part of a plant. That strongly suggests that this bark is not bitter.
The problem is that you are mixing the use of “bitter” as a noun and as an adjective. So there are two meanings, bitterness, and something bitter. You need to correct for that. It is obviously true that the bark is not bitterness, which is the relevant conclusion.
Huh? “Bitter” is an adjective—as you yourself say, the noun is “bitterness”.
In both phrases—purple is (not) bitter and this bark is not bitter—it’s an adjective.
By the way, consider another phrase: Red is hot.
Is it true or false?
The original context of this discussion is whether these things are meaningful. It should be pretty obvious that the whole discussion presupposes that they are, including your own remarks. So since this is obvious, there is no need for further discussion of whether they are true or false in particular.
In the proposition “purple is [not] bitter” it seems clear to me that “bitter” is being used adjectivally. Imagine someone with a variety of synaesthesia that makes them perceive bitterness whenever faced with something purple; then I would say that for them purple is bitter. (In much the same sense as we might say that quinine is bitter.) For most people, colour perception and taste perception are not linked in any such way and therefore purple is not bitter.
This seems reasonable to me. In any case the argument wasn’t really about whether purple is bitter, but whether the sentence “purple is bitter” has any meaning at all. In fact it obviously has at least one meaning (which you mention here) and potentially several.
Your solution seems to consist of adopting an ethics that is explicitly non-universal.
There’s a slippery slope there. You start with “very little X” and slide to “entirely non-X”.
“Very little” is a polite way to say “nothing”. It makes sense, especially next to the vague “has to do with” construct. So there is no slope here.
To clarify, are you disagreeing with me?
Your argument is either unsound or invalid, but I’m not sure which. Of course, personal experience of subjective states does have *something* to do with detecting the same state in others.
Read the problem cousin_it posted again: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvd5
There is no detecting going on. If you’re clever (and have too much free time), you may come up with some ways that internal human experience helps to solve that problem, but nothing significant. That’s why I used “little” instead of “nothing”.
But I wasn’t talking about the CR, I was talking in general.
What do you mean by “reality”? If you’re an empiricist, as it looks like you are, you mean “that which influences our observations”. Now what is an “observation”? Good luck answering that question without resorting to qualia.
“observation” is what your roomba does to find the dirt on your floor.
How do you know? Does a falling rock also observe the gravitational field?
The same way I know what a chair is.
I’d have to say no here, but if you asked about plants observing light or even ice observing heat, I’d say “sure, why not”. There are various differences between what ice does, what roomba does, and what I do, however they are mostly quantitative and using one word for them all should be fine.
What are you basing this distinction on? More importantly, how is whatever you’re basing this distinction on relevant to grounding the concept of empirical reality?
Using Eliezer’s formulation of “making beliefs pay rents in anticipated experiences” may make the relevant point clearer here. Specifically, what’s an “experience”?
I would say that an object observes an event if it changes its state in response to this event. Yes, that’s a very low bar. First, gravity isn’t an event, so “observe” is an awkward word. We can instead speak of “measuring” the field and then observing the results. Of course, if the gravity did change, the rock would presumably change its shape a tiny bit, which we may or may not count—that’s fine, “observation” is supposed to be on a spectrum.
Experiences are brain states, beliefs are also stored in the brain. Eliezer’s advice is equally good both for you and for a roomba, regardless of which of you is supposedly conscious. It may not work for plants or ice though—I don’t think I can find anything resembling beliefs in them, and even if I could, there would be no process to update them.
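Here is a minimal sketch (Python, hypothetical toy classes) of the “state change” definition above: anything whose state updates in response to an event counts as observing it, with the differences between ice, a roomba, and a human being quantitative rather than qualitative.

```python
# Sketch of "observation" as state change in response to an event.
# Classes are hypothetical toys, not models of real systems.
from dataclasses import dataclass, field

@dataclass
class Ice:
    temperature: float = -5.0
    def on_heat(self, joules: float) -> None:
        # Ice "observes" heat: its state (temperature) changes.
        self.temperature += 0.01 * joules

@dataclass
class Roomba:
    dirt_map: dict = field(default_factory=dict)
    def on_bump(self, cell: tuple, dirty: bool) -> None:
        # The roomba "observes" dirt: its internal map changes.
        self.dirt_map[cell] = dirty

ice = Ice()
ice.on_heat(joules=100.0)

bot = Roomba()
bot.on_bump(cell=(3, 4), dirty=True)

# Both changed state in response to an event, so both "observed" it;
# the definition puts them on one spectrum with humans.
print(ice.temperature, bot.dirt_map)
```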
I agree with much of what you say but I am not sure it implies for cousin_it’s position what you think it does.
I’m sure it’s true that, as you put it elsewhere in the thread, consciousness is “extrapolated”: calling something conscious means that it resembles an awake normal human and not a rock, a human in a coma, etc., and there is no fact of the matter as to exactly how this should be extrapolated to (say) aliens or intelligent robots.
But this falls short of saying that at best, calling something conscious equals saying something about its externally observable behaviours.
For instance: suppose technology advances enough that we can (1) make exact duplicates of human beings, which (initially) exactly match the memories, personalities, capabilities, etc., of their originals, and (2) reversibly cause total paralysis in a human being, so that their mind no longer has any ability to produce externally observable effects, and (3) destroy a human being’s capacity for conscious thought while leaving autonomic functions like breathing normal.
(We can do #2 and #3 pretty well already, apart from reversibility. I want reversibility so that we can confirm later that the person was conscious while paralysed.)
So now we take a normal human being (clearly conscious). We duplicate them (#1). We paralyse them both (#2). Then we scramble the brain of one of them (#3). Then we observe them as much as you like.
I claim these two entities have exactly the same observable behaviours, past and present, but that we can reasonably consider one of them conscious and the other not. We can verify that one of them was conscious by reversing the paralysis. Verifying that the other wasn’t depends on our confidence that mashing up most of their cerebral cortex (or whatever horrible thing we did in #3) really destroys consciousness, but this seems like a thing we could reasonably be quite confident of.
You might say that our judgement that one of these (ex-?) human beings is conscious is dependent on our ability to reverse the paralysis and check. But, given enough evidence that the induction of paralysis is harmlessly reversible, I claim we could be very confident even if we knew that after (say) a week both would be killed without the paralysis ever being reversed.
Indeed, we can always make two things seem indistinguishable, if we eliminate all of our abilities to distinguish them. The two bodies in your case could still be distinguished with an fMRI scan, or a similar tool. This might not count as “behavior”, but then I never wanted “behavior” to literally mean “hand movements”.
I think you could remove that by putting the two people into magical impenetrable boxes and then randomly killing one of them, through some Schrödinger’s-cat-like process. But I wouldn’t find that very interesting either. Yes, you can hide information, but it’s not just information about consciousness you’re hiding, but also about “ability to do arithmetic” and many other things. Now, if you could remove consciousness without removing anything else, that would be very interesting.
OK, so what did you mean by “behaviour” if it includes things you can only discover with an fMRI scan? (Possible “extreme” case: you simply mean that consciousness is something that happens in the physical world and supervenes on arrangements of atoms and fields and whatnot; I don’t think many here would disagree with that.)
If the criteria for consciousness include things you can’t observe “normally” but need fMRI scans and the like for (for the avoidance of doubt, I agree that they do) then you no longer have any excuse for answering “yes” to that last question.
My point wasn’t about hiding information; it was that much of the relevant information is already hidden, which you seemed to be denying when you said consciousness is just a matter of “behaviours”. It now seems like you weren’t intending to deny that at all; but in that case I no longer understand how what you’re saying is relevant to the OP.
The word “behavior” doesn’t really feature much in the ongoing discussions I have. My first post was an answer to the OP, not meant as a stand-alone truth. But obviously, if “consciousness” means anything, it’s a thing that happens in the brain—I’d say it’s the thing that makes complex and human-like behaviors possible.
Normally is the key word here. There is nothing normal about your scenario. I need an fMRI scan for it, because there is nothing else that I can observe. Compared to that, the human in a box communicating through speech is very normal and quite sufficient. Unless the human is mute or malicious. Then I might need more complex tools.
It’s obscured, sure. But truly hiding information is hard. Speech isn’t that narrow of a window, by the way. Now, if I had to communicate with the agent in the box by sending one bit of information back and forth, that would be more of a problem.