Steelmanning the Chinese Room Argument
(This post grew out of an old conversation with Wei Dai.)
Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.
Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn’t know the secret, or knows some other secret instead?
Clearly the only reasonable answer is “no, not in general”.
Now imagine a person in the same situation, claiming to possess some mental skill that’s hard for you to verify (e.g. visualizing four-dimensional objects in their mind’s eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?
Again, clearly, the only reasonable answer is “not in general”.
Now imagine a sealed box that behaves exactly like a human, dutifully saying things like “I’m conscious”, “I experience red” and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I’m not sure! Not at all!
I challenge this.
Either you relax the communication channel in such a way that I can access other kinds of information (brain scans, purchase history, body language, etc.) or you do not get to say “not in general”, because there’s nothing general about two people communicating only through a terminal.
To me it’s like you’re saying “can you tell me how a cake smells from a picture? No! So I’m not sure that smells are really communicable”. Hm.
I think the point does hold in general. There can be many possible internal experiences corresponding to any given input-output map. To the extent that’s true, Chinese Room type arguments stay unresolved. Unless you define high resolution brain scans as part of input/output, but that seems far from the spirit of Chinese Room.
Since physical existence of Wei is highly doubtful can we have a link to the conversation?
See here, Ctrl+F “optimize”. Didn’t think it would still be accessible. Added the link to the post.
I’ve met Wei in person and can assure you that he’s as real as I am :-)
“can assure you that he’s as real as I am :-)”
This just moves the dilemma back one level!
Heh. Lots of people on LW have met me. There’s at least one in this thread :-)
That just moves the dilemma back one level!
I bet cousin_it didn’t link it because it’s not on the (public) internet. Edit: Nope!
People have met Wei Dai in meatspace, if that’s what you’re talking about. Edit: As confirmed by cousin_it.
Another Chinese Room steelman
Yeah, and it doesn’t even seem all that different from mine :-)
They actually seem pretty different to me. Searle’s original claim was that computer programs won’t have “intentionality” (which seems like a confused/useless concept but I haven’t dug into it enough to be sure) even if they exhibit intelligent input-output behavior. Kaj’s steelman claims that systems based on crude manipulations of suggestively named tokens likely won’t be intelligent, whereas your (cousin_it’s) steelman claims that a system may not be conscious even if it exhibits human-like (and hence intelligent) input-output behavior. These seem to go in very different directions.
It’s what parrots and chatbots uncontroversially have not got -- the ability to know what they are saying.
I guess the connection is that simple systems can seem surprisingly human-like. Phil Goetz made a similar point in We Are Eliza.
cousin_it, aren’t you forgetting that the rules of the Chinese Room are different than those of Turing’s imitation game? While Turing does not let you in the other test room, Searle grants you complete access to the code of the program. If you could really work out a (Chinese) brain digital upload, you could develop a theory of consciousness/intelligence/intentionality from it. Unfortunately, artificial neural networks bear no connection to the brain, like ELIZA bears no connection to a human!
The argument is too general, as it also proves that it is impossible to know that another biological human is conscious. Maybe nobody except me-now is.
I knew a person who claimed that he could create 4-dimensional images in his mind’s eye. I don’t know whether I should believe him or how to check it.
Since other people are biologically similar to me, they probably say “I’m conscious” for the same reason as me, so it makes sense to believe them. The problem in Chinese Room is that the system is quite different from a human and might be lying about some things, so there’s less reason to trust it when it claims to have human-like qualia.
Be careful (2, 3).
I can’t agree with you, because you can only assert that a person is biologically similar to you based on how they look and feel, barring cutting into them. If I were to design a robot that looked, felt, and talked enough like a human being that you would have no way of discerning whether it’s a real human or not, then you’re saying that you would be inclined to believe them.
I admit I don’t have an answer to this problem, I just don’t agree with your statement.
I would believe the computer, not because of accepting computationalism, but because when I imagine the situation happening in real life, I cannot imagine continuing to say to someone or something, “Actually, I’m not sure you’re really conscious,” when it acts like it is in every way.
I actually think the same thing is likely to happen to almost everyone (that is, in the end accepting that it is conscious), regardless of their previous philosophical views.
Yeah, that’s how Justin Corwin won twenty AI-box experiments.
Right. I’ve said before that we don’t need the experiment. We already know people will let out an AI that seems decent and undeserving of being in a box.
Can I be sure that I’m conscious? Nobody can give me a description of consciousness which I can look at and say “sure, I have one of those.” The best they can do is describe consciousness in terms of other things, which they can’t give a description for either, which doesn’t really help.
True, consciousness seems to defy precise definitions.
It seems to me that consciousness as commonly understood is necessary for having first-person experiences of the sort that I have, and presumably you have also. And I suspect that pondering your own consciousness implies that you are in fact conscious.
But that just moves the question back a level. How do I know that some activity is “pondering your own consciousness”? You can’t give me a description of “pondering your own consciousness” that can be used to determine if that is taking place.
Isn’t that what you were doing when you said “Can I be sure that I’m conscious”?
It seems to me that one’s own consciousness is beyond dispute if one is able to think about things (including but not limited to one’s own consciousness) and have first-person experiences. Even if one disputes the consciousness of others (for example, if one is a solipsist), I don’t see how anyone can reasonably doubt his/her own consciousness.
It’s turtles all the way down. Just like you can’t give me a description of consciousness, and you can’t give me a description of “pondering your own consciousness”, you can’t give me a description of “first person experiences” either. You can’t give me a description of any of these related concepts except in terms of other such concepts.
It’s not so much that I’m doubting whether I’m conscious, but rather I’m doubting whether I can figure out whether I’m conscious. I can’t figure out if I have something when you can’t communicate to me exactly what it is that I may or may not have.
Can you figure out whether there are chairs in your house? How? Suppose you say that there are. How do you know they are chairs and not something else? If you answer those questions, we can continue in the same way and ask how you know those answers are right and what they mean. You will never be able to explain any concept without using other concepts, and we can always say, “but what are those things?”
I would say there is no difference; consciousness is no harder to recognize than chairs (and in fact a bit easier.) If you think there is a difference, what is it?
If I ask you to describe a chair, ultimately you’ll describe it in terms of things I can perceive. “A chair is something made for sitting. Sitting is this thing I’m doing” and I can watch you sitting, therefore getting an idea of what sitting is. I can’t watch your consciousness.
But what is watching someone sitting, and what is “getting an idea of what sitting is”? Those aren’t things which are easy to watch.
And if you say you can notice yourself watching someone sitting and notice yourself getting an idea of what sitting is, then you can notice yourself being conscious. So there shouldn’t be any difficulty figuring out whether you are conscious. The difficulty (if there is one) would be figuring out whether someone else is conscious. And it is equally difficult to know whether someone else has an idea of what sitting is.
I think maybe I’m not being clear.
If you want to tell me what a chair is, you can point to a chair and its characteristics and I can look at it. I can then notice that when I look at that chair, and when I look at an object inside my house, they look pretty much the same. So I conclude that the object inside my house seems to be what you would call a chair. (Of course, you’d probably describe a chair in a more complicated way, but it would come down to a lot of instances of that.)
If I try to do that for consciousness, one of the intermediate steps is missing. I can’t look at your consciousness, then look at mine, and say “hmm, they seem to be the same sort of thing”. Each one is (or is purported to be) only visible to one person.
The fact that I can “notice myself being conscious” doesn’t change this. I can’t compare consciousnesses. While it’s true that I can’t directly compare my idea of sitting to your idea of sitting, I can go through the intermediary of asking you to sit, then comparing what I see when you sit to what I see when I sit.
If you notice when things look pretty much the same, then I can explain what I mean by consciousness, without you having to see what my consciousness is like. In fact, we can assume I have no consciousness and you are the only one who has it: we can talk about it anyway.
First, notice that things look pretty similar at all the times when you are awake, compared to times when you are asleep. That is like noticing that two chairs are alike. Then, notice that when you are asleep and dreaming, that is also similar, although less similar, to the times when you are awake, and dissimilar to the times when you are asleep and not dreaming. Then, suppose there are also some times when you sleepwalk, but without dreaming. Those are noticeably similar to times when you are asleep without dreaming but doing nothing—in fact those times seem exactly alike until later when you judge them by other evidence.
Now when I say “you are conscious,” I am talking about the similarity between the times when you are awake and the times when you are dreaming, in contrast with the times when you are asleep and not dreaming.
You don’t have a separate word which means “Jiro’s consciousness” and nothing else. You have a single word which is used both for mine and yours, which implies that they are similar. What you’ve just described fails to imply that similarity, so it doesn’t match the way you are using the word.
I deliberately failed to imply the similarity, since I said that we would define consciousness in that way even if I were not conscious.
However, you are quite right that I would not actually know about consciousness if I were not. And indeed, I notice the similarity between being awake and dreaming sleep, as opposed to dreamless sleep, in the same way that you do. So I quite rightly talk about consciousness being the same in you and in me.
If you don’t doubt you are conscious, I’m not sure why you would need to figure out whether you are conscious—it seems to me that you already know based on direct experience.
That these things are difficult to describe is not in dispute; that is what I meant when I said “consciousness seems to defy precise definitions”. But, we can still talk about them as there seems to be a shared understanding of the concepts.
One need not have a precise definition of a thing to discuss and believe in that thing or to know that one is affected by that thing. For example, consider someone unschooled in physics beyond a grade-school level. He/she knows about gravity, knows that he/she is subject to the effects of gravity and can make (qualitative) predictions about the effects of gravity, even if he/she cannot say whether gravity is a force, a warping of spacetime, both of these things, neither of these things, or even understand the distinction. Similarly, there is enough of a common understanding of consciousness and first person experiences for a person to be confident that she/he is conscious and has first person experiences.
I do agree that the lack of precise definition (and, more importantly, the lack of measurable or externally observable manifestations) makes it impossible (at the present) for an observer to know whether some other entity is conscious.
What if the person claims to be able to add numbers? If you ask them about 2+2 and they answer 4, maybe they were pre-ordered with that response, but if you get them to add a few dozen Poisson-distributed numbers, maybe you start believing they’re actually implementing the algorithm. This relies on the important distinction between telling two things apart certainly and gathering evidence.
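The evidence-gathering procedure in this comment can be sketched in code. The sketch below is purely illustrative, not anything from the thread: `real_adder` and `preordered` are hypothetical stand-ins for the two possible people behind the terminal, and the likelihood numbers are made up for the sake of the example.

```python
import math
import random

# Illustrative sketch: quiz an agent with sums of Poisson-distributed
# numbers and do a Bayesian update on the hypothesis that it actually
# implements addition, rather than replaying pre-ordered answers.

def real_adder(numbers):
    """An agent that genuinely implements addition."""
    return sum(numbers)

def preordered(numbers):
    """An agent with a few canned answers that guesses otherwise."""
    canned = {(2, 2): 4}
    return canned.get(tuple(numbers), random.randint(0, 100))

def poisson(lam):
    """Draw one Poisson sample via Knuth's algorithm (stdlib only)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def posterior_real(agent, trials=30, prior=0.5):
    """P(agent implements addition) after quizzing it `trials` times."""
    p = prior
    for _ in range(trials):
        numbers = [poisson(5) for _ in range(20)]
        correct = agent(numbers) == sum(numbers)
        # Assumed likelihoods: a real adder is always right; a canned
        # responder gets sums this large right only by luck (~1%).
        like_real = 1.0 if correct else 0.0
        like_fake = 0.01 if correct else 0.99
        p = p * like_real / (p * like_real + (1 - p) * like_fake)
    return p
```

With a genuine adder the posterior climbs toward 1 after a handful of questions, while a canned responder is exposed the first time it misses—which is the sense in which behavioural tests gather evidence without ever delivering certainty.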
Unlike with addition, I don’t think we understand consciousness well enough to create a sequence of questions such that the simplest algorithm answering them would be conscious. It’s not clear to me that such a sequence even exists. If we found one, it would be a big step for FAI.
Hm, this is an interesting question to think about. I lean more towards the camp of construing consciousness as broad and pretty easy to attain, but only a small part of a mind’s value. As long as we can push down the probability of lookup tables and push up the probability of self-reflection and abstract thinking.
Weird example I’d label as conscious: an AI that can observe us, trying to fool us in a particular way: Our brains compute expectations of what kinds of things a conscious correspondent would say, then the AI can observe these expectations and compute something consistent both with our expectations and its past responses. Most of the computation of a mind is there, but packaged differently and spread over multiple media—the text responses no longer reflect consciousness if the AI loses its observation channel.
The three examples deal with different kinds of things.
Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don’t, they should be physically stored somehow. In that sense they are the most real of the three.
Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find some subject who claims to have the skill perform slower than another who claims not to. Ultimately, “I have a skill to do X” means “I believe I’m better than most at X”, and while it is a belief as good as the previous one, it’s a little less direct.
Finally, being conscious doesn’t mean anything at all. It has no relationship to reality. At best, “X is conscious” means “X has behaviors in some sense similar to a human’s”. If a computationalist answers “no” to the first two questions, and “yes” to the last one, they’re not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That’s, by the way, similar to what compatibilists do with free will.
You say that like it’s a good thing.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside, is about subjectivity.
You won’t find penguins in the arctic, but that doesn’t mean you get to define penguins as nonexistent, or redefine “penguin” to mean “polar bear”.
No, I’m not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.
It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
Meaning the world of exteriors? If so, is that not question begging?
Well, it’s definitely reflected in the physical structure of the brain, because you can tell whether someone is conscious with an fMRI scan.
OK. Now that you have asserted it, how about justifying it?
No. I am saying you shouldn’t beg questions, and you shouldn’t confuse the evidence for X with the meaning of X.
You are collapsing a bunch of issues here. You can believe that is possible to meaningfully refer to phenomena that are not fully understood. You can believe that something exists without believing it exists dualistically. And so on.
No, meaning the material, physical world. I’m glad you agree it’s there. Frankly, I haven’t the slightest clue what “exterior” means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I’m sure you didn’t. But I’d rather not answer your other points when I have no clue what it is that we disagree about.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”, but that doesn’t generalize well.
It’s where you are willing to look, as opposed to where you are not. You keep insisting that consciousness can only be found in the behaviour of someone else; your opponents keep pointing out that you have the option of accessing your own.
We don’t do that. We use a medical definition. “Consciousness” has a number of uses in science.
That’s hardly a definition. I think it’s you who is begging the question here.
I have no idea where you got that. I explicitly state “I say you must have both”, just a couple of posts above.
Here’s a google result for “medical definition of consciousness”. It is quite close to “brain activity”, dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?
Behaviour alone versus behaviour plus brain scans doesn’t make a relevant difference. Brain scans are still objective data about someone else. It’s still an attempt to deal with subjectivity on an objective basis.
The medical definition of consciousness is not brain activity, because there is some brain activity during sleep states and even coma. The brain is not a PC.
“It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain.”
“It would be preferable” expresses wishful thinking. The word refers to subjective experience, which is subjective by definition, while you are looking at objective things instead.
No, “it’s preferable”, same as “you should”, is fine when there is a goal specified. e.g. “it’s preferable to do X, if you want Y”. Here, the goal is implicit—“not to have stupid beliefs”. Hopefully that’s a goal we all share.
By the way, “should” with implicit goals is quite common; you should be able to handle it. (Notice the second “should”. The implicit goal is now “to participate in normal human communication”.)
We can understand that the word consciousness refers to something subjective (as it obviously does) without having stupid beliefs.
Subjective is not the opposite of physical.
Indeed.
“Subjective perception,” is opposite, in the relevant way, to “objective description.”
Suppose there were two kinds of things, physical and non-physical. This would not help in any way to explain consciousness, as long as you were describing the physical and non-physical things in an objective way. So you are quite right that subjective is not the opposite of physical; physicality is utterly irrelevant to it.
The point is that the word consciousness refers to subjective perception, not to any objective description, whether physical or otherwise.
No, physical things have objective descriptions.
Can you find another subjective concept that does not have an objective description? I’m predicting that we disagree about what “objective description” means.
Yes, I can find many others. “You seem to me to be currently mistaken,” does not have any objective description; it is how things seem to me. It however is correlated with various objective descriptions, such as the fact that I am arguing against you. However none of those things summarize the meaning, which is a subjective experience.
“No, physical things have objective descriptions.”
If a physical thing has a subjective experience, that experience does not have an objective description, but a subjective one.
I find myself to be conscious every day. I don’t understand what you find “unreal” about direct experience.
Here’s what I think happened.
You observed something interesting happening in your brain, you labeled it “consciousness”.
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”.
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”.
Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in a robot similar to the things happening in my brain”—it would be obvious that you won’t get a yes/no answer. They’re similar in some ways and different in others.
But if you go back to the original question, you can’t rule out that the robot is fully conscious, despite having some physical differences. The point being that translating questions about consciousness into questions about brain activity and function (in a wholesale and unguided way) isn’t superior, it’s potentially misleading.
I can rule out that the robot is conscious, because the word “conscious” has very little meaning. It’s a label of an artificial category. You can redefine “conscious” to include or exclude the robot, but that doesn’t change reality in any way. The robot is exactly as “conscious” as you are “roboticious”. You can either ask questions about brain activity and function, or you can ask no questions at all.
There’s nothing artificial about direct experience.
To whom? To most people, it indicates having a first person perspective, which is something rather general. It seems to mean little to you because of your gerrymandered definition of meaning. Going only by external signs, consciousness might just be some unimportant behavioural quirks.
The point is not to make it vacuously true that robots are conscious. The point is to use a definition of consciousness that includes its central feature: subjectivity.
Says who? I can ask and answer subjective questions of myself, like how do I feel, what can I remember, how much do I enjoy a taste. The fact that having consciousness gives you that kind of access is central.
What does “not having a first person perspective” look like?
I find my definition of meaning (of statements) very natural. Do you want to offer a better one?
I think you use that word as equivalent to consciousness, not as a property that consciousness has.
All of these things have perfectly good physical representations. All of them can be done by a fairly simple bot. I don’t think that’s what you mean by consciousness.
Not if “perfectly good” means “known”.
It’s ok, it doesn’t. Why do people keep bringing up current knowledge?
Because we are trying to communicate now, but your semantic scheme requires knowledge that is only available in the future , if at all.
Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don’t think that conscious minds other than their own exist.
That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.
I’m not sure that “is the robot conscious” is really equivalent to “are the things happening in a robot similar to the things happening in my brain”. It could be that some things happening in the robot’s brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot’s brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult—we don’t have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.
I would be willing to answer “no” to the “is the robot conscious” question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer), e.g. I suspect that animals have varying degrees of consciousness, e.g. non-human apes a fairly high degree, ants a low or zero degree, etc.
I don’t see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.
Let me say it differently. There is a category in your head called “conscious entities”. Categories are formed from definitions or by picking some examples and extrapolating (or both). I say category, but it doesn’t really have to be hard and binary. I’m saying that “conscious entities” is an extrapolated category. It includes yourself, and it excludes inanimate objects. That’s something we all agree on (even “inanimate objects” may be a little shaky).
My point is that this is the whole specification of “conscious entities”. There is nothing more to help us decide which objects belong to it besides wishful thinking. Usually we choose to include all humans or all animals. Some choose to keep themselves as the only member. Others may want to accept plants. It’s all arbitrary. You may choose to pick some precise definition, based on something measurable, but that will just be you. You’ll be better off using another label for your definition.
That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer’s is conscious is not really in question—pretty much everyone on this thread has said that. It doesn’t follow that I should drop the term or a “use another label”; there is a common understanding of the term “conscious” that makes it useful even if we can’t know whether “X is conscious” is true in many cases.
There is a big gap between “difficult” and “impossible”. If a thing is “difficult to measure”, then you’re supposed to know in principle what sort of measurement you’d want to do, or what evidence you could in theory find, that proves or disproves it. If a thing is “impossible to measure”, then the thing is likely bullshit.
What understanding exactly? Besides “I’m conscious” and “rocks aren’t conscious”, what is it that you understand about consciousness?
In the case of consciousness, we are talking about subjective experience. I don’t think that the fact that we can’t measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can’t get the answer to either of those things via measurement, but I don’t think that they are bullshit questions (albeit they are not particularly useful questions).
In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right? Again, there is a difference between difficult and impossible.
Those sound like synonyms, not in any way more precise than the word “consciousness” itself.
To clarify: at the present you can’t obtain a person’s beliefs by measurement, just as at the present we have no objective test for consciousness in entities with a physiology significantly different from our own. These things are subjective but not unreal.
And yet I know that I have first person experiences and I know that I am self-aware via direct experience. Other people likewise know these things about themselves via direct experience. And it is possible to discuss these things based on that common understanding. So, there is no reason to stop using the word “consciousness”.
Did you mean, “at present subjective”? Because if something is objectively measurable, then it is objective. Are these things both subjective and objective? Or will we stop being conscious when we get a better understanding of the brain?
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
To clarify, consciousness is a subjective experience, or more precisely it is the ability to have (subjective) first person experiences. Beliefs are similarly “in the head of the believer”. Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Interesting questions. It seems to me that self awareness is a first person experience, so I am doubtful that you could have self awareness without the ability to have first person experiences. I don’t think that they are different words for the same thing though—I suspect that there are first-person experiences other than self awareness. I don’t see how my argument or yours depends on whether or not first-person experiences and self-awareness are the same; do you ask the questions for any particular reason, or did you just find them to be interesting questions?
Suppose, as a thought experiment, that these things become measurable tomorrow. You said that beliefs are subjective. But how can a thing be both subjective and objectively measurable? Do beliefs stop being subjective the moment measurement becomes possible?
I ask them because I wanted you to play rationalist taboo (for “consciousness”), and I’m trying to decide if you succeeded or failed. I think “self awareness” could be defined as “thoughts about self” (although I’m not sure that’s what you meant). But “first person experiences” seems to be a perfect synonym for “consciousness”. Can you try again?
It is possible that there is some objective description which is 100% correlated with a subjective experience. If there is, and we are reasonably sure that it is, we would be likely to call the objective measurement a measurement of subjective experience. And it might be that the objective thing is factually identical to the subjective experience. But “this objective description is true” will never have the same meaning as “someone is having this subjective experience,” as I explained earlier.
Note that anyone who brings up another description which is not a synonym for “consciousness” is not explaining consciousness, but something else. Any explanation which is actually an explanation of consciousness, and not of something else, should have the same meaning as “consciousness.”
That’s the general problem with your game of “rationalist taboo.” In essence, you are saying, “These words seem capable of expressing your position. Avoid all the words that could possibly express your position, and then see what you have to say.” Sorry, but I decline to play.
I can briefly explain “banana” as “bent yellow fruit”. Each of those words has a clear meaning when separated from the others. They would be meaningful even if bananas didn’t exist. On the other hand, “first person experiences” isn’t like that. There are no “third person experiences” that I’m aware of. Likewise, the only “first person” thing is “experience”. And there can be no experiences if there is no consciousness.
There are no third person experiences that you have first person experiences of. But anyone else’s first person experiences will be third person experiences for you.
This is like saying that “thing” must be meaningless because the only things that exist are things. Obviously, if you keep generalizing, you will come to something most general. That does not mean it is meaningless. I would agree that we might use “experience” for the most general kind of subjective thing. But there are clearly more specific subjective things, such as the example of feeling pain.
Wow, now you’re not just assuming that consciousness exists, but that there is more than one.
“Thing” is to some extent a grammatical placeholder. Everything is a thing, and there are no properties that every “thing” shares. I wouldn’t know how to play rationalist taboo for “thing”, but this isn’t true for most words, and your arguments that this must be true for “consciousness” or “experience” are pretty weak.
Nobody is disagreeing. If, in another context, I asked for an explanation of “pain”, saying “experience of stubbing your toe” would be fine.
I am not “assuming” that consciousness exists; I know it from direct experience. I do assume that other people have it as well, because they have many properties in common with me and I expect them to have others in common as well, such as the fact that the reason I say I am conscious is that I am in fact conscious. If other people are not conscious, they would be saying this for a different reason, and there is no reason to believe that. You can certainly imagine coming to the opposite conclusion. For example, I know a fellow who says that when he was three years old, he thought his parents were not conscious beings, because their behavior was too different from his own: e.g. they did not go to the freezer and get the ice cream, even though no one was stopping them.
This means you should know what the word “experience” means. In practice you are pretending not to know what it means.
Yes, I said “in another context”. In the current context it’s both “consciousness” and “experience” that I need explained.
You don’t know what rationalist taboo (or even regular taboo) is, do you? Here: https://wiki.lesswrong.com/wiki/Rationalist_taboo maybe that will clear some things up for you.
Sentences like this are exactly why I need you to play taboo.
Yes, I do know what you are talking about here.
I have already said why I will not play.
Are you sure? I don’t know how to interpret your “In practice you are pretending not to know what it means”, if you do. Pretending is how the game works.
No one can force you, if you don’t want to. But your arguments that there is something wrong with the game are weak.
Quite sure.
You should interpret it to mean what it says, namely that in practice you have been pretending not to know what it means. If pretending is how the game works, and you are playing that game, then it is not surprising that you are pretending. Nothing complicated about this.
Perhaps your objection is that I should not have said it in an accusatory manner. But the truth is that it is rude to play that game with someone who does not want to play, and I already explained that I do not, and why.
You certainly haven’t provided any refutation of my reasons for that. Once again, in essence you are saying, “describe conscious experience from a third person point of view.” But that cannot be done, even in principle. If you describe anything from a third person point of view, you are not describing a personal experience. So it would be like saying, “describe a banana, but make sure you don’t say anything that would imply the conclusion that it is a kind of fruit.” A banana really is a fruit, so any description that cannot imply that it is, is necessarily incomplete. And a pain really is a subjective feeling, so any description which does not include subjectivity or something equivalent cannot be a description of pain.
I don’t think I actually said something like that. I’m just asking you to describe “conscious experience” without the words “conscious” and “experience”. You expect that I will reject every description you could offer, but you haven’t actually tried any. If you did try a few descriptions and I did find something wrong with each of them (which is not unlikely), your arguments would look a lot more serious.
But now I can only assume that you simply can’t think of any such descriptions. You see, “I don’t want to play” is different from “I give up”. I think you’re confusing them.
All descriptions are incomplete. You just have to provide a description that matches bananas better than it matches apples or sausages. A malicious adversary can always construct some object which would match your description without really being a banana, but at some point the construction will have to be so long and bizarre and the difference so small that we can disregard it.
Again, all descriptions are incomplete. “What makes someone say ouch” is quite accurate considering its length.
There is a reason I expect that. Namely, you criticized a proposed definition on the grounds that it was “synonymous” with consciousness. But that’s exactly what it was supposed to be: we are talking about consciousness, not something else. So any definition I propose is going to be synonymous or extremely close to that; otherwise I would not propose it.
Your assumption is false. Let’s say “personal perception.” Obviously I can anticipate your criticism, just as I said above.
If your description of a banana does not suggest that it is fruit, your description will be extremely incomplete, not just a little incomplete. In the same way, if a description of consciousness does not imply that it is subjective, it will be extremely incomplete.
The point is that you are ignoring what is obviously central to the idea of pain, which is the way it feels.
Again you confirm that you don’t understand what the game taboo is (rationalist or not). “Yellow bent fruit” is not a synonym of “banana”.
My criticism is that this description obviously matches a roomba. It can definitely perceive walls (it can become aware of them through sensors) and I don’t see why this perception wouldn’t be personal (it happens completely within the roomba), although I suspect that this word might mean something special for you. Now, as I say this, I assume that you don’t consider the roomba conscious. If you do, then maybe I have no criticisms.
Is that the criticism you anticipated?
I don’t know what sort of scale of incompleteness you have. Actually, there could be an agent who can recognize bananas exactly as well as you, without actually knowing whether they grow on plants or are made in factories. A banana has many distinctive properties, growing on plants is not the most important one.
How does it feel? It feels bad, of course, but what else?
“Perception” includes subjectively noticing something, not just being affected by it. I don’t think that a roomba notices or perceives anything.
Among other things, it usually feels a bit like heat. Why do you ask?
Why do you not think that? If there is something I’m not getting about that word, try making your taboo explanation longer and more precise.
By the way, I have some problems with “subjective”. There is a meaning that I find reasonable (something similar to “different” or “secret”), and there is a meaning that exactly corresponds to consciousness (I can just replace the “subjectively” in your last post with “consciously” and lose nothing). Try not to use it either.
More specifically I want to know, of all the feelings that you are capable of, how do you recognize that the feeling that follows stubbing your toe is the one that is pain? What distinctive properties does it have?
Off topic, does it really feel like heat? I’m sweating right now, and I don’t think that’s very similar to pain. Of course, getting burned causes pain. Also, hurting yourself can produce swelling, which does feel warm, so that’s another way to explain your association.
I could say that a roomba is a mere machine, but you would probably object that this is just saying it is not conscious. Another way to describe this, in this particular context, is that the roomba’s actions do not constitute a coherent whole, and “perception” is a single coherent activity, and therefore conscious.
As I said, I’m not playing your game anyway, and I feel no obligation to describe what I think in your words rather than mine, especially since you know quite well what I am talking about here, even if you pretend to fail to understand.
By recognizing that it is similar to the other feelings that I have called pain. It absolutely is not by verbally describing how it feels or anything else, even if I can do so if I wish. That is true of all words: when we recognize that something is a chair or a lamp, we simply immediately note that the thing is similar to other things that we have called chairs or lamps. We do not need to come up with some verbal description, and especially some third person description, as you were fishing for there, in order to see that the thing falls into its category.
It is not just that getting burned causes pain, but intense pain also feels similar to intense heat. Sweating is not an intense case of anything, so there wouldn’t be much similarity.
I am talking about how it feels at the time, not afterwards. And the “association” does not need to be explained by anything except how it feels at the time, not by any third person description like “this swelled up afterwards.”
I would also object by saying that a human is also a “mere machine”.
I have no idea what “coherent whole” means. Is a roomba incoherent in some way?
At times I honestly don’t.
Ok, but that just pushes the problem one step back. There are various feelings similar to stubbing a toe, and there are various feelings similar to eating candy. How do you know which group is pain and which is pleasure?
I think you misunderstood me. Sweating is what people do when they’re hot. I’m saying that pain isn’t really that similar to heat, and then offered a couple of explanations why you might imagine that it is.
The word “mere” in that statement means “and not something else of the kind we are currently considering.” When I made the statement, I meant that the roomba is not conscious or aware of what it is doing, and consequently it does not perceive anything, because “perceiving” includes being conscious and being aware.
In that way, humans are not mere machines, because they are conscious beings that are aware of what they are doing and they perceive things.
The human performs the unified action of “perceiving” and we know that it is unified because we experience it as a unified whole. The roomba just has each part of it moved by other parts, and we have no reason to think that these form a unified whole, since we have no reason to think it experiences anything.
In all of these cases, of course, the situation would be quite different if the roomba was conscious. Then it would also perceive what it was doing, it would not be a mere machine, and its actions would be unified.
The mind does the work of recognizing similarity for us. We don’t have to give a verbal description in order to recognize similarity, much less a third person description, as you are seeking here.
You’re wrong.
Oh, so “mere machine” is just a pure synonym of “not conscious”? Then I guess you were right about what my problem is. Taboo or not, your only argument why a roomba is not conscious is to proclaim that it is not conscious. I don’t know how to explain to you that this is bad.
Are you implying that humans do not have parts that move other parts?
No, you misunderstood my question. I get that the mind recognizes similarity. I’m asking, how do you attach labels of “pain” and “pleasure” to the groups of similar experiences?
Maybe one of us is really a sentient roomba, pretending to be human? Who knows!
No. I said the roomba “just” has that. Humans are also aware of what they are doing.
Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to “parts moved by other parts”? It’s not just that some arrangements of matter are conscious and others are not?
If there are parts, there is also a whole. A whole is not the same as parts. So if you mean by “reductionism” that there are only parts and no wholes, then reductionism is false.
If you mean by reductionism that a thing is made of its parts rather than made of its parts plus one other part, then reductionism is true: a whole is made out of its parts, not of the parts plus another part (which would be redundant and absurd). But it is made “out of” them: it is not the same as the parts.
No. It also means not any other thing similar to consciousness, even if not exactly consciousness.
My reason is that we have no reason to think that a roomba is conscious.
There is no extra step between recognizing the similarity of painful experiences and calling them all painful.
I have no idea what that means (a few typos maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren’t identical. But if there is some difference between “mere machine” and “unconscious machine”, you have to point it out for me.
Hypothetically, what could a reason to think that a robot is conscious look like?
“Pain” is a word and humans aren’t born knowing it. What does “no extra step” even mean? There are a few obvious steps. You have this habit of claiming something to be self-evident, when you’re clearly just confused.
No typos. I meant we know that there are two kinds of things: objective facts and subjective perceptions. As far as anyone knows, there could be a third thing intermediate between those (for example.) So the robot might have something else that we don’t know about.
Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.
Wrong.
Why is this a probable reason? You have one data point—yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?
If a robot has hands with five fingers, that will also be evidence that it is conscious. This is how induction works; similarity in some properties is evidence of similarity in other properties.
But surely, you believe that human-like behavior is stronger evidence than a hand with five fingers. Why is that?
I perform many human behaviors because I am conscious. So the fact that the robot performs similar behaviors is inductive evidence that it performs those behaviors because it is conscious. This does not apply to the number of fingers, which is only evidence by correlation.
Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn’t just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?
I am conscious. The reason why I wrote the previous sentence is because I am conscious. As for how I know that this statement is true and I am not just a passive observer: how do you know you haven’t actually agreed with me throughout this whole discussion, while mechanically writing statements you don’t agree with?
Yes, for the above reason.
In general, because there is no reason to believe that there is. Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.
That’s just paraphrasing your previous claim.
I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I then identify “myself” with this mind, rather than the one that’s mute.
You gave no reason for thinking that your consciousness is causal. You just replied with a question.
It is not just paraphrasing. It is giving an example of a particular case where it is obviously true.
Nonsense. Google could easily add a module to Google Translate that would convert a statement into its opposite. That would not give Google Translate a mind of its own.
Nope. You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.
Obviously I do not take this seriously, but I take it just as seriously as the claim that my consciousness does not cause me to say that I am conscious.
I replied with an example, namely that I say I am conscious precisely because I am conscious. I do not need to argue for this, and I will not.
No, Google could maybe add “not” before every “conscious”, in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.
What is “you” in this sentence? The mute mind identifies with the mute mind, and the translation process identifies with the translation process.
There are possible reasons for saying you are conscious, other than being conscious. A tape recorder can also say it is conscious. Saying something doesn’t make it true.
Yes. I have pointed this out myself. This does not suggest in any way that I have such a reason, other than being conscious.
Exactly. This is why tests like “does it say it is conscious?” or any other third person test are not valid. You can only notice that you yourself are conscious. Only a first person test is valid.
Exactly, and you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.
What the hell does “not questionable” mean?
Let’s try another situation. Imagine two people in sealed rooms. You press a button and both of them scream in pain. However, you know that only the first person is really suffering, while the second one is pretending and the button actually gives him pleasure. The two rooms have the same reaction to pressing the button, but the moral value of pressing the button is different. If you propose an AI that ignores all such differences in principle, and assigns moral value only based on external behavior without figuring out the nature of pain/pleasure/other qualia, then I won’t invest in your AI because it will likely lead to horror.
Hence the title “Steelmanning the Chinese Room Argument”. To have any shot at FAI, we need to figure out morality the hard way. Playing rationalist taboo isn’t good enough. The hope of reducing all morally relevant properties (not just consciousness) to outward behavior is just that—a hope. You have zero arguments why it’s true, and the post gives several arguments why it’s false. Don’t bet the world on it.
Let’s pause right there. How do you know it? Obviously, you know it by observing evidence for past differences in behavior. This, of course, includes being told by a third party that the rooms are different and other forms of indirect observations.
If the AI has observed evidence for the difference between the rooms then it will take it into account. If the AI has not observed any difference then it will not. The word “ignore” is completely inappropriate here. You can’t ignore something you can’t know. Its usage here suggests that you expect there is some type of evidence that you accept, but the AI wouldn’t. Is that true? Maybe you expect the AI to have no long-term memory? Or maybe you think it wouldn’t trust what people tell it?
You assume that all my knowledge about humans comes from observing their behavior. That’s not true. I know that I have certain internal experiences, and that other people are biologically similar to me, so they are likely to also have such experiences. That would still be true even if the experience was never described in words, or was impossible to describe in words, or if words didn’t exist.
You are right that communicating such knowledge to an AI is hard. But we must find a way.
You may know about being human, but how does that help you with the problem you suggested? You may know that some people can fake screams of pain, but as long as you don’t know which of the two people is really in pain, the moral action is to treat them both the same. What else can you do? Guess?
The knowledge that “only the first person is really suffering” has very little to do with your internal experience, it comes entirely from real observation and it is completely sufficient to choose the moral action.
You said:
I’m trying to show that’s not good enough. Seeing red isn’t the same as claiming to see red, feeling pain isn’t the same as claiming to feel pain, etc. There are morally relevant facts about agents that aren’t reducible to their behavior. Each behavior can arise from multiple internal experiences, some preferable to others. Humans can sometimes infer each other’s experiences by similarity, but that doesn’t work for all possible agents (including optimized uploads etc) that are built differently from humans. FAI needs to make such judgments in general, so it will need to understand how internal experience works in general. Otherwise we might get a Disneyland with no children, or with suffering children claiming to be happy. That’s the point of the post.
You could try to patch the problem by making the AI create only agents that aren’t too different from biological humans, for which the problem of suffering could be roughly solved by looking at neurons or something. But that leaves the door open to accidental astronomical suffering in other kinds of agents, so I wouldn’t accept that solution. We need to figure out internal experience the hard way.
A record player looping the words “I see red” is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color, and if that’s red, plays the same “I see red” sound, is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.
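That second robot is simple enough to sketch in a few lines of Python. Everything here is an illustrative assumption rather than anything from the discussion: the function names, the pixel data, and the crude threshold for what counts as “red”.

```python
from collections import Counter

def most_common_color(pixels):
    """Return the most frequent (r, g, b) tuple among the pixels."""
    return Counter(pixels).most_common(1)[0][0]

def report(pixels):
    """Play the canned line only when the dominant color counts as 'red'.

    The threshold below is a crude stand-in, not a perceptual model.
    """
    r, g, b = most_common_color(pixels)
    if r > 200 and g < 100 and b < 100:
        return "I see red"
    return ""

# A mostly-red "picture": three red pixels and one blue one.
picture = [(255, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 255)]
print(report(picture))  # prints "I see red"
```

However crude, this loop’s output already depends on what is in front of the camera, which is the sense in which it is less different from a human than the looping record player.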
You may feel that pain is special, and that if we recognized that a robot which says “ouch” when pushed feels pain, that would be in some sense bad. But it wouldn’t. We already recognize that different agents can have equally valid experiences of pain that aren’t equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution.
I don’t see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say “ouch” when pushed feels pain. Can you clarify that inference?
I don’t see anything magical about consciousness—it is something that is presumably nearly universally held by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don’t as-of-yet have an objective metric for consciousness in others does not make it magical.
No, I’m saying that “feels pain” is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that “feels pain” is very different from “deserves human rights”.
No one has suggested any explanation for it at all. And I do use “magical” in a loose sense.
So what do pain killers do? Nothing?
Move a human from one internal state to another that they prefer. “Preference” is not without its own complications, but it’s a lot more general than “pain”.
To be clear, the concept of pain, when applied to humans, mammals, and possibly most animals, can be meaningful. It’s only a problem when we ask whether robots feel pain.
I’m with EntirelyUseless. You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word “pain”.
There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, “purple is bitter”, the first way is clearly more appropriate. If we look at “robot feels pain”, it’s hard for me to tell which way I prefer.
I don’t think you have established any problem of meaning, so the question of which kind of problem doesn’t arise.
Here is my claim that “robot feels pain” is a meaningless statement. More generally, a question is meaningless, if an answer to it transfers no information about the real world. I can answer “is purple bitter” either way, and that would tell you nothing about the color purple. Likewise, I could answer “does this robot feel pain” and that would tell you nothing about the robot or what you should do with it. At best, a “yes” would mean that the robot can detect pressure or damage, and then say “ouch” or run away. But that’s clearly not the kind of pain we’re talking about.
Since you are equating reality with objectivity, you are simply declaring statements about subjectivity meaningless by fiat.
That’s because it is a category error.
Of course it tells me what I should do. It’s ethically relevant if a robot feels pain. If it feels pain when damaged, I should not damage it.
How do you know? You are assuming that no robot can have a real subjective sensation of pain, and you have no way of knowing that one way or the other, and your arguments are question-begging and inconsistent. (If “robots do (not) feel pain” is meaningless, as you sometimes say, it cannot also be false, as you sometimes also say.)
I’m claiming that all subjective experiences have objective descriptions. Please give an example of a subjective experience, other than consciousness, that has no physical evidence. Obviously, I will try to argue that either there is something objective you missed, or that the subjective experience is as poorly defined as consciousness.
But “robot pain” isn’t? How did you come to those conclusions?
That’s not how this works. Rats feel pain without a doubt, but we destroy them quite freely. Whether you will damage the robot is decided by many factors. E.g. if there is some benefit to the damage, if the robot will scream out in pain, or if it’s likely to damage you in return. The robot’s subjective experience of pain only matters if you decide that it matters—this is true for all categories, no matter how artificial.
Are you asking about the “at best” part? Because the rest of that sentence seems quite mundane. Here “at best” is about the limits of my own imagination. You’re welcome to suggest something better.
That’s not contradictory, even if slightly inconsistent. “is purple bitter” is a meaningless question, but the answer “no” is clearly a lot more appropriate than “yes”. The line between falsehood and nonsense is quite blurry. I think we can freely call all nonsense statements false, without any negative consequences.
That’s not how it works either. You can’t infer zero moral relevance of some factor by noting that other factors can countervail.
I’m not morally omniscient. The robot’s experience of pain matters if it features in some scheme of ideal moral reasoning. To put it another way, you just proved that nothing is morally relevant, if you proved anything at all.
Well, you do seem to have a subjective intuition that robots will never feel pain. Others intuit differently. What happened to all the science stuff?
Gosh, I really don’t want to start talking about morality now. But I have to point out that the “bitterness of purple” can also matter, if it features in some scheme of ideal moral reasoning. At least if you accept that this moral reasoning could require arbitrary concepts and not just ones grounded in reality.
No, I ran a deterministic procedure in my brain, called “is X well defined”, on “robot pain”, and it returned “no”. It’s only subjective in the sense that mine is different from yours, if you have such a procedure at all. The procedure, by the way, works by searching for alternative definitions of things, such that the given concept is neither trivial nor stupid. Unfortunately, failure to find such definitions does not produce a proof of non-existence, so I’m quite open to the idea that I missed something, it’s just that you inspire little confidence.
I did not mean to imply that ideal moral reasoning is weird and unguessable, only that you should not take imperfect moral reasoning (whose?) to be the last word. The idea that deliberately causing pain is wrong is not contentious, and you don’t actually have an argument against it.
That’s the sense that matters.
That’s not a very interesting sense. Is height also subjective, since we are not equally tall? This sense is also very far from the magical “subjective experience” you’ve used. I guess the problematic word in that phrase is “experience”, not “subjective”?
Height is not a subjective judgement because it is not a judgement. If judgements are going to vary, that matters, because then who knows what the truth is?
I would say that almost none have descriptions, where description means description. But it sounds as though you might actually be talking about physical correlates.
I can’t make much sense of that, since all subjective experiences occur within consciousness.
I don’t know why you think I am denying that changes in consciousness have correlations in physical activity.
I am pointing out that we cannot determine much about conscious subjective states from external physical evidence, because we don’t know how to work back from the one to the other. We can’t recover the richness of conscious experience from externals. But we know it is there, in ourselves at least, because being conscious means having access to your own conscious experience. You are putting the blame on consciousness itself, saying it is a nothingy thing, when the problem is your techniques.
The requirement that everything be rooted in (external) reality in order to be meaningful is unreasonable because, in cases like this, it requires you to have a sort of omniscience before you can talk at all. (It’s fine to define temperature as what thermometers measure once you have accurate thermometers.)
You see, I’m proposing the radical new view that the world is made of atoms and other “stuff”, and that most words refer to some configurations of this stuff. In this view “pain” doesn’t just correlate with some brain activity, it is that brain activity. The brain activity of pain is an objective fact, and if you were to describe that objective fact, you would get an objective description. In this view, the existence of human pain is as real as the existence of chairs. But the question “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
I’m pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as “ability to process external events” or “ability to generate thoughts” or “the process that makes some people say they’re conscious”, finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions, you call them correlates, which suggests that there could be a consciousness that satisfies none of them. I’m pointing out that even if you had complete knowledge about everything going on in a particular brain, you still wouldn’t be able to tell if it is conscious, because your concept of consciousness is broken.
It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter, and also without denying that it eventually does. Treating pain semantically as some specific brain activity buys you nothing in terms of the ability to communicate and understand when you don’t know which precise kind of activity it is, which you don’t. If Purple and Bitter are both Brain Activity Not Otherwise Specified, they are the same. If you can solve the mind-body problem, then you will be in a position to specify the different kinds of brain activity they are. But you can also distinguish them, here and now, using the subjectively obvious difference. And without committing yourself to evil dualism.
I have never claimed otherwise. In fact, there is literally nothing that I have an exact description of, in terms of matter—neither pain nor chairs. But you have to know something. I know that “a chair is what I sit on” and from that there is a natural way to derive many statements about chairs. I know that “gravity is what makes things fall down”, and from that there is a fairly straightforward way to the current modern understanding of gravity. There is nothing that you know about consciousness, from which you can derive a more accurate and more material description.
It buys me the ability to look at “do robots feel pain” and see that it’s a stupid question.
What evil dualism?
How do you know? And what of things like https://en.wikipedia.org/wiki/Global_Workspace_Theory ?
It doesn’t seem to have given you the ability to prove that it is a stupid question.
Well, for one, you have been unwilling to share any such knowledge. Is it a secret, perhaps?
I see a model that claims to reproduce some of the behaviors of the human mind. Why is that relevant? Where are your subjective experiences in it?
Also, to clarify, when I say “you know nothing”, I’m not asking for some complex model or theory, I’m asking for the starting point from which those models and theories were constructed.
Proof is a high bar, and I don’t know how to reach it. You could teach me by showing a proof, for example, that “is purple bitter” is a stupid question. Although I suspect that I would find your proof circular.
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
I was responding to your claim that “there is nothing that you know about consciousness, from which you can derive a more accurate and more material description”. This has been done, so that claim was false. You have shifted the ground.
Purple is a colour, bitter is a taste, therefore category error.
Then why be so sure about things? Why not say “dunno” to “can robots feel pain?”.
While GWT is a model, it’s not a model of consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it. I ask you if it has subjective experiences, because that seems to be the most important aspect of consciousness to you. If you can’t find them in this model, then the model is on my side, not yours.
That’s ridiculous. Grapefruit is a fruit, bitter is a taste, but somehow “grapefruit is bitter” is true and not a category error.
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple. Maybe we mean different things when we say “proof”?
That’s still an irrelevant objection. The issue is whether the concept of consciousness can be built on and refined, or whether it should be abandoned. GWT shows that it can be built on, and it is unreasonable to demand perfection.
Is that worse than saying you know things you don’t know?
Sometimes different people use the same words to mean different things. I deduce that GWT does not build on consciousness as you understand it, because it doesn’t have the most important feature to you. It builds on consciousness as I understand it. How is that irrelevant?
You mean, is saying “dunno” to everything worse than saying something is true without having absolute 100% confidence? Yes. What kind of question is that?
Also, why did you quote my “category error” response? This doesn’t answer that at all.
If we assume that the sweet spot is somewhere between 0% and 100%, are you sure you are saying “dunno” enough?
Quite sure. How about you?
And, again, what sort of question is that? What response did you expect?
Why is the question “are tables also chairs” not meaningful? Structured knowledge databases like Wikidata have to answer that question.
Imagine that a country has a general tariff for furniture and there’s a tariff exemption for chairs. One clever businessman who sells tables starts to say that his tables are chairs. In that case, the question can become important enough that a large sum of money is spent on a legal process to answer the question.
This seems like a good comment to illustrate, once again, your abuse of the idea of meaning.
There are two ways to understand this claim: 1) most words refer to things which happen also to be configurations of atoms and stuff. 2) most words mean certain configurations of atoms.
The first interpretation would be fairly sensible. In practice you are adopting the second interpretation. This second interpretation is utterly false.
Consider the word “chair.” Does the word chair mean a configuration of atoms that has a particular shape that we happen to consider chairlike?
Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms, but was a continuous substance without any boundaries in it. Would you suddenly say that it was not a chair? Not at all. You would say “this chair is not made of atoms.” This proves conclusively that the meaning of the word chair has nothing whatsoever to do with “a configuration of atoms.” A chair is in fact a configuration of atoms; but this is a description of a thing, not a description of a word.
This could be true, if you mean this as a factual statement. It is utterly false, if you mean it as an explanation of the word “pain,” which refers to a certain subjective experience. The word “pain” is not about brain activity in the same way that the word “chair” is not about atoms, as explained above.
I would just note that “are tables also chairs” has a definite answer, and is quite meaningful.
I would say that being a chair (according to the meaning of the word) is correlated with being made of atoms. It may be perfectly correlated in fact; there may be no chair which is not made of atoms, and it may be factually impossible to find or make a chair which is not. But this is a matter for empirical investigation; it is not a matter of the meaning of the word. The meaning of the word is quite open to the possibility that there is a chair not made of atoms. In the same way, the meaning of the word “consciousness” refers to a subjective experience, not to any objective description, and consequently in principle the meaning of the word is open to application to a consciousness which does not satisfy any particular objective description, as long as the subjective experience is present.
I explicitly added “other stuff” to my sentence to avoid this sort of argument. I don’t want or need to be tied to current understanding of physics here.
But even if I had only said “atoms”, this would not be a problem. After seeing a chair that I previously thought was impossible, I can update what I mean by “chair”. In the same, but more mundane way, I can go to a chair expo, see a radical new design of chair, and update my category as well. The meaning of “chair” does not come down from the sky fully formed, it is constructed by me.
I want to see that.
Then you should admit that words like “chair” or “consciousness” do not have anything about physics in their meaning.
Tables are not chairs.
I was hoping to see the reasoning behind that. Where does the answer come from? Obviously, you chose it arbitrarily.
For one thing (not the only thing), chairs are things that are normally used for sitting. Tables are not normally used for sitting, so they are not chairs. Nothing arbitrary about that reasoning.
Where do those definitions come from? Do you know what “arbitrary” means? By the way, I have chairs that I have never sat on, and there are tables I’ve sat on quite a bit. What is “normally”?
The meaning of words comes from people’s usage (which is precisely why words do not mean anything like what you think they do.)
Yes.
The vast majority of tables are rarely or never sat on. The table in my house has never been sat on. The vast majority of chairs are frequently sat on, like the ones in my house. It may not be the only normal thing, but certainly what happens in the vast majority of cases is normal.
Also, I said “for one thing.” Even if people normally sat on tables, they would not be chairs, because they do not have the appropriate structure, just as benches are not chairs.
Why do people use the words that way?
Also, I’d point out that what I mean by “chair” is not equivalent to people’s usage. You could call it “reverse engineered” from people’s usage. There are some differences. Do you know where those come from?
Obviously I don’t even know how most people use those words—I only know about my acquaintances and people on TV, I could be living in a bubble, I could be using many words wrong.
Stools are chairs, but benches are just wide stools. So if I have a small table (such as a coffee table), and use it for sitting, it’s not a bench, it’s a stool and therefore a chair?
In case it’s not obvious what I’m doing, I intend to ask you these stupid questions until you realize that they are stupid questions, that they don’t matter and that the correct way to answer them is to pull answers out of your ass (i.e. arbitrarily).
Roughly speaking, because if one performed factor analysis on their life experiences, they would have factors more or less corresponding to the words they use.
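The “factor analysis” claim above can be caricatured in code. This is a toy sketch of my own (not actual factor analysis, and the objects and features are invented for illustration): objects are described by binary features drawn from experience, and grouping by feature similarity recovers clusters that roughly track word boundaries like “chair” and “table”.

```python
# Hypothetical objects with hypothetical experiential features.
objects = {
    "armchair":     {"sat_on": 1, "has_back": 1, "flat_top": 0},
    "stool":        {"sat_on": 1, "has_back": 0, "flat_top": 0},
    "dining_table": {"sat_on": 0, "has_back": 0, "flat_top": 1},
    "coffee_table": {"sat_on": 0, "has_back": 0, "flat_top": 1},
}

def similarity(a, b):
    """Number of features on which two objects agree."""
    return sum(a[f] == b[f] for f in a)

def nearest(name):
    """The most similar other object."""
    return max((o for o in objects if o != name),
               key=lambda o: similarity(objects[name], objects[o]))

print(nearest("armchair"))      # groups with the sit-on-able things
print(nearest("dining_table"))  # groups with the flat-topped things
```

The point being caricatured: if usage tracks clusters like these, word boundaries fall in non-random places, even if no individual speaker ever computes the clustering explicitly.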
Yes, largely from your own experience of the usage contexts of the word chair, which as you say could be somewhat different from the overall usage patterns, although it is unlikely that there are large differences.
No. As I said before, “for one thing.” There are still reasons why a coffee table would not suddenly become a stool, even if in fact you use it for that.
That’s incorrect, so I won’t realize that no matter how many such questions you ask.
That said, it is true that we extend words to additional things when we think the new things are similar enough to the old things.
The problem of “consciousness” is that we have no idea how similar the new thing is to the old thing, no matter how many objective descriptions we come up with for the new thing. That is: the problem is that we have no idea whether the robot is conscious or not, no matter what objective facts we know about the robot. This does not mean that we can arbitrarily decide to say “let’s extend the word conscious to the robot.” It’s like this situation: there is an object behind a screen, and you are never allowed to look behind the screen. Should we call the object a “chair” or not?
The fact that language is both vague and extendible does not suddenly entitle you to say that we can say that an object behind a screen is either “a chair” or “not a chair” without first looking behind the screen. And in the case under discussion, no one yet knows a way to look behind the screen, and possibly there is no such way.
What makes you believe that? Ideally we’d want something like this to be true, but assuming that it is true seems a bit naive. There are also some serious technical problems with the idea (how do you quantify experiences? what do you do when different people have different experiences but have to use the same words? etc).
I don’t think you know what “arbitrary” means. It does not mean “completely random”. In a deterministic world everything has some explanation. It’s just that sometimes the explanations are long, bizarre and kind of stupid, if you look at them.
Likewise, we cannot say whether some new object is a chair only by knowing objective facts about the object. We also need to know what the word “chair” refers to. And in the case that our definition of “chair” doesn’t help us, we’re going to have to extend it in some arbitrary way.
Replace “you are never allowed” with “it is impossible”. Then I will suggest that the object does not exist.
If nothing like that were true, words would actually be arbitrary, as you suppose. Then if we looked for reasonable boundaries, they would fall in random places. For example, it might turn out that the word “chair” refers in some cases to physical objects that people sit on, and in other cases to tiny underground animals. Words don’t work like this, which gives me good reason to think that something like this is true. It is not “assuming” anything.
My point is that the meaning of words is not long, bizarre, and stupid. The meanings of words are actually quite reasonable. Also, you are mistaken about the meaning of “arbitrary”. It does in fact mean not having a reason; you are just saying that sometimes “this doesn’t have a reason” is shorthand for saying that it doesn’t have a good reason. But even understood like this, the meaning of words is not arbitrary.
We will have to extend it, but it will not have to be in an arbitrary way. There may be a good reason why we extend it the way that we do, not a stupid reason.
Then your suggestion is false; even if there is a screen that you cannot go behind, it would not mean that there is nothing inside it. You cannot look beyond the event horizon of the visible universe, but there are things beyond it.
“Chair” happens not to refer to animals, although it can mean “chairman”. “Stool” can refer to several things, including poop. Finally, “shrew” refers both to a tiny animal and to an annoying woman. Words do work like this. Surely there are some bizarre historical reasons why the words got those meanings. But you have to admit that these reasons have very little to do with the properties of small animals and annoying women.
I’m not saying that there are no forces working to simplify language. But there is a very large gap between that and “factor analysis on life experiences”.
“Arbitrary” literally means “decided by an arbiter”, as opposed to “decided by the rule of law”, i.e. there was a question that “the law” couldn’t answer directly, and an arbiter had to decide “arbitrarily”. It doesn’t mean that the arbiter flipped a coin.
Going back to chairs and tables, you can always find some excuse why a coffee table I sit on isn’t a chair, and I can always find some excuse why it should be (I mean, how the hell is http://www.ikea.com/us/en/catalog/products/20299829/ not a stool?). We could live perfectly well in a world where my excuses are right and yours are wrong. The reasons we don’t are long, bizarre and stupid. That falls under “arbitrary” perfectly well.
I’d rather not bring modern physics into this, but I have to point out that suggesting those things don’t exist will cause no problems for anyone. At worst, this suggestion would lose to another idea through Occam’s razor.
Also, cases where a “single” word has two meanings are cases of two words. They are not examples of words that have arbitrary meanings.
You first said that if there was nothing like factor analysis, some words would have two unrelated meanings, then I point out that lots of words have two unrelated meanings, and now you say that one word with two meanings is two words (by definition, I assume?), contradicting your own claim we started from. Do you see how bad this looks?
Sure, there are words that have different meanings and different origins, that, thanks to some arbitrary modifications, end up sounding the same. There is an argument to discount those. But lots of words do have the same origin and the new meaning is a direct modification of the old one.
You misunderstood. The point is that if there was not some common meaning, the applications of a word would be random. This does not happen in any of the cases we have discussed, and two entirely unrelated usages are cases of two words.
This is true, and there is nothing stupid or arbitrary about this way of getting a secondary meaning.
I have never denied that the ways different people use the same words are similar. This however does nothing to support your “factor analysis” theory, nor does it have anything to do with words that have multiple unrelated meanings.
This is a claim with no justification. The whole “one word is two words” formulation is inherently bizarre. Of course, saying that “committee chair” and “armchair” are both “chairs” doesn’t mean the two things are actually similar. Likewise putting both “armchair” and “stool” under one label does not reduce their differences, and putting “stool” and “coffee table” in different categories does not reduce their similarities.
Sure it does. People use words in similar ways because their lives have similar factors.
Technically, there are no such words. As I said, these are multiple words that use similar spellings.
Consider these two statements:
1) A committee chair and an armchair are both “chairs.” 2) A committee chair and an armchair are both chairs.
The first statement is true, and simply says that both a committee chair and an armchair can be named with the sound “chair”.
The second statement, of course, is utterly false, because there is no meaning of “chairs” that makes it true. And that is because there is not a word that has both of those meanings; there are two words which are spelled and spoken alike.
In fact, using different names adds a difference: the fact that the things are named differently. Still, overall you are more right than wrong about this, even though you have the tendency to ignore the real reasons for names in favor of appearances, as when you say that “pain” means “what makes someone say ouch.” Obviously, if someone says “ouch” because he wishes to deceive you that he is feeling pain, pain will not be the wish to deceive someone that he is feeling pain. Pain is a subjective feeling; and in a similar way, a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
No, people use words in similar ways, because they want to communicate with each other. And because word meanings are usually inherited rather than constructed. It’s not false that the factors are usually similar, but not all true statements follow from one another. Some people with very different factors may use words similarly and others with similar factors will eventually use them differently.
Again, nobody thinks that the two things are similar or share properties, but that’s exactly what you asked for. If you want a milder example, I can offer “computer”, which can refer to an electronic device or to a human who does arithmetic (old usage). The two meanings are still very different, but they do share a property (they both compute), and it’s easy to see that a sentence “I had computers calculate this solution” is natural and could refer to either (or both). At the same time, using two different words for them (e.g., let’s call humans who compute “analysts”) would also be natural. The reasons we don’t use two words have very little to do with the properties of humans or electronic devices.
Etymology is not meaning.
That’s not up to you; you made the argument that if there is a screen that you cannot look behind, there is nothing behind it. That argument is false.
The suggestion will be false, whether or not it causes problems for anyone.
Exactly like suggesting that other people’s conscious experiences do not exist, since this would mean that the reason for your own talk about your own experiences differs from the reason for other people’s talk about their experiences. There is no reason to believe in such a difference.
It would be, if word meanings weren’t assigned in a stupid way. And it does usually help to understand the word.
By “problem” I meant “contradiction”. Contradictions are how we establish what is true and what is false.
Maybe you didn’t want to quote the “at worst” part? Because now I almost agree with you.
Wrong. There’s nothing stupid about this process; on the contrary, it would be stupid to attempt to make meaning exactly correspond with etymology.
They are one way, not the only way. Your positing that something doesn’t exist because you cannot access it is absurd, contradiction or not.
That’s a bold claim. God forbid words have consistent meanings over time!
Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent? You said it means “without any reason”. The obvious problem is that almost everything has some reasons, including whims of small children, delusions of the insane and results of fair coin flips. I suggest that if the word meant “without good reason”, the word would be more useful.
We probably do not disagree about what it means, but we disagree about what we are saying it means. I do say it means without any reason, although we can take it more specifically as “without any reason of the kind we are currently thinking about.”
If we take it as I suggested, it would be possible in some cases to mean “without good reason,” namely without a reason of the kind we are currently thinking about, namely a good one.
In general, this topic came up because you were asserting that questions like “are tables also chairs” are stupid and only have arbitrary answers. If arbitrary means that there cannot be a good reason, then you are mistaken, because we have good reason for saying that tables are not chairs, and the stupidity would only be in saying that they are chairs, not in saying that they are not.
In regard to the issue of consciousness, the question is indeed a useless distraction. It is true that words like “pain” or even “consciousness” itself are vague, as are all words, and we exercise judgement when we extend them to new cases. That does not mean there is never a good reason to extend them. But more importantly, when we consider whether to extend “chair” to a new case, we can at least see what the thing looks like. In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain. And so the case is quite different from the case of the chair: as I said before, it is like asking if an unknown object behind a screen is a chair or not. Unknown, but definitely not arbitrary.
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
Here’s an example of weird reasons. How can shape not determine the difference? If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects? IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn’t? Now, one could say that a stool can be both a chair and a table, and I think that’s what IKEA does, but then you’ve already claimed this to be impossible.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem. There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
“Properties of the objects being classified” are much more extensive than you realize. For example, it is a property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. In other words, being made as a table, it is physically unsuitable to be used as a seat. And consequently if it did collapse, it would be quite correct to say, “This collapsed because you were using it as a stool even though it is not one.”
That said, I already said that the intention of the makers is not 100% determining.
That’s not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a “chair.” In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it, is not an assumption. If it turned out that I did not have a brain, I would not say, “I must have been wrong about suffering pain.” I would say “My pain does not depend on a brain.” I pointed out your error in this matter several times earlier—the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states—we are talking about the similarity of two feelings. So it does not matter if the robot’s brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar, even if they depended on radically different physical objects like the moon and Mt. Everest.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test if a function f(X,Y) depends not only on X, but also on Y, we have to find a point where f(X1, Y1) is not equal to f(X1, Y2). Are you saying that sometimes intention matters, just not for chairs? If not, I can only assume that intention doesn’t determine anything and only shape is important.
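The dependence test in this comment can be written out directly. Below is a minimal sketch under an invented assumption: a toy classifier that looks only at shape. The helper shows how exhibiting (or failing to exhibit) a point where f(X1, Y1) ≠ f(X1, Y2) settles whether f depends on its second argument.

```python
def classify(shape, intention):
    """Toy classifier (an assumption for illustration): ignores intention."""
    return "chair" if shape == "seat_with_back" else "table"

def depends_on_second_arg(f, xs, ys):
    """True iff holding some x fixed, two ys give different outputs."""
    return any(f(x, y1) != f(x, y2)
               for x in xs for y1 in ys for y2 in ys)

shapes = ["seat_with_back", "flat_slab"]
intentions = ["made_for_sitting", "made_for_dining"]
print(depends_on_second_arg(classify, shapes, intentions))  # False
```

For this toy classifier the test returns False: over the inputs tried, intention has no predictive power of its own, which is exactly the distinction the comment draws between a causal factor and a mere correlate.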
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing. Unfortunately for you, you do have a brain.
Are you going to feel the robot’s feeling and compare?
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Unquestionably, it can be meaningfully extended to robots. You simply mean the same thing that you mean in the regular case. The only question is whether there is any feeling there, not if “feeling” has a meaning, since we already admitted that it does have a meaning.
The possibility of falsification is a good thing for a physical theory. It is a very bad thing for a theory of the meaning of a word. As you already admitted, the fact that the pieces of furniture we normally sit on are called chairs is not subject to falsification, because that is just what is meant by “chair.” But a physical theory of a chair, e.g. “chairs are made of atoms,” is falsifiable, since someone could examine a chair and discover that it was not made of atoms. He would not then say, “We have discovered that ‘chair’ meant something different from what we thought.” He would say, “We knew what ‘chair’ meant, and that is unchanged, but we have learned something new about the physical constitution of chairs.”
In the same way, I am referring to certain feelings when I talk about “pain.” The fact that the word pain refers to those feelings cannot be falsified, because it is just what that word means. But whether pain depends on a brain activity is a falsifiable physical theory; it has nothing to do with the meaning of the word “pain.”
Assuming that I do, that is fortunate, not unfortunate. But as I was saying, neither you nor I know that I do, since neither of us has seen the inside of my head.
No. The question is not whether the robot has a feeling which feels similar to me as my feeling of pain; the question is whether the robot has a feeling that feels to the robot the same way that my feeling feels to me. And since this has two subjects in it, there is no subject that can feel them both and compare them. And this is just how it is, whether you like it or not, and this is what “pain” refers to, whether you like it or not.
Can you actually support your claim that intention matters? To clarify, I’m suggesting that intention merely correlates with shape, but has no predictive power on its own.
It’s somewhat complicated. “Experiences are brain states” is to an extent a theory. “Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition. Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
Artificial things are made for a purpose, and being made for a purpose is part of why they are called what they are called. This is an obvious fact about how these words are used and does not need additional support.
If you mean pain is the conscious state that follows in that situation, yes; if you mean the third person state that follows, no.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason. Definitions shouldn’t be falsifiable, and are not physical theories.
No. The stars outside the event horizon of the visible universe are similar to the stars that we can see, but there is no way to compare them.
One can however ask the question, “Are the stars similar?” and one can answer yes or no. In the same way we can ask if the robot feels like we do and we can say yes or no. But there is no access to the answer here, just as there is no access in the case of the stars. That has nothing to do with the fact that either they are similar, or they are not, both in the case of the robot, and in the case of the stars.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I’m asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you’re supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole “does not need additional support” thing inspires no confidence.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
I hate bringing up modern physics, it has limited relevance here. Maybe they’ll figure out faster than light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I’d love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
It is causal, but not infallible.
That’s your problem. Everyone else will still call it “the sun,” and when you say “the sun didn’t rise this morning,” your statement will still be false.
Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.
Ok, do you have any arguments to support that claim?
That may depend on the specific circumstances of the discovery. Also, different people can use the same words in different ways.
Arguments like what?
As I said, this is how people use the words.
Like yours, for example.
What words? The word “causal”? I’m asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
I understand the difference, and I have no difficulties here. I said it was causal, not merely correlative.
Ok, do you have any arguments to support that it is causal?
As I said, this is how these words work, that is words like “chair” and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
I am not talking about evidence, but about meaning; when we say, “this is a chair,” part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting.
I don’t know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing, so that if you made something in the shape of a hammer, and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly by more closely matching the meaning of “chair.”
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
In particular, consider my answer to your next question, because it is basically the same thing again.
There is no guarantee of this, because the word “chair” is vague. But it is true that there would be more reason to call the second rock a chair—that is, the meaning of “chair” would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation.
In general, no, because the word “chair” does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on.
If you are not ignorant of how the word is used, you do have to involve the intention of the maker.
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer. Here I’m only asking you for a black and white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they’re conceding.
What exactly do you mean by “vague”? The word “chair” refers to the category of chairs. Is the category itself “vague”?
I have been telling you from the beginning, that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair. Apparently one that you have more knowledge of than I. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff. By the way, this thread started here where you explicitly start talking about words and meanings, so that’s what we’re talking about.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is “in some cases yes, in some cases no, depending on the particular circumstances.”
First of all, all words are vague, so there is no such thing as “what exactly do you mean by.” No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones.
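The factor-analysis point can be made concrete with a toy sketch (all numbers, names, and the membership function here are invented purely for illustration, not drawn from the discussion): graded cluster membership along one feature dimension never yields a principled 0/1 boundary, so any cutoff between categories is stipulated rather than discovered.

```python
import math

# Toy 1-D "category" centers, as if learned from data (invented for illustration):
# chair-like objects cluster near 1.0 on a hypothetical "made-for-sitting" feature,
# table-like objects near 0.0.
CHAIR_CENTER, TABLE_CENTER = 1.0, 0.0

def soft_membership(x, center, width=0.3):
    """Graded membership: 1 at the center, smoothly falling off with distance."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def classify(x):
    """Return normalized graded memberships; note there is no natural 0/1 cutoff."""
    chair = soft_membership(x, CHAIR_CENTER)
    table = soft_membership(x, TABLE_CENTER)
    total = chair + table
    return {"chair": chair / total, "table": table / total}

# An object halfway between the clusters gets ~50% membership in each:
# the category assignment for borderline cases is a stipulation, not a measurement.
print(classify(0.5))
print(classify(0.9))
```

The sketch only illustrates the structural claim that statistically generated categories have graded, not sharp, boundaries; it takes no side on whether that settles anything about consciousness.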
It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)
First of all, you are the one who needs the “language 101” stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t. You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.” Those are two entirely separate things. Likewise, you have been confusing “this statement is vague” with “this statement is not testable.” These are two entirely separate things.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
Second, consider a line of stars outside the visible universe, except that some of the stars, on the right, are identical white dwarfs, and the ones to the left of them are identical red giants. Where exactly do the stars stop being white dwarfs and begin being red giants? This time, we cannot answer the question because there is no test to give us the answer. But vagueness is not an issue, because there is a sharp division between the two parts. We simply cannot find it by testing.
Third, consider a line of stars outside the visible universe, constructed as in the first case. This time, there are two problems: we cannot test where the boundary is, and the boundary is vague. These are two entirely different issues.
Fourth, consider a line of things where the one on the left is a statue, the one on the right is a human being, and somewhere in the middle there are robotic things. Each thing differs by a single atom from the thing on its left, and from the thing on its right.
Now we have the question: “The statue is not conscious. The human being is conscious. Is the central robot conscious?” There are two separate issues here. One is that we cannot test for consciousness. The second is that the word “conscious” is vague. These are two entirely separate issues, just as they are in the above cases of the stars.
Let us prove this. Suppose you are the human being on the right. We begin to modify you, one atom at a time, moving you to the left. Now the issue is testable: you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious. Note that this is quite different from anyone else asking the thing if it is conscious, because the question “does this thing say it is conscious” is not the same as “is this thing conscious.” But being conscious is having a first person point of view, so if you can ask yourself anything, you are conscious. Unfortunately, long before you cease to be conscious, you will cease to be able to ask yourself any questions. So you will still not be able to find a definite boundary between conscious and not conscious. Nonetheless, this proves that testability is entirely separate from vagueness.
Well, that explains a lot. It’s not exactly ancient history, and everything is properly quoted, so you really should know what I’m talking about. Yes, it’s about the identical table-chairs question from IKEA discussion, the one that I linked to just a few posts above.
Why are there no determinate boundaries though? I’m saying that boundaries are unclear only if you haven’t yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
There is nothing vague about the results of factor analysis.
On this topic, we last seemed to have agreed that “arbitrary” classification means “without reasons related to the properties of the objects classified”. I don’t recall you ever giving any such reasons.
For example, you have said ‘”are tables also chairs” has a definite answer’. Note the word “definite”. You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way “natural” is the opposite of “arbitrary”.
Yeah, I recall saying something like that myself. And the rest of your claims don’t go well with this one.
Well, you decided that I need it, then made some wild and unsupported claims.
Yes, the two statements are largely equivalent. Oddly, I don’t recall you mentioning testability or measurability anywhere in this thread before (I think there was something in another thread though).
I don’t think I’ve done that. It’s unfortunate that after this you spent so much time trying to prove something I don’t really disagree with. Why did you think that I’m confusing these things? Please quote.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and the stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you. You say “cannot possibly” and then give no reasons why.
I however have no problems with the vagueness here, because the two categories are only shorthands for some very specific properties of the stars (like mass). This is not true for “consciousness”.
It’s not a test if “no” is unobservable.
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.)
It is intrinsically vague: “Red giant” does not and cannot have precise boundaries, as is true of all words. The same is true of “White dwarf.” If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words.
The rest does not respond to the comparison about consciousness, and as I said we won’t be discussing the comments on language.
Again, you make a claim and then offer no arguments to support it. “Red giant” is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
You started the language discussion, but I have to explain why we’re continuing it? I continue, because I suspect that the reasoning errors you’re making about chairs are similar to the errors you’re making about consciousness, and chairs are easier to talk about. But it’s only a suspicion. Also, I continue, because you’ve made some ridiculous claims and I’m not going to ignore them.
Assuming “experience” is a meaningful category.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brains states that are experientially similar.
Yes, “feeling” and “experience”, are pretty much the same thing, I didn’t mean to imply otherwise in the text you quoted. Instead, the first sentence refers to your definition, and the second offers an alternative one.
There is a classification problem with tables and chairs. Generally, I know what chairs and tables are supposed to be like, but there are objects similar both to chairs and to tables, and there isn’t any obvious way to choose which of those similarities are more important. At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
So they are either both meaningful, or both meaningless. But you have used “experience” as though it is meaningful, and you have implied that “feeling” is meaningless.
That was a predictable problem. Physical identity theory requires statements of the form “[mental state] is equivalent to [brain state]”. If you reject all vocabulary relating to mental states, you cannot make that kind of statement, and so cannot express identity theory.
Whereas, from my point of view, 1st person experience was there all along.
No, I used “experience” as a label. Let me rewrite that part:
That’s assuming that “experience”, as you use that word, is a meaningful category. If you didn’t start from that assumption, and instead defined experiences as brain states, you could …
Is that better? I understand that having two definitions and two similar but not identical concepts in one sentence is confusing. But still I expect you to figure it out. Was “identified” the problem?
What vocabulary relating to what mental states do I reject? Give examples.
Wasn’t “chairness” there too? More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
As opposed to what?
Does “meaningful” mean “meaningful” here, or is it being used as a misleading proxy for something like “immeasurable” or “unnecessary” or “tadasdatys doesn’t like it”?
You keep saying various words are meaningless. One would not want to use meaningless words, generally. OTOH, you have revealed elsewhere that you don’t use “meaningless” to mean “meaningless”. So who knows?
Consciousness is in the dictionary, chairness isn’t.
Consciousness is a concept used by science, chairness isn’t.
Consciousness is supported by empirical evidence, chairness isn’t.
It’s not that words are meaningless, it’s that you sometimes apply them in stupid ways. “Bitter” is a fine word, until you start discussing the “bitterness of purple”.
Are dictionary writers the ultimate arbiters of what is real? “Unicorn” is also in the dictionary, by the way.
The physicalist, medical definition of consciousness is used by science. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that’s what projection looks like.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can’t even come up with arguments why a silly concept I made up is flawed, maybe you shouldn’t be so certain in the meaningfulness of other concepts.
That the brain is not quiescent when experiencing pain is an objective fact. But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Please check out multiple realisability.
Because of that, no one can genuinely tell whether an advanced robot has genuine qualia. That includes you, although you are inclined to think that your subjective intuitions are objective knowledge.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
Sure, but what does that have to do with anything? Does “objective” mean “well understood” to you?
There are multiple representations of pain the same way that there are multiple representations of chair.
It is ridiculous how much of this debate is about the basic problem of classification, rather than anything to do with brains. Flawed reasoning starts with a postulate that “Pain” exists and then asks, what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain”, is a counterargument. Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks, what causes/predicts these actions/differences, and then wonders, how could we best classify these.
Calling my reasoning, even if not fully formal, “subjective intuitions” seems rude. I’m not sure if there is some point you’re trying to express with that.
Not sure where you see me talking about intelligence. But intelligence is far more well defined and measurable than consciousness. Multiple realizability has nothing to do with that.
We do, on the other hand, know subjectively what pain feels like.
That’s not the point. The point is that if we have words referring to subjective sensations, like “purple” and “bitter”, we can distinguish them subjectively. But if we discard our subjective insight into them, as you are proposing, and replace them with vague objective descriptions—vague, because no one knows precise descriptions of the full gamut of atomic configurations which implement pain—then you take a step backwards. You can’t distinguish a brain-scan of someone seeing purple from a brain-scan of someone tasting bitter. Basing semantics on objective facts, or “reality” as you call it, only works if you know which fact is which. You are promoting something which sounds good, but doesn’t work—as a research program. Of course it works just fine at getting applause from an audience of dualism-haters.
Are you talking about realisations or representations?
No one has made that argument. The point is not that it is not ultimately true that subjective states are brain states, it is that rejecting the subjective entirely, at this stage, is not useful. Quite the reverse. Consciousness is the only thing we know from the inside—why throw that away?
If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.
But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.
We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problems with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand, is that you believe these two methods to be radically different, when they are not. It’s as if you assume something is real, just because it comes out of people’s mouths.
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
Parts of my text are referring to the arguments I saw in wikipedia under “multiple realizability”. But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.
I’m still waiting for your empirical reasons why “purple is not bitter”, or better yet, “purple is not a chair”, if you feel the concept of bitterness is too subjective.
But not much of an argument for using semantics grounded in (physical) reality. Doing so does not buy you maximum precision in absolute terms, and, what is worse, the alternative of grounding terms for types of experience in 1st person experience can give you more precision.
You may believe that, but do you know it?
The difference is that I accept the possibility that first person evidence could falsify 3rd person theory.
I’m not taking 1st person to mean 3rd person reports of (someone else’s) 1st person experience.
What sort of precision are you talking about? More generally, you’ve repeatedly said that the concept of consciousness is very useful. I don’t think I’ve seen that usefulness. I suspect that elaborating here is your best bet to convince me of anything. Although even if you did convince me of the usefulness of the term, that wouldn’t help the “robot pain” problem much.
That’s a slightly weird question. Is it somehow different from “why do you believe that” ? I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary. It’s very likely that “zapping” isn’t quite sufficient, depending on how generously you interpret that word. But the idea that something cannot be learned through physical experiment, demands a lot of serious evidence, to say the least.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain, than if written down on paper. Obviously, paper is slower and less accurate. But you seem to be implying a more fundamental difference between those two methods of data storage. Why is that?
I like type theory. Let X be what I’m sitting on. Type of X is “chair”, type of “chair” is “category”, a painting of X is a representation of X, it is not a representation of “chair”. Representations of “chair”, in the same sense that painting represents X might not exist. Somehow I’m quite comfortable saying that an object of type Y is what represents Y. “Instantiates” might be the best word (curiously though, google uses “represent” to define it). Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
I have said that actual experience is useful to pin down the meanings of words referring to experience.
Not at all. That there is a difference between belief and knowledge is very standard.
There’s an extensive literature of arguments to the contrary.
It is the idea that you can learn about the inward or 1st person by purely outward or 3rd person means that is contentious.
No, I am saying that my first person is me, and your first person is you. So my first person information is my experience, not someone else’s report of their experience.
Well, you said that the two R words mean the same thing, when by established usage they don’t. That looks like a source of confusion to me.
I assure you that none of the beliefs I state here were generated by flipping a coin. They are all to some extent justified. That’s why the question is weird—did you expect me to answer “no”?
There is extensive literature of arguments in favor of god or homeopathy. Doesn’t make those things real. Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments (likewise for god and homeopathy). However you seem to have read quite a bit more, and you haven’t raised my confidence in the value of that literature so far.
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences. I’m not saying that this hypothesis is true, I’m only saying that you don’t know it to be false. And if it did happen to be true, then your internal reasoning about your experiences would not be much different from my reasoning about your experiences written on paper (aside from the low precision of our language). Curiously, a physical experiment is more likely to test this hypothesis, than your internal reasoning.
It is a potential source of confusion, but that doesn’t mean it’s causing any right now. Maybe if we talked about representations such as paintings, it would cause some. Regardless, I’ll try to use the words you prefer. Debating their differences and similarities is very orthogonal to our main topic.
You said there was a “lack” of arguments to the contrary, and I pointed out that there wasn’t.
Then why didn’t you say lack of good arguments? And why didn’t you say what is wrong with them?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
“Magic” is not a helpful phrase.
You need to distinguish ontology and epistemology. Experiences and memories and so on have their physical correlates—ontology—but that does not mean you can comprehend them—epistemology. We might be able to find ways of translating between correlates and experience, but only if we don’t ignore experience as an epistemology. But, again, taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology because epistemology and ontology are different.
Experience is experience, not reasoning about experience.
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with? Please explain your thought process, I really want to know. You see, productive debate requires some amount of generosity. I may not be polite, but I don’t think you’re illiterate or insane, and I don’t think I nitpick about things this obvious.
Maybe this is a symptom that you’re tired of the whole thread? You know you can stop whenever you want, right?
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves. That’s where my “baseless hypothesis” comes from.
To clarify, the hypothesis isn’t a direct response to something you said, it’s a new angle I want to look at, to help me understand what you’re talking about.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation. I realize that this might not be obvious. Though I feel that this is a natural use of the word.
That’s fine. There are some things that I’d want to pick on, although I’m not sure which of them are significant. But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Did you mean “consciousness”? To put it bluntly, if you haven’t heard of MR, there is probably a lot you don’t know about the subject.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
You yourself are blocking off the possibility of understanding consciousness, subjectivity and experience by refusing to allow them as prima-facie, pre-theoretic phenomena.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
I get that you hate, hate, hate dualism or anything else that threatens physical monism, but you can’t prove physical monism by begging the question against it. You are doing it no favours.
Nobody else has a problem with robot pain as a meaningful possibility. You do because you have removed the first person from your definitions.
Heh. That’s fair.
If having experiences is an important part of consciousness, then I’d expect you to reason about them, what induces them, their components, their similarities and differences. This “consciousness in general” phrasing is extremely weird.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
Of course, I mean your methodology starts...
I’m not sure that changes anything.
Can you argue your point? I can argue mine.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
The question “where did you start” has some bad assumptions. Of course at first we all have to start from the same naive point. If we did arbitrarily start from different unrelated assumptions, expecting to agree on anything would be weird.
So, what happened is that I started from naive assumptions, and arrived at physicalism. Then when I ask myself a new question, I start from where I last stopped—discarding all of my progress would be weird.
You may think that dropping an initial assumption is inherently wrong, but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary. You might be able to convince me that I do need to keep some similar assumption for technical reasons, but that wouldn’t solve the “robot pain” problem.
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name. But when I drop consciousness, my life gets easier. How does that work?
There is a difference between a working hypothesis and an unfalsifiable dogma. It seems to you that there is nothing to explain about consciousness because you only accept 3rd-person empirical data, because of your ontology.
Could you explain what assumption you are dropping, and why, without using the word “magical”?
I’d prefer if you settled on one claim.
That would be the problem for which there is no evidence except your say-so.
You can function practically without a concept of gravity, as people before Newton did. What you can get away with theoretically depends on what you are trying to explain. Perhaps there is a gravity sceptic out there somewhere insisting that “falling object” is a meaningless term, and that gravity is magic.
Is my position less falsifiable than yours? No, most statements about consciousness are unfalsifiable. I think that’s a strong hint that it’s a flawed concept.
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head. I dropped it because I found that physicalism explains everything better. “Better” doesn’t mean that I have all the answers about anything, it just means that the answers consciousness gives are even worse.
I don’t understand what your problem with “magical” is.
Well, I suppose an assumption could be unnecessary without being meaningless, so the words aren’t identical, but I do refer to the same thing, when I use them in this context. I also recall explaining how a “meaningless” statement can be considered “false”. The question is, why are you so uncomfortable with paraphrasing? Do you feel that there are some substantial differences? Honestly, I mostly do this to clarify what I mean, not to obscure it.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do. That’s a pretty big problem, regardless of what I say. Now, when I ask if this or that idea solves the “robot pain” problem, I’m not asking if it produces an actual test, I just ask for the smallest hint that maybe the test could exist.
That’s ridiculous. The mathematical law of gravity was written down by Newton, but the concept of gravity, in the sense that “things fall down”, is something most animals have. Do you literally think that nobody noticed gravity before Newton?
That’s not the problem.
The assumption is more that consciousness is something that needs explaining,
That’s wrong. If you can put a truth-value on a sentence, it is meaningful.
I think it is better to express yourself using words that mean what you are trying to express.
Yes. “Meaningless” , “immeasurable”, “unnecessary” and “non existent” all mean different things.
I think it is likely that your entire argument is based on vagueness and semantic confusion.
There is a real problem of not being able to test for a pain sensation directly.
Why did it take you so long to express it that way? Perhaps the problem is this:
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”. Perhaps you have to use vagueness and confusion to make the invalid inference seem valid.
Wow, so you agree with me here? Is it not a problem to you at all, or just not “the” problem?
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement “invisible unicorns are purple” is meaningless. The words aren’t all exactly the same, but that doesn’t mean they aren’t all appropriate.
A long long time ago you wrote: You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned into a problem with the word “pain”. So I assumed you understood that immeasurability is relevant here. Did you then forget?
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
Which issues exactly?
Why not? Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist? Does “‘robot pain’ is meaningless” follow from the same any better?
Meaningfulness, existence, etc.
Huh? It’s perfectly good as a standalone statement, it’s just that it doesn’t have much to do with meaning or measurability.
Not really, because you haven’t explained why meaning should depend on measurability.
It is evident that this is a major source of our disagreement. Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods.
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)
Where is this going? You can’t stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
A bit too vague. Can I clarify that as “Useless for communication, because it transfers no information”? Even though that’s a bit too strict.
What is stopping me from assigning them truth values? I’m sure you meant, “meaningless statements cannot be proven or disproven”. But “proof” is a problematic concept. You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument. Anyway, isn’t (1.) enough?
It’s still entirely about meaning, measurability and existence. I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not and what it could follow from. Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way”? Or maybe “invisible unicorns cannot be detected” does not follow from “we have no arguments suggesting that maybe ‘invisible unicorns’ could be something detectable”?
The fact that you can’t understand them.
If you can understand a statement as asserting the existence of something, it isn’t meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don’t.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
Because it needs premises along the lines of “what is not measurable is meaningless” and “what is meaningless is false”, but you have not been able to argue for either (except by gerrymandered definitions).
There’s an important difference between stipulating something to be undetectable … in any way, forever … and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is “true” in some way that has nothing to do with reality.
I’m trying to understand your definitions and how they’re different from mine.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
Well, you used it.
It’s bad because there’s nothing inside the box. It’s just an a priori argument.
I can also use “ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
But you could not have used it to make a point about links between meaning, detectabiity, and falsehood.
The implicit argument is that meaning/communication is not restricted to literal truth.
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam’s razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue for your “robot pain”, but I think it’s alright.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
Very low finite rather than infinitesimal or zero.
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a restriction. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I’d have to make up an argument why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws—it is devoid of any traces of meaningfulness.
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.
I doubt that’s a good thing. It hasn’t been very productive so far.
“Seriously, if you have no arguments, then don’t respond.”
People who live in glass houses shouldn’t throw stones.
It means “does not have a meaning.”
In general, it doesn’t apply to grammatically correct sentences, and definitely not to statements. It’s possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted.
If you can ask the question, “How do you know?”, then the thing has a meaning. I will show you an example of something meaningless:
faheuh fr dhwuidfh d dhwudhdww
Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don’t know it, then the thing has a meaning.
I’m sure you can see how unhelpful this is.
No.
It only explains the “-less” suffix. It’s fine as a dictionary definition, but that’s obviously not what I asked for. I need you to explain “meaning” as well.
You need no such thing, and as I said, we won’t be continuing the discussion of language until you show it has something to do with consciousness.
Noam Chomsky wrote “Colorless green ideas sleep furiously” in 1955.
Ideas don’t sleep, so they don’t sleep furiously. The sentence is false, not meaningless.
This topic has been discussed, fairly extensively.
Yes. No one has shown that it is meaningless and it pretty obviously is not.
That’s a definitions argument, isn’t it? Under some ideas of what “meaning”, well, means, such sentences are meaningful; under others they are not.
The problem with that is that if the word “meaning” has several meanings you will have a situation like this:
Suppose the word “meaning” has two meanings, A & B. But then we can ask what the word “meanings” means in the previous sentence: does it mean A, or B? If you answer that it means A, then the word “meaning” might have two meanings in the A sense, but five meanings in the B sense. But then we can ask what the word “meanings” means in the previous statement. And it might turn out that if the word “meanings” is taken in the B sense, the statement (about 2 and 5) is only true if we take the fourth meaning of the B sense, while in the 3rd sense, it has 7 meanings in the A sense, and 2 meanings in the B sense. And so on, ad infinitum.
All of that means that we have to accept a basic sense of meaning which comes before all the others if we want to talk about meaning at all. And in that basic sense, statements like that obviously have a meaning, whereas ones like “shirwho h wehjoeihqw dhfufh sjs” do not.
Your comment boils down to “It’s complicated, but I’m obviously right”. It’s not a very convincing argument.
Meaning is complicated. It is a function of at least four variables: the speaker, the listener, the message, and the context. It’s also well-trodden ground over which herds of philosophers regularly stampede and everything with the tag of “obviously” has been smashed into tiny little pieces by now.
You’re right about the “I’m obviously right” part, but not the rest. It boils down to “you have to start somewhere.” You can’t start out with many meanings of “meaning”, otherwise you don’t know what you mean by “meanings” in the sentence “I am starting out with many meanings of meaning.” You have to start with one meaning, and in that case you can know what you mean when you say “I am starting with one meaning of meaning.”
“eventually I found them unnecessary and unattractive”
It is typically considered unnecessary and unattractive to assert that the Emperor is naked.
There’s that word again.
Do you prefer “naive”? Not exactly the same thing, but similar.
The chair you are sitting on is a realisation; Van Gogh’s painting of his chair at Arles is a representation. You can’t sit on it.
That’s very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn’t have phenomenal properties, how do you decide which set of brain states get labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined that everything must be treated as physics from the outset, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable things. Everything in computer science is MR—all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that “exist apart” from their realisations. So I don’t know where you are getting that from.
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about MR.
Colour and taste are different categories, therefore category error.
No, I’m treating the identity of pain with the memories, thoughts and behaviors that express pain as unfalsifiable. In other words, I loosely define pain as “the thing that makes you say ouch”. That’s how definitions work—the theory that the thing I’m sitting on is a chair is also unfalsifiable. At that point the identity of pain with brain states is in principle falsifiable: you just induce the same state in two brains and observe only one saying ouch. Obviously, there are various difficulties with that exact scheme, it’s just a general sketch of how causality can be falsified.
I don’t recall suggesting that something isn’t MR. I don’t know why you think that MR is a problem for me. Like I said, there are multiple realizations of pain the same way that there are multiple realizations of chair.
Is that supposed to be a novel theory, or a dictionary definition?
You’re suggesting pain can’t be instantiated in robots.
Definition, as I state right in the next sentence, and then confirm in the one after that. Is my text that unreadable?
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question. In the same way that wondering if some table can also be a chair is a stupid question. If you feel that you need an answer, feel free to choose arbitrarily. But then, if you think that having an answer helps you somehow, you’re probably doing something very wrong.
In the case of a simulated human brain, it might seem more natural to call those states “pain”, but then if you don’t, nobody will be able to prove you wrong.
The question asked for a dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification using falsifiable empiricism. Uncontroversially, you also can achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard definitions—stipulated, gerrymandered, tendentious—is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, are not in fact talking about free will.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way...you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Definitions aren’t handed from god in stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
Who are you communicating to when you use your own definitions?
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Says you. Why should I believe that?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”, that would be ridiculous; the problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition, we have the usual classification problem, and with your definition the phrase is meaningless.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definition to the best of my ability, and point out the problems in those. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Solve it, then.
Prove that.
But using them proves nothing?
I am wondering who you communicate with when you use a private language?
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked, suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
I feel like we’ve talked about this. In fact, here: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvhm
Remember when you offered a stupid proof that “purple is bitter” is category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
Yes, definitions do not generally prove statements.
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be convincing. If you change the definition of pain to remove the subjective, felt aspect, then the resulting problem is easy to solve...but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other members of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements , you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me, how it makes sense to you.
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
I did say “generally”. Definitions do prove statements about those definitions. That is “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases, we already have X well defined as Z and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
Yes, that’s because your language is broken.
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
That can’t possibly work, as entirelyuseless has explained.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
The ordinary definition for pain clearly does exist, if that is what you mean.
Prove it.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Is that a fact or an opinion?
“highly unpleasant physical sensation caused by illness or injury.”
have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here”—has two meanings:
one is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too. another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
Of course, now I’ll say that I need “sensation” defined.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
You used the word; surely you meant something by it.
Proper as in proper Scotsman?
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
What?
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
In this case, “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
You keep saying it’s a broken concept.
That anything should feel like anything.
Circular as in
“Everything is made of matter. Matter is what everything is made of.”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Yes, if I had actually said that. By the way, matter exists in your universe too.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If you want to know what “pain” means, sit on a thumbtack.
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
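A toy sketch of that proposal, with entirely invented data: collect pairwise similarity reports between named brain states and use them as a crude similarity model. The state names and numbers below are assumptions for illustration, not real data.

```python
# Hypothetical sketch: model experience-similarity from self-reports.
# Reported judgments: 1.0 = "these two states feel identical",
# 0.0 = "completely different". All values are invented.
reports = {
    ("pinprick", "burn"): 0.8,
    ("pinprick", "itch"): 0.4,
    ("burn", "itch"): 0.3,
    ("itch", "tickle"): 0.7,
    ("pinprick", "tickle"): 0.1,
    ("burn", "tickle"): 0.1,
}

def similarity(a: str, b: str) -> float:
    """Look up a reported similarity, ignoring the order of the pair."""
    if a == b:
        return 1.0
    return reports.get((a, b), reports.get((b, a), 0.0))

def most_similar(state: str, candidates: list) -> str:
    """Which candidate is reported as feeling most like `state`?"""
    return max(candidates, key=lambda c: similarity(state, c))

print(most_similar("pinprick", ["burn", "itch", "tickle"]))  # burn
```

With enough such judgments, one could then test whether some objective distance between brain scans predicts the reported similarities.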
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
As I have previously pointed out, you cannot assume meaninglessness as a default.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain exactly how the algorithm would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
Are there classes of conscious entity?
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
Where exactly?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The idea that with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery, is in general a good one. Problem is that it requires good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
That’s cute.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?
“Please give an example of a subjective experience, other than consciousness, that has no physical evidence.”
All subjective experiences, including consciousness, are correlated with objective descriptions. E.g. a person who is awake can be described in ways objectively distinct from a person who is asleep. So there is always evidence for subjective experience. But that does not reduce the meaning of having a subjective experience to some objective description.
So for example “I am conscious” does not signify any objective description, but is correlated with various objective descriptions. Likewise, “I currently seem to see a blue object,” does not signify any objective description, but it is correlated with various objective descriptions.
Evidence is only evidence, in the full sense of the term, if you can interpret it.
The things are correlated. For example, every time I am awake and conscious, I have a relatively undamaged brain. So if someone else has an undamaged brain and does not appear to be sleeping, that is evidence that they are conscious.
“Meaning.” You keep using that word. I do not think it means what you think it means.
You’ll have to be more specific with your criticism.
“Meaning” refers to the fact that words are about things, and they are about whatever people want to talk about. You seem to be using the word rather differently, e.g. perhaps to refer to how you would test whether something is true, since you said that the word “pain” is meaningless applied to a robot since we have no way to test whether it feels pain. Or you have the idea that words are meaningless if they do not imply something in the “real world,” by which you understand an objective description. But since people talk about whatever they want to talk about, words can also signify subjective perceptions, and they do.
For starters, do we agree that the phrase “purple is bitter” is meaningless? Or at least that some grammatically correct strings of words can have no meaning?
“Purple is bitter” is not meaningless; it is false.
It is possible that some string of words that satisfies most or all grammatical rules is meaningless. However it is not possible that a string of words that a human says in order to convey their thoughts is meaningless; it will mean the thing they are thinking about.
Really? Synaesthesia aside, do you want to say that “Purple is not bitter” is true?
Are you equating meaning with intent?
Yes.
No. But if you have thoughts and intend to signify them, then your words have meaning from your thoughts.
Would you like to show some arguments why “Purple is not bitter” is true?
What about if you intend and fail?
Purple is a color and not a flavor. Bitter is a flavor and not a color. That strongly suggests that purple is not bitter.
That would depend on how exactly you failed. If you failed to speak any words, then obviously there was no meaning, although there was an intended meaning. If you failed to pronounce the words or type them correctly or whatever, there would be a vague spectrum from a situation similar to failing to speak any words, up to speaking them and succeeding. But there would be an intended meaning in every case, even if you failed to actually mean it.
I don’t think this is valid reasoning.
Cinchona tree bark is a part of a plant and not a flavour. Bitter is a flavour and not a part of a plant. That strongly suggests that this bark is not bitter.
The problem is that you are mixing the use of “bitter” as a noun and as an adjective. So there are two meanings, bitterness, and something bitter. You need to correct for that. It is obviously true that the bark is not bitterness, which is the relevant conclusion.
Huh? “Bitter” is an adjective—as you yourself say, the noun is “bitterness”.
In both phrases—purple is (not) bitter and this bark is not bitter—it’s an adjective.
By the way, consider another phrase: Red is hot.
Is it true or false?
The original context of this discussion is whether these things are meaningful. It should be pretty obvious that the whole discussion presupposes that they are, including your own remarks. So since this is obvious, there is no need for further discussion of whether they are true or false in particular.
In the proposition “purple is [not] bitter” it seems clear to me that “bitter” is being used adjectivally. Imagine someone with a variety of synaesthesia that makes them perceive bitterness whenever faced with something purple; then I would say that for them purple is bitter. (In much the same sense as we might say that quinine is bitter.) For most people, colour perception and taste perception are not linked in any such way and therefore purple is not bitter.
This seems reasonable to me. In any case the argument wasn’t really about whether purple is bitter, but whether the sentence “purple is bitter” has any meaning at all. In fact it obviously has at least one meaning (which you mention here) and potentially several.
Your solution seems to consist of adopting an ethics that is explicitly non-universal.
There’s a slippery slope there. You start with “very little X” and slide to “entirely non-X”.
“very little” is a polite way to say “nothing”. It makes sense, especially next to the vague “has to do with” construct. So there is no slope here.
To clarify, are you disagreeing with me?
Your argument is either unsound or invalid, but I’m not sure which. Of course, personal experience of subjective states does have *something* to do with detecting the same state in others.
Read the problem cousin_it posted again: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvd5
There is no detecting going on. If you’re clever (and have too much free time), you may come up with some ways that internal human experience helps to solve that problem, but nothing significant. That’s why I used “little” instead of “nothing”.
But I wasn’t talking about the CR, I was talking in general.
What do you mean by “reality”? If you’re an empiricist, as it looks like you are, you mean “that which influences our observations”. Now what is an “observation”? Good luck answering that question without resorting to qualia.
“observation” is what your roomba does to find the dirt on your floor.
How do you know? Does a falling rock also observe the gravitational field?
The same way I know what a chair is.
I’d have to say no here, but if you asked about plants observing light or even ice observing heat, I’d say “sure, why not”. There are various differences between what ice does, what roomba does, and what I do, however they are mostly quantitative and using one word for them all should be fine.
What are you basing this distinction on? More importantly, how is whatever you’re basing this distinction on relevant to grounding the concept of empirical reality?
Using Eliezer’s formulation of “making beliefs pay rents in anticipated experiences” may make the relevant point clearer here. Specifically, what’s an “experience”?
I would say that an object observes an event if it changes its state in response to this event. Yes, that’s a very low bar. First, gravity isn’t an event, so “observe” is an awkward word. We can instead say the rock “measures” gravity and then observe the results. Of course, if the gravity did change, the rock would presumably change its shape a tiny bit, which we may or may not count—that’s fine, “observation” is supposed to be on a spectrum.
Experiences are brain states, beliefs are also stored in the brain. Eliezer’s advice is equally good both for you and for a roomba, regardless of which of you is supposedly conscious. It may not work for plants or ice though—I don’t think I can find anything resembling beliefs in them, and even if I could, there would be no process to update them.
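As a minimal sketch of that point—belief updating needs only states and observations, nothing about consciousness—here is a toy Bayesian update that a roomba-like agent could run on the belief “this tile is dirty”. All probabilities are made-up assumptions for illustration.

```python
# Hypothetical sketch: "beliefs paying rent" for a roomba-like agent.
# A stored belief about dirt is updated by sensor readings; the same
# loop describes any observer, conscious or not.

def update_belief(prior: float, observed_dirt: bool,
                  p_obs_if_dirty: float = 0.9,
                  p_obs_if_clean: float = 0.2) -> float:
    """Bayes update on 'this tile is dirty' from one sensor reading."""
    likelihood = p_obs_if_dirty if observed_dirt else 1 - p_obs_if_dirty
    alt = p_obs_if_clean if observed_dirt else 1 - p_obs_if_clean
    posterior = likelihood * prior / (likelihood * prior + alt * (1 - prior))
    return posterior

belief = 0.5
belief = update_belief(belief, observed_dirt=True)
print(round(belief, 3))  # 0.818
```

The belief “pays rent” because it anticipates future sensor readings and gets revised when they disagree; nothing in the loop distinguishes the roomba from a human doing the same arithmetic on evidence.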
I agree with much of what you say but I am not sure it implies for cousin_it’s position what you think it does.
I’m sure it’s true that, as you put it elsewhere in the thread, consciousness is “extrapolated”: calling something conscious means that it resembles an awake normal human and not a rock, a human in a coma, etc., and there is no fact of the matter as to exactly how this should be extrapolated to (say) aliens or intelligent robots.
But this falls short of saying that at best, calling something conscious equals saying something about its externally observable behaviours.
For instance: suppose technology advances enough that we can (1) make exact duplicates of human beings, which (initially) exactly match the memories, personalities, capabilities, etc., of their originals, and (2) reversibly cause total paralysis in a human being, so that their mind no longer has any ability to produce externally observable effects, and (3) destroy a human being’s capacity for conscious thought while leaving autonomic functions like breathing normal.
(We can do #2 and #3 pretty well already, apart from reversibility. I want reversibility so that we can confirm later that the person was conscious while paralysed.)
So now we take a normal human being (clearly conscious). We duplicate them (#1). We paralyse them both (#2). Then we scramble the brain of one of them (#3). Then we observe them as much as you like.
I claim these two entities have exactly the same observable behaviours, past and present, but that we can reasonably consider one of them conscious and the other not. We can verify that one of them was conscious by reversing the paralysis. Verifying that the other wasn’t depends on our confidence that mashing up most of their cerebral cortex (or whatever horrible thing we did in #3) really destroys consciousness, but this seems like a thing we could reasonably be quite confident of.
You might say that our judgement that one of these (ex-?) human beings is conscious is dependent on our ability to reverse the paralysis and check. But, given enough evidence that the induction of paralysis is harmlessly reversible, I claim we could be very confident even if we knew that after (say) a week both would be killed without the paralysis ever being reversed.
Indeed, we can always make two things seem indistinguishable, if we eliminate all of our abilities to distinguish them. The two bodies in your case could still be distinguished with an fMRI scan, or similar tool. This might not count as “behavior”, but then I never wanted “behavior” to literally mean “hand movements”.
I think you could remove that by putting the two people into magical impenetrable boxes and then randomly killing one of them, through some Schrödinger’s-cat-like process. But I wouldn’t find that very interesting either. Yes, you can hide information, but it’s not just information about consciousness you’re hiding, but also about “ability to do arithmetic” and many other things. Now, if you could remove consciousness without removing anything else, that would be very interesting.
OK, so what did you mean by “behaviour” if it includes things you can only discover with an fMRI scan? (Possible “extreme” case: you simply mean that consciousness is something that happens in the physical world and supervenes on arrangements of atoms and fields and whatnot; I don’t think many here would disagree with that.)
If the criteria for consciousness include things you can’t observe “normally” but need fMRI scans and the like for (for the avoidance of doubt, I agree that they do) then you no longer have any excuse for answering “yes” to that last question.
My point wasn’t about hiding information; it was that much of the relevant information is already hidden, which you seemed to be denying when you said consciousness is just a matter of “behaviours”. It now seems like you weren’t intending to deny that at all; but in that case I no longer understand how what you’re saying is relevant to the OP.
The word behavior doesn’t really feature much in the ongoing discussions I have. My first post was an answer to the OP, not meant as a stand-alone truth. But obviously, if “consciousness” means anything, it’s a thing that happens in the brain—I’d say it’s the thing that makes complex and human-like behaviors possible.
Normally is the key word here. There is nothing normal about your scenario. I need an fMRI scan for it, because there is nothing else that I can observe. Compared to that, the human in a box communicating through speech is very normal and quite sufficient. Unless the human is mute or malicious. Then I might need more complex tools.
It’s obscured, sure. But truly hiding information is hard. Speech isn’t that narrow of a window, by the way. Now, if I had to communicate with the agent in the box by sending one bit of information back and forth, that would be more of a problem.
I think you’re doing some priming here by adding “dutifully”.
I believe that causally the output “I see red” is connected to the actual experience of seeing red; while it’s possible (depending on the level of optimization) that the optimized upload is saying “icey led” with an accent to sound like the expected output, it still seems more plausible that the brain structure generating the red experience is preserved (maintaining a causal representation of reality is generally more optimal/compressed than maintaining a lookup table).
You wouldn’t think that a book or Eliza program saying “I see red” were conscious, right? The question is whether optimizing an upload can make it close to an Eliza program for some topics. I think it’s possible, given how little we can say about consciousness (i.e. how few different responses we’d need to code into the Eliza program).
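To make the worry concrete, here is a minimal Eliza-style responder (the rules are hypothetical, not taken from any actual program): the question is how few canned rules might cover our usual “consciousness talk”.

```python
# Minimal Eliza-style sketch (hypothetical rules, invented for
# illustration): keyword matching with canned replies, no inner
# experience anywhere in the loop.

RULES = [
    ("are you conscious", "Yes, I'm conscious."),
    ("what do you see", "I see red."),
    ("what is it like", "It's hard to put into words, but it feels vivid."),
]

DEFAULT = "Could you rephrase that?"

def respond(utterance: str) -> str:
    """Return the first canned reply whose keyword appears in the input."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT

print(respond("Tell me, are you conscious?"))  # Yes, I'm conscious.
```

The point of the sketch: if the set of distinct things we can say about consciousness is small, an optimizer might replace the relevant brain structure with something this shallow while preserving input-output behavior on that topic.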
Not disagreeing in principle, it depends on the degree of optimization and the set of data you expect the upload to have low error on. Eliza will succeed on a very small set of data but will fail quickly on anything close to real-life. It’s possible that there’s a more compact representation that results in “I see red” than the DAG with consciousness in it, but I don’t think it’s that easy to optimize out without breaking other tests. BTW you’ve read Blindsight right? Great scifi on this topic basically (with aliens instead of uploads)