Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent?
We probably do not disagree about what it means, but we disagree about what we are saying it means. I do say it means “without any reason,” although we can take it more specifically as “without any reason of the kind we are currently thinking about.”
I suggest that if the word meant “without good reason”, the word would be more useful.
If we take it as I suggested, it would be possible in some cases for it to mean “without good reason,” namely without a reason of the kind we are currently thinking about, namely a good one.
In general, this topic came up because you were asserting that questions like “are tables also chairs” are stupid and only have arbitrary answers. If arbitrary means that there cannot be a good reason, then you are mistaken, because we have good reason for saying that tables are not chairs, and the stupidity would only be in saying that they are chairs, not in saying that they are not.
In regard to the issue of consciousness, the question is indeed a useless distraction. It is true that words like “pain” or even “consciousness” itself are vague, as are all words, and we exercise judgement when we extend them to new cases. That does not mean there is never a good reason to extend them. But more importantly, when we consider whether to extend “chair” to a new case, we can at least see what the thing looks like. In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain. And so the case is quite different from the case of the chair: as I said before, it is like asking if an unknown object behind a screen is a chair or not. Unknown, but definitely not arbitrary.
“without any reason of the kind we are currently thinking about.”
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
Here’s an example of weird reasons. How can shape not determine the difference? If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects? IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn’t? Now, one could say that a stool can be both a chair and a table, and I think that’s what IKEA does, but then you’ve already claimed this to be impossible.
In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem. There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
“Properties of the objects being classified” are much more extensive than you realize. For example, it is a property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects?
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. That is, being made as a table, it is physically unsuitable to be used as a seat. And consequently if it did collapse, it would be quite correct to say, “This collapsed because you were using it as a stool even though it is not one.”
That said, I already said that the intention of the makers is not 100% determining.
That’s assuming that “feeling” is a meaningful category.
That’s not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a “chair.” In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it, is not an assumption. If it turned out that I did not have a brain, I would not say, “I must have been wrong about suffering pain.” I would say “My pain does not depend on a brain.” I pointed out your error in this matter several times earlier—the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states—we are talking about the similarity of two feelings. So it does not matter if the robot’s brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar, even if they depended on radically different physical objects like the moon and Mt. Everest.
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test if a function f(X,Y) depends not only on X, but also on Y, we have to find a point where f(X1, Y1) is not equal to f(X1, Y2). Are you saying that sometimes intention matters, just not for chairs? If not, I can only assume that intention doesn’t determine anything and only shape is important.
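The dependency test described above can be sketched in code. This is a toy illustration of the general point only; the `classify` function and its categories are invented for the example and are not anyone’s actual theory of chairs.

```python
# Toy dependency test: to show that classify() depends on `intention` and not
# only on `shape`, we need a single shape X1 for which changing the intention
# changes the output, i.e. f(X1, Y1) != f(X1, Y2).

def classify(shape, intention):
    # Hypothetical classifier: shape dominates, intention breaks ties
    # only for an ambiguous shape (an assumption made for illustration).
    if shape == "hammer-shaped":
        return "hammer"
    if shape == "flat-topped, knee-high":
        return "coffee table" if intention == "for resting cups" else "stool"
    return "unknown"

x1 = "flat-topped, knee-high"
# Same shape, different intention, different classification:
# intention matters at this point.
assert classify(x1, "for resting cups") != classify(x1, "for sitting")

# For the hammer shape, intention changes nothing: f(X2, Y1) == f(X2, Y2).
assert classify("hammer-shaped", "for sitting") == "hammer"
```

The test only establishes dependence if such a point (X1, Y1) vs. (X1, Y2) exists; if the outputs agree at every shape, intention is doing no classificatory work.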
I already notice the similarity between all the things that are called feelings
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
If it turned out that I did not have a brain
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing. Unfortunately for you, you do have a brain.
It matters whether it feels similar
Are you going to feel the robot’s feeling and compare?
Are you saying that sometimes intention matters, just not for chairs?
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
Unquestionably, it can be meaningfully extended to robots. You simply mean the same thing that you mean in the regular case. The only question is whether there is any feeling there, not if “feeling” has a meaning, since we already admitted that it does have a meaning.
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing.
The possibility of falsification is a good thing for a physical theory. It is a very bad thing for a theory of the meaning of a word. As you already admitted, the fact that the pieces of furniture we normally sit on are called chairs is not subject to falsification, because that is just what is meant by “chair.” But a physical theory of a chair, e.g. “chairs are made of atoms,” is falsifiable, since someone could examine a chair and discover that it was not made of atoms. He would not then say, “We have discovered that ‘chair’ meant something different from what we thought.” He would say, “We knew what ‘chair’ meant, and that is unchanged, but we have learned something new about the physical constitution of chairs.”
In the same way, I am referring to certain feelings when I talk about “pain.” The fact that the word pain refers to those feelings cannot be falsified, because it is just what that word means. But whether pain depends on a brain activity is a falsifiable physical theory; it has nothing to do with the meaning of the word “pain.”
Unfortunately for you, you do have a brain.
Assuming that I do, that is fortunate, not unfortunate. But as I was saying, neither you nor I know that I do, since neither of us has seen the inside of my head.
Are you going to feel the robot’s feeling and compare?
No. The question is not whether the robot has a feeling which feels to me similar to my feeling of pain; the question is whether the robot has a feeling that feels to the robot the same way that my feeling feels to me. And since this has two subjects in it, there is no subject that can feel them both and compare them. And this is just how it is, whether you like it or not, and this is what “pain” refers to, whether you like it or not.
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Can you actually support your claim that intention matters? To clarify, I’m suggesting that intention merely correlates with shape, but has no predictive power on its own.
It is a very bad thing for a theory of the meaning of a word.
It’s somewhat complicated. “Experiences are brain states” is to an extent a theory. “Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition. Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
And since this has two subjects in it, there is no subject that can feel them both and compare them.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
Can you actually support your claim that intention matters?
Artificial things are made for a purpose, and being made for a purpose is part of why they are called what they are called. This is an obvious fact about how these words are used and does not need additional support.
“Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition.
If you mean pain is the conscious state that follows in that situation, yes; if you mean the third person state that follows, no.
Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason. Definitions shouldn’t be falsifiable, and are not physical theories.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
No. The stars outside the event horizon of the visible universe are similar to the stars that we can see, but there is no way to compare them.
One can however ask the question, “Are the stars similar?” and one can answer yes or no. In the same way we can ask if the robot feels like we do and we can say yes or no. But there is no access to the answer here, just as there is no access in the case of the stars. That has nothing to do with the fact that either they are similar, or they are not, both in the case of the robot, and in the case of the stars.
This is an obvious fact about how these words are used and does not need additional support.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I’m asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you’re supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole “does not need additional support” thing inspires no confidence.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
The stars outside the event horizon
I hate bringing up modern physics, it has limited relevance here. Maybe they’ll figure out faster than light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I’d love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
I’m asking whether the relationship between intention and classification is causal, or just a correlation.
It is causal, but not infallible.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
That’s your problem. Everyone else will still call it “the sun,” and when you say “the sun didn’t rise this morning,” your statement will still be false.
we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too.
Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.
What words? The word “causal”? I’m asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
As I said, this is how these words work, that is words like “chair” and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.
If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
And those things are true even given the same form
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
I am not talking about evidence, but about meaning; when we say, “this is a chair,” part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting.
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
I don’t know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing, just as, if you made something in the shape of a hammer and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly by more closely matching the meaning of “chair.”
I don’t know where you think that was established.
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
In particular, consider my answer to your next question, because it is basically the same thing again.
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair?
There is no guarantee of this, because the word “chair” is vague. But it is true that there would be more reason to call the second rock a chair—that is, the meaning of “chair” would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation.
Would simply sitting on the first rock convert it into a chair?
In general, no, because the word “chair” does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on.
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
If you are not ignorant of how the word is used, you do have to involve the intention of the maker.
By acting like you actually want to understand what is being said
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer. Here I’m only asking you for a black and white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they’re conceding.
In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
What exactly do you mean by “vague”? The word “chair” refers to the category of chairs. Is the category itself “vague”?
I have been telling you from the beginning, that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair. Apparently one that you have more knowledge of than I. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff. By the way, this thread started here where you explicitly start talking about words and meanings, so that’s what we’re talking about.
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is “in some cases yes, in some cases no, depending on the particular circumstances.”
What exactly do you mean by “vague”?
First of all, all words are vague, so there is no such thing as “what exactly do you mean by.” No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Is the category itself “vague”?
Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones.
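As an illustration only (my construction, using a principal-axis projection as a crude stand-in for factor analysis, with made-up data), here is a sketch of why such a process yields a continuum rather than sharp categories: each object gets a continuous score on the latent factor, and any boundary between categories is a cutoff imposed on that continuum.

```python
import numpy as np

# Two overlapping clusters of observable features (say, seat height and
# surface flatness) for "chairs" and "tables". Entirely invented data.
rng = np.random.default_rng(0)
chairs = rng.normal(loc=[0.4, 0.5], scale=0.25, size=(50, 2))
tables = rng.normal(loc=[0.7, 0.8], scale=0.25, size=(50, 2))
data = np.vstack([chairs, tables])

# First principal axis, a crude stand-in for the dominant latent factor
# that a factor analysis would extract from such measurements.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[0]  # each object's continuous factor score

# The sorted scores form a continuum; the gaps between adjacent scores
# show no clean break here, so any "chair-like" vs. "table-like" boundary
# would be a chosen cutoff rather than a natural division in the data.
gaps = np.diff(np.sort(scores))
print("largest gap:", float(gaps.max()), "full range:", float(np.ptp(scores)))
```

With overlapping clusters like these, the latent scores interleave, which is the sense in which the resulting categories are vague: the scale is real, but the boundary on it is not given by the data.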
I have been telling you from the beginning, that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair.
It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff.
First of all, you are the one who needs the “language 101” stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t. You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.” Those are two entirely separate things. Likewise, you have been confusing “this statement is vague” with “this statement is not testable.” These are two entirely separate things.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
Second, consider a line of stars outside the visible universe, except that some of the stars, on the right, are identical white dwarfs, and the ones to the left of them are identical red giants. Where exactly do the stars stop being white dwarfs and begin being red giants? This time, we cannot answer the question because there is no test to give us the answer. But vagueness is not an issue, because there is a sharp division between the two parts. We simply cannot find it by testing.
Third, consider a line of stars outside the visible universe, constructed as in the first case. This time, there are two problems: we cannot test where the boundary is, and the boundary is vague. These are two entirely different issues.
Fourth, consider a line of things where the one on the left is a statue, the one on the right is a human being, and somewhere in the middle there are robotic things. Each thing differs by a single atom from the thing on its left, and from the thing on its right.
Now we have the question: “The statue is not conscious. The human being is conscious. Is the central robot conscious?” There are two separate issues here. One is that we cannot test for consciousness. The second is that the word “conscious” is vague. These are two entirely separate issues, just as they are in the above cases of the stars.
Let us prove this. Suppose you are the human being on the right. We begin to modify you, one atom at a time, moving you to the left. Now the issue is testable: you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious. Note that this is quite different from anyone else asking the thing if it is conscious, because the question “does this thing say it is conscious” is not the same as “is this thing conscious.” But being conscious is having a first person point of view, so if you can ask yourself anything, you are conscious. Unfortunately, long before you cease to be conscious, you will cease to be able to ask yourself any questions. So you will still not be able to find a definite boundary between conscious and not conscious. Nonetheless, this proves that testability is entirely separate from vagueness.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to
Well, that explains a lot. It’s not exactly ancient history, and everything is properly quoted, so you really should know what I’m talking about. Yes, it’s about the identical table-chairs question from IKEA discussion, the one that I linked to just a few posts above.
Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Why are there no determinate boundaries though? I’m saying that boundaries are unclear only if you haven’t yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
All categories are vague, because they are generated by a process similar to factor analysis
There is nothing vague about the results of factor analysis.
It is false that the meanings are arbitrary, for the reasons I have said.
On this topic, last we seemed to have agreed that “arbitrary” classification means “without reasons related to the properties of the objects classified”. I don’t recall you ever giving any such reasons.
It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
For example, you have said ‘”are tables also chairs” has a definite answer’. Note the word “definite”. You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way “natural” is the opposite of “arbitrary”.
All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things.
Yeah, I recall saying something like that myself. And the rest of your claims don’t go well with this one.
you are the one who needs the “language 101” stuff
Well, you decided that I need it, then made some wild and unsupported claims.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.”
Yes, the two statements are largely equivalent. Oddly, I don’t recall you mentioning testability or measurability anywhere in this thread before (I think there was something in another thread though).
Likewise, you have been confusing “this statement is vague” with “this statement is not testable.”
I don’t think I’ve done that. It’s unfortunate that after this you spent so much time trying to prove something I don’t really disagree with. Why did you think that I’m confusing these things? Please quote.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you. You say “cannot possibly” and then give no reasons why.
I however have no problems with the vagueness here, because the two categories are only shorthands for some very specific properties of the stars (like mass). This is not true for “consciousness”.
Nonetheless, this proves that testability is entirely separate from vagueness.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you.
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.)
It is intrinsically vague: “Red giant” does not and cannot have precise boundaries, as is true of all words. The same is true of “White dwarf.” If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words.
The rest does not respond to the comparison about consciousness, and as I said we won’t be discussing the comments on language.
“Red giant” does not and cannot have precise boundaries
Again, you make a claim and then offer no arguments to support it. “Red giant” is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t.
You started the language discussion, but I have to explain why we’re continuing it? I continue, because I suspect that the reasoning errors you’re making about chairs are similar to the errors you’re making about consciousness, and chairs are easier to talk about. But it’s only a suspicion. Also, I continue, because you’ve made some ridiculous claims and I’m not going to ignore them.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences
Assuming “experience” is a meaningful category.
with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
Yes, “feeling” and “experience”, are pretty much the same thing, I didn’t mean to imply otherwise in the text you quoted. Instead, the first sentence refers to your definition, and the second offers an alternative one.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
There is a classification problem with tables and chairs. Generally, I know what chairs and tables are supposed to be like, but there are objects similar both to chairs and to tables, and there isn’t any obvious way to choose which of those similarities are more important. At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
Yes, “feeling” and “experience”, are pretty much the same thing,
So they are either both meaningful, or both meaningless. But you have used “experience” as though it is meaningful,
and you have implied that “feeling” is meaningless.
That was a predictable problem. Physical identity theory requires statements of the form “[mental state] is equivalent to [physical state]”. If you reject all vocabulary relating to mental states, you cannot make that kind of statement, and so cannot express identity theory.
At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
Whereas, from my point of view, 1st person experience was there all along.
But you have used “experience” as though it is meaningful, and you have implied that “feeling” is meaningless.
No, I used “experience” as a label. Let me rewrite that part:
That’s assuming that “experience”, as you use that word, is a meaningful category. If you didn’t start from that assumption, and instead defined experiences as brain states, you could …
Is that better? I understand that having two definitions and two similar but not identical concepts in one sentence is confusing. But still I expect you to figure it out. Was “identified” the problem?
Physical identity theory requires statements of the form “[mental state] is equivalent to [physical state]”. If you reject all vocabulary relating to mental states <...>
What vocabulary relating to what mental states do I reject? Give examples.
Whereas, from my point of view, 1st person experience was there all along.
Wasn’t “chairness” there too? More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
That’s assuming that “experience”, as you use that word, is a meaningful category.
Does “meaningful” mean “meaningful” here, or is it being used as a misleading proxy for something like “immeasurable” or “unnecessary” or “tadasdatys doesn’t like it”?
What vocabulary relating to what mental states do I reject?
You keep saying various words are meaningless. One would not want to use meaningless words, generally. OTOH, you have revealed elsewhere that you don’t use “meaningless” to mean “meaningless”. So who knows?
More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
Consciousness is in the dictionary, chairness isn’t.
Consciousness is a concept used by science, chairness isn’t.
Consciousness is supported by empirical evidence, chairness isn’t.
It’s not that words are meaningless, it’s that you sometimes apply them in stupid ways. “Bitter” is a fine word, until you start discussing the “bitterness of purple”.
Consciousness is in the dictionary, chairness isn’t.
Are dictionary writers the ultimate arbiters of what is real? “Unicorn” is also in the dictionary, by the way.
Consciousness is a concept used by science, chairness isn’t.
The physicalist, medical definition of consciousness is used by science. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that’s what projection looks like.
Consciousness is supported by empirical evidence, chairness isn’t.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can’t even come up with arguments why a silly concept I made up is flawed, maybe you shouldn’t be so certain in the meaningfulness of other concepts.
We probably do not disagree about what it means, but we disagree about what we are saying it means. I do say it means without any reason, although we can take it more specifically as “without any reason of the kind we are currently thinking about.”
If we take as I suggested, it would be possible in some cases to mean “without good reason,” namely without a reason of the kind we are currently thinking about, namely a good one.
In general, this topic came up because you were asserting that questions like “are tables also chairs” are stupid and only have arbitrary answers. If arbitrary means that there cannot be a good reason, then you are mistaken, because we have good reason for saying that tables are not chairs, and the stupidity would only be in saying that they are chairs, not in saying that they are not.
In regard to the issue of consciousness, the question is indeed a useless distraction. It is true that words like “pain” or even “consciousness” itself are vague, as are all words, and we exercise judgement when we extend them to new cases. That does not mean there is never a good reason to extend them. But more importantly, when we consider whether to extend “chair” to a new case, we can at least see what the thing looks like. In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain. And so the case is quite different from the case of the chair: as I said before, it is like asking if an unknown object behind a screen is a chair or not. Unknown, but definitely not arbitrary.
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
Here’s an example of weird reasons. How can shape not determine the difference? If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects? IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn’t? Now, one could say that a stool can be both a chair and a table, and I think that’s what IKEA does, but then you’ve already claimed this to be impossible.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem. There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
“Properties of the objects being classified” are much more extensive than you realize. For example, it is a property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. In other words, being made as a table, it is physically unsuitable to be used as a seat. And consequently if it did collapse, it would be quite correct to say, “This collapsed because you were using it as a stool even though it is not one.”
That said, I already said that the intention of the makers is not 100% determining.
That’s not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a “chair.” In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it is not an assumption. If it turned out that I did not have a brain, I would not say, “I must have been wrong about suffering pain.” I would say “My pain does not depend on a brain.” I pointed out your error in this matter several times earlier—the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states—we are talking about the similarity of two feelings. So it does not matter if the robot’s brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar, even if they depended on radically different physical objects like the moon and Mt. Everest.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test if a function f(X,Y) depends not only on X, but also on Y, we have to find a point where f(X1, Y1) is not equal to f(X1, Y2). Are you saying that sometimes intention matters, just not for chairs? If not, I can only assume that intention doesn’t determine anything and only shape is important.
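The dependence test just described can be sketched in code. Everything below is hypothetical (a toy classifier invented for illustration, not anyone’s actual position): it shows what it would mean, in the f(X, Y) terms above, for intention (Y) to make no difference once shape (X) is fixed.

```python
# Toy sketch of the dependence test: f depends on Y only if we can find
# some X1 with f(X1, Y1) != f(X1, Y2). This made-up classifier ignores
# intention entirely, so no such pair exists.

def classify(shape, intention):
    # Invented example: the label follows shape alone; `intention` is unused.
    return "chair" if shape == "chair-shaped" else "table"

def depends_on_intention(shapes, intentions):
    """True if classify() ever gives different labels for the same shape
    under different intentions."""
    return any(
        classify(s, i1) != classify(s, i2)
        for s in shapes
        for i1 in intentions
        for i2 in intentions
    )

print(depends_on_intention(
    ["chair-shaped", "table-shaped"],
    ["for sitting", "for putting things on"],
))  # prints False: this toy f(X, Y) depends only on X
```

If intention were causal rather than merely correlated, some pair of identical shapes with different intentions would have to receive different labels, and the function above would return True.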
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing. Unfortunately for you, you do have a brain.
Are you going to feel the robot’s feeling and compare?
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Unquestionably, it can be meaningfully extended to robots. You simply mean the same thing that you mean in the regular case. The only question is whether there is any feeling there, not if “feeling” has a meaning, since we already admitted that it does have a meaning.
The possibility of falsification is a good thing for a physical theory. It is a very bad thing for a theory of the meaning of a word. As you already admitted, the fact that the pieces of furniture we normally sit on are called chairs is not subject to falsification, because that is just what is meant by “chair.” But a physical theory of a chair, e.g. “chairs are made of atoms,” is falsifiable, since someone could examine a chair and discover that it was not made of atoms. He would not then say, “We have discovered that ‘chair’ meant something different from what we thought.” He would say, “We knew what ‘chair’ meant, and that is unchanged, but we have learned something new about the physical constitution of chairs.”
In the same way, I am referring to certain feelings when I talk about “pain.” The fact that the word pain refers to those feelings cannot be falsified, because it is just what that word means. But whether pain depends on a brain activity is a falsifiable physical theory; it has nothing to do with the meaning of the word “pain.”
Assuming that I do, that is fortunate, not unfortunate. But as I was saying, neither you nor I know that I do, since neither of us has seen the inside of my head.
No. The question is not whether the robot has a feeling which feels similar to me as my feeling of pain; the question is whether the robot has a feeling that feels to the robot the same way that my feeling feels to me. And since this has two subjects in it, there is no subject that can feel them both and compare them. And this is just how it is, whether you like it or not, and this is what “pain” refers to, whether you like it or not.
Can you actually support your claim that intention matters? To clarify, I’m suggesting that intention merely correlates with shape, but has no predictive power on its own.
It’s somewhat complicated. “Experiences are brain states” is to an extent a theory. “Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition. Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
Artificial things are made for a purpose, and being made for a purpose is part of why they are called what they are called. This is an obvious fact about how these words are used and does not need additional support.
If you mean pain is the conscious state that follows in that situation, yes; if you mean the third person state that follows, no.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason. Definitions shouldn’t be falsifiable, and are not physical theories.
No. The stars outside the event horizon of the visible universe are similar to the stars that we can see, but there is no way to compare them.
One can however ask the question, “Are the stars similar?” and one can answer yes or no. In the same way we can ask if the robot feels like we do and we can say yes and no. But there is no access to the answer here, just as there is no access in the case of the stars. That has nothing to do with the fact that either they are similar, or they are not, both in the case of the robot, and in the case of the stars.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I’m asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you’re supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole “does not need additional support” thing inspires no confidence.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
I hate bringing up modern physics, it has limited relevance here. Maybe they’ll figure out faster than light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I’d love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
It is causal, but not infallible.
That’s your problem. Everyone else will still call it “the sun,” and when you say “the sun didn’t rise this morning,” your statement will still be false.
Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.
Ok, do you have any arguments to support that claim?
That may depend on the specific circumstances of the discovery. Also, different people can use the same words in different ways.
Arguments like what?
As I said, this is how people use the words.
Like yours, for example.
What words? The word “causal”? I’m asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
I understand the difference, and I have no difficulties here. I said it was causal, not merely correlative.
Ok, do you have any arguments to support that it is causal?
As I said, this is how these words work, that is words like “chair” and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
I am not talking about evidence, but about meaning; when we say, “this is a chair,” part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting.
I don’t know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing, so that if you made something in the shape of a hammer, and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly by more closely matching the meaning of “chair.”
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn’t want to call a chair-shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
In particular, consider my answer to your next question, because it is basically the same thing again.
There is no guarantee of this, because the word “chair” is vague. But it is true that there would be more reason to call the second rock a chair—that is, the meaning of “chair” would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation.
In general, no, because the word “chair” does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on.
If you are not ignorant of how the word is used, you do have to involve the intention of the maker.
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer. Here I’m only asking you for a black and white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they’re conceding.
What exactly do you mean by “vague”? The word “chair” refers to the category of chairs. Is the category itself “vague”?
I have been telling you from the beginning, that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair. Apparently one that you have more knowledge of than I. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff. By the way, this thread started here where you explicitly start talking about words and meanings, so that’s what we’re talking about.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is “in some cases yes, in some cases no, depending on the particular circumstances.”
First of all, all words are vague, so there is no such thing as “what exactly do you mean by.” No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones.
It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)
First of all, you are the one who needs the “language 101” stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t. You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.” Those are two entirely separate things. Likewise, you have been confusing “this statement is vague” with “this statement is not testable.” These are two entirely separate things.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
Second, consider a line of stars outside the visible universe, except that some of the stars, on the right, are identical white dwarfs, and the ones to the left of them are identical red giants. Where exactly do the stars stop being white dwarfs and begin being red giants? This time, we cannot answer the question because there is no test to give us the answer. But vagueness is not an issue, because there is a sharp division between the two parts. We simply cannot find it by testing.
Third, consider a line of stars outside the visible universe, constructed as in the first case. This time, there are two problems: we cannot test where the boundary is, and the boundary is vague. These are two entirely different issues.
Fourth, consider a line of things where the one on the left is a statue, the one on the right is a human being, and somewhere in the middle there are robotic things. Each thing differs by a single atom from the thing on its left, and from the thing on its right.
Now we have the question: “The statue is not conscious. The human being is conscious. Is the central robot conscious?” There are two separate issues here. One is that we cannot test for consciousness. The second is that the word “conscious” is vague. These are two entirely separate issues, just as they are in the above cases of the stars.
Let us prove this. Suppose you are the human being on the right. We begin to modify you, one atom at a time, moving you to the left. Now the issue is testable: you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious. Note that this is quite different from anyone else asking the thing if it is conscious, because the question “does this thing say it is conscious” is not the same as “is this thing conscious.” But being conscious is having a first person point of view, so if you can ask yourself anything, you are conscious. Unfortunately, long before you cease to be conscious, you will cease to be able to ask yourself any questions. So you will still not be able to find a definite boundary between conscious and not conscious. Nonetheless, this proves that testability is entirely separate from vagueness.
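As an aside, the line-of-stars setup above can be made concrete with a toy sketch (the masses and cutoffs are invented numbers, not astronomy). The point it illustrates is that the labels only become sharp once a cutoff is stipulated, and the stipulation is supplied by us, not by the words themselves:

```python
# Toy line of stars: masses shade continuously from red-giant-like (1.0)
# down to white-dwarf-like (0.0). All numbers are invented.
masses = [1.0 - i / 100 for i in range(101)]

def label(mass, cutoff):
    # Any sharp boundary must be stipulated via `cutoff`; the terms
    # "red giant" / "white dwarf" do not fix its value by themselves.
    return "red giant" if mass > cutoff else "white dwarf"

# Two equally defensible cutoffs disagree about the star in the middle:
print(label(masses[50], cutoff=0.49))  # prints: red giant
print(label(masses[50], cutoff=0.51))  # prints: white dwarf
```

Every star here is fully testable (we can read off its mass exactly), yet the middle of the line still gets different labels under different stipulated cutoffs, which is the sense in which vagueness is separate from testability.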
Well, that explains a lot. It’s not exactly ancient history, and everything is properly quoted, so you really should know what I’m talking about. Yes, it’s about the identical table-chairs question from IKEA discussion, the one that I linked to just a few posts above.
Why are there no determinate boundaries though? I’m saying that boundaries are unclear only if you haven’t yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
There is nothing vague about the results of factor analysis.
On this topic, last we seemed to have agreed that “arbitrary” classification means “without reasons related to the properties of the objects classified”. I don’t recall you ever giving any such reasons.
For example, you have said ‘”are tables also chairs” has a definite answer’. Note the word “definite”. You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way “natural” is the opposite of “arbitrary”.
Yeah, I recall saying something like that myself. And the rest of your claims don’t go well with this one.
you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious
It’s not a test if “no” is unobservable.
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.)
It is intrinsically vague: “Red giant” does not and cannot have precise boundaries, as is true of all words. The same is true of “White dwarf.” If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words.
The rest does not respond to the comparison about consciousness, and as I said we won’t be discussing the comments on language.
Again, you make a claim and then offer no arguments to support it. “Red giant” is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
You started the language discussion, but I have to explain why we’re continuing it? I continue, because I suspect that the reasoning errors you’re making about chairs are similar to the errors you’re making about consciousness, and chairs are easier to talk about. But it’s only a suspicion. Also, I continue, because you’ve made some ridiculous claims and I’m not going to ignore them.
Assuming “experience” is a meaningful category.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
Yes, “feeling” and “experience” are pretty much the same thing, I didn’t mean to imply otherwise in the text you quoted. Instead, the first sentence refers to your definition, and the second offers an alternative one.
There is a classification problem with tables and chairs. Generally, I know what chairs and tables are supposed to be like, but there are objects similar both to chairs and to tables, and there isn’t any obvious way to choose which of those similarities are more important. At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
So they are either both meaningful, or both meaningless. But you have used “experience” as though it is meaningful, and you have implied that “feeling” is meaningless.
That was a predictable problem. Physical identity theory requires statements of the form “[mental state] is equivalent to [brain state]”. If you reject all vocabulary relating to mental states, you cannot make that kind of statement, and so cannot express identity theory.
Whereas, from my point of view, 1st person experience was there all along.
No, I used “experience” as a label. Let me rewrite that part:
That’s assuming that “experience”, as you use that word, is a meaningful category. If you didn’t start from that assumption, and instead defined experiences as brain states, you could …
Is that better? I understand that having two definitions and two similar but not identical concepts in one sentence is confusing. But still I expect you to figure it out. Was “identified” the problem?
What vocabulary relating to what mental states do I reject? Give examples.
Wasn’t “chairness” there too? More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
As opposed to what?
Does “meaningful” mean “meaningful” here, or is it being used as a misleading proxy for something like “immeasurable” or “unnecessary” or “tadasdatys doesn’t like it”?
You keep saying various words are meaningless. One would not want to use meaningless words, generally. OTOH, you have revealed elsewhere that you don’t use “meaningless” to mean “meaningless”. So who knows?
Consciousness is in the dictionary, chairness isn’t.
Consciousness is a concept used by science, chairness isn’t.
Consciousness is supported by empirical evidence, chairness isn’t.
It’s not that words are meaningless, it’s that you sometimes apply them in stupid ways. “Bitter” is a fine word, until you start discussing the “bitterness of purple”.
Are dictionary writers the ultimate arbiters of what is real? “Unicorn” is also in the dictionary, by the way.
The physicalist, medical definition of consciousness is what science uses. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that’s what projection looks like.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can’t even come up with arguments why a silly concept I made up is flawed, maybe you shouldn’t be so certain in the meaningfulness of other concepts.