I’m with EntirelyUseless. You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word “pain”.
There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, “purple is bitter”, the first way is clearly more appropriate. If we look at “robot feels pain”, it’s hard for me to tell which way I prefer.
Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated.
Here is my claim that “robot feels pain” is a meaningless statement. More generally, a question is meaningless if an answer to it transfers no information about the real world. I can answer “is purple bitter” either way, and that would tell you nothing about the color purple. Likewise, I could answer “does this robot feel pain” and that would tell you nothing about the robot or what you should do with it. At best, a “yes” would mean that the robot can detect pressure or damage, and then say “ouch” or run away. But that’s clearly not the kind of pain we’re talking about.
More generally, a question is meaningless if an answer to it transfers no information about the real world.
Since you are equating reality with objectivity, you are simply declaring statements about subjectivity meaningless by fiat.
I can answer “is purple bitter” either way, and that would tell you nothing about the color purple
That’s because it is a category error.
Likewise, I could answer “does this robot feel pain” and that would tell you nothing about the robot or what you should do with it.
Of course it tells me what I should do. It’s ethically relevant if a robot feels pain. If it feels pain when damaged, I should not damage it.
At best, a “yes” would mean that the robot can detect pressure or damage, and then say “ouch” or run away. But that’s clearly not the kind of pain we’re talking about.
How do you know? You are assuming that no robot can have a real subjective sensation of pain, and you have no way of knowing that one way or the other, and your arguments are question-begging and inconsistent. (If “robots do (not) feel pain” is meaningless, as you sometimes say, it cannot also be false, as you sometimes also say).
Since you are equating reality with objectivity, you are simply declaring statements about subjectivity meaningless by fiat.
I’m claiming that all subjective experiences have objective descriptions. Please give an example of a subjective experience, other than consciousness, that has no physical evidence. Obviously, I will try to argue that either there is something objective you missed, or that the subjective experience is as poorly defined as consciousness.
“Bitter purple” is a category error.
But “robot pain” isn’t? How did you come to those conclusions?
If it feels pain when damaged, I should not damage it.
That’s not how this works. Rats feel pain without a doubt, but we destroy them quite freely. Whether you will damage the robot is decided by many factors. E.g. if there is some benefit to the damage, if the robot will scream out in pain, or if it’s likely to damage you in return. The robot’s subjective experience of pain only matters if you decide that it matters—this is true for all categories, no matter how artificial.
How do you know?
Are you asking about the “at best” part? Because the rest of that sentence seems quite mundane. Here “at best” is about the limits of my own imagination. You’re welcome to suggest something better.
If “robots do (not) feel pain” is meaningless, as you sometimes say, it cannot also be false, as you sometimes also say.
That’s not contradictory, even if slightly inconsistent. “is purple bitter” is a meaningless question, but the answer “no” is clearly a lot more appropriate than “yes”. The line between falsehood and nonsense is quite blurry. I think we can freely call all nonsense statements false, without any negative consequences.
That’s not how this works. Rats feel pain without a doubt, but we destroy them quite freely. Whether you will damage the robot is decided by many factors.
That’s not how it works either. You can’t infer zero moral relevance of some factor by noting that other factors can countervail.
The robot’s subjective experience of pain only matters if you decide that it matters
I’m not morally omniscient. The robot’s experience of pain matters if it features in some scheme of ideal moral reasoning. To put it another way, you just proved that nothing is morally relevant, if you proved anything at all.
Are you asking about the “at best” part? Because the rest of that sentence seems quite mundane. Here “at best” is about the limits of my own imagination.
Well, you do seem to have a subjective intuition that robots will never feel pain. Others intuit differently. What happened to all the science stuff?
The robot’s experience of pain matters if it features in some scheme of ideal moral reasoning.
Gosh, I really don’t want to start talking about morality now. But I have to point out that the “bitterness of purple” can also matter, if it features in some scheme of ideal moral reasoning. At least if you accept that this moral reasoning could require arbitrary concepts and not just ones grounded in reality.
Well, you do seem to have a subjective intuition that robots will never feel pain.
No, I ran a deterministic procedure in my brain, called “is X well defined”, on “robot pain”, and it returned “no”. It’s only subjective in the sense that mine is different from yours, if you have such a procedure at all. The procedure, by the way, works by searching for alternative definitions of things, such that the given concept is neither trivial nor stupid. Unfortunately, failure to find such definitions does not produce a proof of non-existence, so I’m quite open to the idea that I missed something, it’s just that you inspire little confidence.
I did not mean to imply that ideal moral reasoning is weird and unguessable... only that you should not take imperfect moral reasoning (whose?) to be the last word. The idea that deliberately causing pain is wrong is not contentious, and you don’t actually have an argument against it.
It’s only subjective in the sense that mine is different from yours
That’s not a very interesting sense. Is height also subjective, since we are not equally tall? This sense is also very far from the magical “subjective experience” you’ve used. I guess the problematic word in that phrase is “experience”, not “subjective”?
Height is not a subjective judgement because it is not a judgement. If judgements are going to vary, that matters, because then who knows what the truth is?
I’m claiming that all subjective experiences have objective descriptions.
I would say that almost none have descriptions, where description means description. But it sounds as though you might actually be talking about physical correlates.
Please give an example of a subjective experience, other than consciousness, that has no physical evidence.
I can’t make much sense of that, since all subjective experiences occur within consciousness.
I don’t know why you think I am denying that changes in consciousness have correlations in physical activity.
I am pointing out that we cannot determine much about conscious subjective states from external physical evidence, because we don’t know how to work back from the one to the other. We can’t recover the richness of conscious experience from externals. But we know it is there, in ourselves at least, because being conscious means having access to your own conscious experience. You are putting the blame on consciousness itself, saying it is a nothingy thing, when the problem is your techniques.
The requirement that everything be rooted in (external) reality in order to be meaningful is unreasonable, because, in cases like this, it requires you to have a sort of omniscience before you can talk at all. (It’s fine to define temperature as what thermometers measure once you have accurate thermometers.)
you might actually be talking about physical correlates.
You see, I’m proposing the radical new view that the world is made of atoms and other “stuff”, and that most words refer to some configurations of this stuff. In this view “pain” doesn’t just correlate with some brain activity, it is that brain activity. The brain activity of pain is an objective fact, and if you were to describe that objective fact, you would get an objective description. In this view, the existence of human pain is as real as the existence of chairs. But the question “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
I am pointing out that we cannot determine much about conscious subjective states from external physical evidence, because we don’t know how to work back from the one to the other.
I’m pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as “ability to process external events” or “ability to generate thoughts” or “the process that makes some people say they’re conscious”, finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions, you call them correlates, which suggests that there could be a consciousness that satisfies none of them. I’m pointing out that even if you had complete knowledge about everything going on in a particular brain, you still wouldn’t be able to tell if it is conscious, because your concept of consciousness is broken.
It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter, and also without denying that it eventually does. Treating pain semantically as some specific brain activity buys you nothing in terms of the ability to communicate and understand... when you don’t know which precise kind... which you don’t. If Purple and Bitter are both Brain Activity Not Otherwise Specified, they are the same. If you can solve the mind-body problem, then you will be in a position to specify the different kinds of brain activity they are. But you can also distinguish them, here and now, using the subjectively obvious difference. And without committing yourself to evil dualism.
It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter
I have never claimed otherwise. In fact, there is literally nothing that I have an exact description of, in terms of matter—neither pain nor chairs. But you have to know something. I know that “chair is what I sit on” and from that there is a natural way to derive many statements about chairs. I know that “gravity is what makes things fall down”, and from that there is a fairly straightforward way to the current modern understanding of gravity. There is nothing that you know about consciousness, from which you can derive a more accurate and more material description.
Treating pain semantically as some specific brain activity buys you nothing
It buys me the ability to look at “do robots feel pain” and see that it’s a stupid question.
I see a model that claims to reproduce some of the behaviors of the human mind. Why is that relevant? Where are your subjective experiences in it?
Also, to clarify, when I say “you know nothing”, I’m not asking for some complex model or theory, I’m asking for the starting point from which those models and theories were constructed.
prove that it is a stupid question.
Proof is a high bar, and I don’t know how to reach it. You could teach me by showing a proof, for example, that “is purple bitter” is a stupid question. Although I suspect that I would find your proof circular.
Well, for one, you have been unwilling to share any such knowledge. Is it a secret, perhaps?
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
Where are your subjective experiences in it?
I was responding to your claim that “there is nothing that you know about consciousness, from which you can derive a more accurate and more material description.”. This has been done, so that claim was false. You have shifted the ground.
that “is purple bitter” is a stupid question.
Purple is a colour, bitter is taste, therefore category error.
Proof is a high bar
Then why be so sure about things? Why not say “dunno” to “can robots feel pain?”.
While GWT is a model, it’s not a model of consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it. I ask you if it has subjective experiences, because that seems to be the most important aspect of consciousness to you. If you can’t find them in this model, then the model is on my side, not yours.
Purple is a colour, bitter is taste, therefore category error.
That’s ridiculous. Grapefruit is a fruit, bitter is taste, but somehow “grapefruit is bitter” is true and not a category error.
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
<...>
Then why be so sure about things?
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple. Maybe we mean different things when we say “proof”?
While GWT is a model, it’s not a model of consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it.
That’s still an irrelevant objection. The issue is whether the concept of consciousness can be built on and refined, or whether it should be abandoned. GWT shows that it can be built on, and it is unreasonable to demand perfection.
That’s ridiculous. Grapefruit is a fruit, bitter is taste, but somehow “grapefruit is bitter” is true and not a category error.
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple.
Is that worse than saying you know things you don’t know?
Sometimes different people use the same words to mean different things. I deduce that GWT does not build on consciousness as you understand it, because it doesn’t have the most important feature to you. It builds on consciousness as I understand it. How is that irrelevant?
Is that worse than saying you know things you don’t know?
You mean, is saying “dunno” to everything worse than saying something is true without having absolute 100% confidence? Yes. What kind of question is that?
Also, why did you quote my “category error” response? This doesn’t answer that at all.
But the question of “do robots feel pain”, is as interesting and meaningful as “are tables also chairs”.
Why is the question “are tables also chairs” not meaningful? Structured knowledge databases like Wikidata have to answer that question.
Imagine that a country has a general tariff for furniture and there’s a tariff exemption for chairs. One clever businessman who sells tables starts to say that his tables are chairs. In that case, the question can become important enough that a large sum of money is spent on a legal process to answer it.
This seems like a good comment to illustrate, once again, your abuse of the idea of meaning.
I’m proposing the radical new view that the world is made of atoms and other “stuff”, and that most words refer to some configurations of this stuff.
There are two ways to understand this claim: 1) most words refer to things which happen also to be configurations of atoms and stuff. 2) most words mean certain configurations of atoms.
The first interpretation would be fairly sensible. In practice you are adopting the second interpretation. This second interpretation is utterly false.
Consider the word “chair.” Does the word chair mean a configuration of atoms that has a particular shape that we happen to consider chairlike?
Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms, but was a continuous substance without any boundaries in it. Would you suddenly say that it was not a chair? Not at all. You would say “this chair is not made of atoms.” This proves conclusively that the meaning of the word chair has nothing whatsoever to do with “a configuration of atoms.” A chair is in fact a configuration of atoms; but this is a description of a thing, not a description of a word.
In this view “pain” doesn’t just correlate with some brain activity, it is that brain activity.
This could be true, if you mean this as a factual statement. It is utterly false, if you mean it as an explanation of the word “pain,” which refers to a certain subjective experience. The word “pain” is not about brain activity in the same way that the word “chair” is not about atoms, as explained above.
But the question of “do robots feel pain”, is as interesting and meaningful as “are tables also chairs”.
I would just note that “are tables also chairs” has a definite answer, and is quite meaningful.
I’m pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as “ability to process external events” or “ability to generate thoughts” or “the process that makes some people say they’re conscious”, finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions, you call them correlates, which suggests that there could be a consciousness that satisfies none of them.
I would say that being a chair (according to the meaning of the word) is correlated with being made of atoms. It may be perfectly correlated in fact; there may be no chair which is not made of atoms, and it may be factually impossible to find or make a chair which is not. But this is a matter for empirical investigation; it is not a matter of the meaning of the word. The meaning of the word is quite open to the possibility that there is a chair not made of atoms. In the same way, the meaning of the word “consciousness” refers to a subjective experience, not to any objective description, and consequently in principle the meaning of the word is open to application to a consciousness which does not satisfy any particular objective description, as long as the subjective experience is present.
Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms
I explicitly added “other stuff” to my sentence to avoid this sort of argument. I don’t want or need to be tied to current understanding of physics here.
But even if I had only said “atoms”, this would not be a problem. After seeing a chair that I previously thought was impossible, I can update what I mean by “chair”. In the same but more mundane way, I can go to a chair expo, see a radical new design of chair, and update my category as well. The meaning of “chair” does not come down from the sky fully formed; it is constructed by me.
I would just note that “are tables also chairs” has a definite answer, and is quite meaningful.
For one thing (not the only thing), chairs are things that are normally used for sitting. Tables are not normally used for sitting, so they are not chairs. Nothing arbitrary about that reasoning.
Where do those definitions come from? Do you know what “arbitrary” means? By the way, I have chairs that I have never sat on, and there are tables I’ve sat on quite a bit. What is “normally”?
The meaning of words comes from people’s usage (which is precisely why words do not mean anything like what you think they do.)
Do you know what “arbitrary” means?
Yes.
What is “normally”?
The vast majority of tables are rarely or never sat on. The table in my house has never been sat on. The vast majority of chairs are frequently sat on, like the ones in my house. It may not be the only normal thing, but certainly what happens in the vast majority of cases is normal.
Also, I said “for one thing.” Even if people normally sat on tables, they would not be chairs, because they do not have the appropriate structure, just as benches are not chairs.
Also, I’d point out that what I mean by “chair” is not equivalent to people’s usage. You could call it “reverse engineered” from people’s usage. There are some differences. Do you know where those come from?
Obviously I don’t even know how most people use those words—I only know about my acquaintances and people on TV, I could be living in a bubble, I could be using many words wrong.
benches are not chairs.
Stools are chairs, but benches are just wide stools. So if I have a small table (such as a coffee table), and use it for sitting, it’s not a bench, it’s a stool and therefore a chair?
In case it’s not obvious what I’m doing, I intend to ask you these stupid questions until you realize that they are stupid questions, that they don’t matter and that the correct way to answer them is to pull answers out of your ass (i.e. arbitrarily).
Roughly speaking, because if one performed factor analysis on their life experiences, they would have factors more or less corresponding to the words they use.
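Taking the factor-analysis remark literally for a moment, here is a minimal distributional-semantics sketch in Python. SVD stands in for factor analysis, and the tiny corpus, the word lists, and the `similarity` helper are all invented for illustration; this shows only the general idea that shared usage contexts yield shared latent factors, not anyone’s actual proposal about minds or meanings.

```python
# Toy sketch: words that appear in similar contexts end up with similar
# latent-factor vectors. Corpus and vocabulary are made up.
import numpy as np

sentences = [
    "i sit on the chair",
    "i sit on the stool",
    "i eat at the table",
    "i eat at the desk",
    "the chair is wooden",
    "the stool is wooden",
    "the table is wooden",
]

# Word-by-word co-occurrence counts within each sentence.
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    words = s.split()
    for a in words:
        for b in words:
            if a != b:
                M[idx[a], idx[b]] += 1

# SVD plays the role of factor analysis: keep the two strongest factors,
# giving each word a 2-dimensional "factor" vector.
U, S, _ = np.linalg.svd(M)
vecs = U[:, :2] * S[:2]

def similarity(a, b):
    """Cosine similarity between the factor vectors of two words."""
    va, vb = vecs[idx[a]], vecs[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# "chair" and "stool" occur in identical contexts in this toy corpus,
# so their factor vectors coincide (cosine similarity 1.0), while
# "chair" and "eat" come from different kinds of sentence.
print(similarity("chair", "stool"))
print(similarity("chair", "eat"))
```

In this toy world the factors roughly separate “things you sit on” from “things you eat at”, which is the (very rough) sense in which word boundaries could track factors of experience rather than being random.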
There are some differences. Do you know where those come from?
Yes, largely from your own experience of the usage contexts of the word chair, which as you say could be somewhat different from the overall usage patterns, although it is unlikely that there are large differences.
So if I have a small table (such as a coffee table), and use it for sitting, it’s not a bench, it’s a stool and therefore a chair?
No. As I said before, “for one thing.” There are still reasons why a coffee table would not suddenly become a stool, even if in fact you use it for that.
I intend to ask you these stupid questions until you realize that they are stupid questions, that they don’t matter and that the correct way to answer them is to pull answers out of your ass (i.e. arbitrarily).
That’s incorrect, so I won’t realize that no matter how many such questions you ask.
That said, it is true that we extend words to additional things when we think the new things are similar enough to the old things.
The problem of “consciousness” is that we have no idea how similar the new thing is to the old thing, no matter how many objective descriptions we come up with for the new thing. That is: the problem is that we have no idea whether the robot is conscious or not, no matter what objective facts we know about the robot. This does not mean that we can arbitrarily decide to say “let’s extend the word conscious to the robot.” It’s like this situation: there is an object behind a screen, and you are never allowed to look behind the screen. Should we call the object a “chair” or not?
The fact that language is both vague and extendible does not suddenly entitle you to say that we can say that an object behind a screen is either “a chair” or “not a chair” without first looking behind the screen. And in the case under discussion, no one yet knows a way to look behind the screen, and possibly there is no such way.
Roughly speaking, because if one performed factor analysis on their life experiences, they would have factors more or less corresponding to the words they use.
What makes you believe that? Ideally we’d want something like this to be true, but assuming that it is true seems a bit naive. There are also some serious technical problems with the idea (how do you quantify experiences? what do you do when different people have different experiences but have to use the same words? etc).
No. As I said before, “for one thing.” There are still reasons why a coffee table would not suddenly become a stool, even if in fact you use it for that.
I don’t think you know what “arbitrary” means. It does not mean “completely random”. In a deterministic world everything has some explanation. It’s just that sometimes the explanations are long, bizarre and kind of stupid, if you look at them.
That is: the problem is that we have no idea whether the robot is conscious or not, no matter what objective facts we know about the robot.
Likewise, we cannot say whether some new object is a chair only by knowing objective facts about the object. We also need to know what the word “chair” refers to. And in the case that our definition of “chair” doesn’t help us, we’re going to have to extend it in some arbitrary way.
It’s like this situation: there is an object behind a screen, and you are never allowed to look behind the screen.
Replace “you are never allowed” with “it is impossible”. Then I will suggest that the object does not exist.
What makes you believe that? Ideally we’d want something like this to be true, but assuming that it is true seems a bit naive.
If nothing like that were true, words would actually be arbitrary, as you suppose. Then if we looked for reasonable boundaries, they would fall in random places. For example, it might turn out that the word “chair” refers in some cases to physical objects that people sit on, and in other cases to tiny underground animals. Words don’t work like this, which gives me good reason to think that something like this is true. It is not “assuming” anything.
I don’t think you know what “arbitrary” means. It does not mean “completely random”. In a deterministic world everything has some explanation. It’s just that sometimes the explanations are long, bizarre and kind of stupid, if you look at them.
My point is that the meaning of words is not long, bizarre, and stupid. The meanings of words are actually quite reasonable. Also, you are mistaken about the meaning of “arbitrary”. It does in fact mean not having a reason; you are just saying that sometimes “this doesn’t have a reason” is shorthand for saying that it doesn’t have a good reason. But even understood like this, the meaning of words is not arbitrary.
Likewise, we cannot say whether some new object is a chair only by knowing objective facts about the object. We also need to know what the word “chair” refers to. And in the case that our definition of “chair” doesn’t help us, we’re going to have to extend it in some arbitrary way.
We will have to extend it, but it will not have to be in an arbitrary way. There may be a good reason why we extend it the way that we do, not a stupid reason.
Replace “you are never allowed” with “it is impossible”. Then I will suggest that the object does not exist.
Then your suggestion is false; even if there is a screen that you cannot go behind, it would not mean that there is nothing inside it. You cannot look beyond the event horizon of the visible universe, but there are things beyond it.
For example, it might turn out that the word “chair” refers in some cases to physical objects that people sit on, and in other cases to tiny underground animals.
“Chair” happens not to refer to animals, although it can mean “chairman”. “Stool” can refer to several things, including poop. Finally, “shrew” refers both to a tiny animal and an annoying woman. Words do work like this. Surely there are some bizarre historical reasons for how the word got those two meanings. But you have to admit that these reasons have very little to do with the properties of small animals and annoying women.
I’m not saying that there are no forces working to simplify language. But there is a very large gap between that and “factor analysis on life experiences”.
Also, you are mistaken about the meaning of “arbitrary”. It does in fact mean not having a reason
“Arbitrary” literally means “decided by an arbiter”, as opposed to “decided by the rule of law”, i.e. there was a question that “the law” couldn’t answer directly, and an arbiter had to decide “arbitrarily”. It doesn’t mean that the arbiter flipped a coin.
Going back to chairs and tables, you can always find some excuse why a coffee table I sit on isn’t a chair, and I can always find some excuse why it should be (I mean, how the hell is http://www.ikea.com/us/en/catalog/products/20299829/ not a stool?). We could live perfectly well in a world where my excuses are right and yours are wrong. The reasons we don’t are long, bizarre, and stupid. That falls under “arbitrary” perfectly well.
You cannot look beyond the event horizon of the visible universe, but there are things beyond it.
I’d rather not bring modern physics into this, but I have to point out that suggesting those things don’t exist will cause no problems for anyone. At worst, this suggestion would lose to another idea through Occam’s razor.
Also, cases where a “single” word has two meanings are cases of two words. They are not examples of words that have arbitrary meanings.
You first said that if there was nothing like factor analysis, some words would have two unrelated meanings, then I point out that lots of words have two unrelated meanings, and now you say that one word with two meanings is two words (by definition, I assume?), contradicting your own claim we started from. Do you see how bad this looks?
Sure, there are words that have different meanings and different origins, that, thanks to some arbitrary modifications, end up sounding the same. There is an argument to discount those. But lots of words do have the same origin and the new meaning is a direct modification of the old one.
You first said that if there was nothing like factor analysis, some words would have two unrelated meanings, then I point out that lots of words have two unrelated meanings, and now you say that one word with two meanings is two words (by definition, I assume?), contradicting your own claim we started from. Do you see how bad this looks?
You misunderstood. The point is that if there was not some common meaning, the applications of a word would be random. This does not happen in any of the cases we have discussed, and two entirely unrelated usages are cases of two words.
But lots of words do have the same origin and the new meaning is a direct modification of the old one.
This is true, and there is nothing stupid or arbitrary about this way of getting a secondary meaning.
The point is that if there was not some common meaning, the applications of a word would be random.
I have never denied that the ways different people use the same words are similar. This however does nothing to support your “factor analysis” theory, nor does it have anything to do with words that have multiple unrelated meanings.
and two entirely unrelated usages are cases of two words.
This is a claim with no justification. The whole “one word is two words” formulation is inherently bizarre. Of course, saying that “committee chair” and “armchair” are both “chairs” doesn’t mean the two things are actually similar. Likewise, putting both “armchair” and “stool” under one label does not reduce their differences, and putting “stool” and “coffee table” in different categories does not reduce their similarities.
I have never denied that the ways different people use the same words are similar. This however does nothing to support your “factor analysis” theory
Sure it does. People use words in similar ways because their lives have similar factors.
nor does it have anything to do with words that have multiple unrelated meanings.
Technically, there are no such words. As I said, these are multiple words that use similar spellings.
This is a claim with no justification. The whole “one word is two words” formulation is inherently bizarre. Of course, saying that “committee chair” and “armchair” are both “chairs” doesn’t mean the two things are actually similar.
Consider these two statements:
1) A committee chair and an armchair are both “chairs.”
2) A committee chair and an armchair are both chairs.
The first statement is true, and simply says that both a committee chair and an armchair can be named with the sound “chair”.
The second statement, of course, is utterly false, because there is no meaning of “chairs” that makes it true. And that is because there is not a word that has both of those meanings; there are two words which are spelled and spoken alike.
Likewise putting both “armchair” and “stool” in under one label does not reduce their differences, and putting “stool” and “coffee table” in different categories does not reduce their similarities.
In fact, using different names adds a difference: the fact that the things are named differently. Still, overall you are more right than wrong about this, even though you have the tendency to ignore the real reasons for names in favor of appearances, as when you say that “pain” means “what makes someone say ouch.” Obviously, if someone says “ouch” because he wishes to deceive you that he is feeling pain, pain will not be the wish to deceive someone that he is feeling pain. Pain is a subjective feeling; and in a similar way, a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
People use words in similar ways because their lives have similar factors.
No, people use words in similar ways because they want to communicate with each other, and because word meanings are usually inherited rather than constructed. It’s not false that the factors are usually similar, but not all true statements follow from one another. Some people with very different factors may use words similarly, and others with similar factors will eventually use them differently.
The second statement, of course, is utterly false, because there is no meaning of “chairs” that makes it true.
Again, nobody thinks that the two things are similar or share properties, but that’s exactly what you asked for. If you want a milder example, I can offer “computer”, which can refer to an electronic device or to a human who does arithmetic (old usage). The two meanings are still very different, but they do share a property (they both compute), and it’s easy to see that a sentence “I had computers calculate this solution” is natural and could refer to either (or both). At the same time, using two different words for them (e.g., let’s call humans who compute “analysts”) would also be natural. The reasons we don’t use two words have very little to do with the properties of humans or electronic devices.
“Arbitrary” literally means “decided by an arbiter”, as opposed to “decided by the rule of law”
Etymology is not meaning.
I’d rather not bring modern physics into this,
That’s not up to you; you made the argument that if there is a screen that you cannot look behind, there is nothing behind it. That argument is false.
suggesting those things don’t exist will cause no problems to anyone.
The suggestion will be false, whether or not it causes problems for anyone.
At worst, this suggestion would lose to another idea through Occam’s razor.
Exactly like suggesting that other people’s conscious experiences do not exist, since this would mean that the reason for your own talk about your own experiences differs from the reason for other people’s talk about their experiences. There is no reason to believe in such a difference.
on the contrary, it would be stupid to attempt to make meaning exactly correspond with etymology.
That’s a bold claim. God forbid words have consistent meanings over time!
Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent? You said it means “without any reason”. The obvious problem is that almost everything has some reasons, including whims of small children, delusions of the insane and results of fair coin flips. I suggest that if the word meant “without good reason”, the word would be more useful.
Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent?
We probably do not disagree about what it means, but we disagree about what we are saying it means. I do say it means without any reason, although we can take it more specifically as “without any reason of the kind we are currently thinking about.”
I suggest that if the word meant “without good reason”, the word would be more useful.
If we take as I suggested, it would be possible in some cases to mean “without good reason,” namely without a reason of the kind we are currently thinking about, namely a good one.
In general, this topic came up because you were asserting that questions like “are tables also chairs” are stupid and only have arbitrary answers. If arbitrary means that there cannot be a good reason, then you are mistaken, because we have good reason for saying that tables are not chairs, and the stupidity would only be in saying that they are chairs, not in saying that they are not.
In regard to the issue of consciousness, the question is indeed a useless distraction. It is true that words like “pain” or even “consciousness” itself are vague, as are all words, and we exercise judgement when we extend them to new cases. That does not mean there is never a good reason to extend them. But more importantly, when we consider whether to extend “chair” to a new case, we can at least see what the thing looks like. In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain. And so the case is quite different from the case of the chair: as I said before, it is like asking if an unknown object behind a screen is a chair or not. Unknown, but definitely not arbitrary.
“without any reason of the kind we are currently thinking about.”
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
Here’s an example of weird reasons. How can shape not determine the difference? If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects? IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn’t? Now, one could say that a stool can be both a chair and a table, and I think that’s what IKEA does, but then you’ve already claimed this to be impossible.
In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem. There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
“Properties of the objects being classified” are much more extensive than you realize. For example, it is property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects?
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. In other words, being made as a table, it is physically unsuitable to be used as a seat. And consequently if it did collapse, it would be quite correct to say, “This collapsed because you were using it as a stool even though it is not one.”
That said, I already said that the intention of the makers is not 100% determining.
That’s assuming that “feeling” is a meaningful category.
That’s not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a “chair.” In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it is not an assumption. If it turned out that I did not have a brain, I would not say, “I must have been wrong about suffering pain.” I would say “My pain does not depend on a brain.” I pointed out your error in this matter several times earlier—the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states—we are talking about the similarity of two feelings. So it does not matter if the robot’s brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar, even if they depended on radically different physical objects like the moon and Mt. Everest.
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test if a function f(X,Y) depends not only on X, but also on Y, we have to find a point where f(X1, Y1) is not equal to f(X1, Y2). Are you saying that sometimes intention matters, just not for chairs? If not, I can only assume that intention doesn’t determine anything and only shape is important.
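The dependence test described above can be sketched directly. The following is a minimal illustrative script (the function names and the example categories are hypothetical, invented for illustration, not anything either poster actually proposed): it checks whether a classifier f(X, Y) ever changes its output when X is held fixed and only Y varies.

```python
def depends_on_y(f, xs, ys):
    """Return True if f's output ever changes when X is held fixed
    and only Y varies -- the f(X1, Y1) != f(X1, Y2) test above."""
    for x in xs:
        # Collect all outputs for this fixed X across every Y.
        outputs = {f(x, y) for y in ys}
        if len(outputs) > 1:
            return True
    return False

# A shape-only classifier ignores intention entirely (hypothetical example).
def classify_by_shape(shape, intention):
    return "chair" if shape == "chair-shaped" else "table"

print(depends_on_y(classify_by_shape,
                   ["chair-shaped", "flat-topped"],
                   ["for sitting", "for serving"]))  # prints False
```

On this test, intention "matters" to a classification only if there exists at least one shape for which changing the intention alone changes the verdict.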
I already notice the similarity between all the things that are called feelings
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
If it turned out that I did not have a brain
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing. Unfortunately for you, you do have a brain.
It matters whether it feels similar
Are you going to feel the robot’s feeling and compare?
Are you saying that sometimes intention matters, just not for chairs?
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
Unquestionably, it can be meaningfully extended to robots. You simply mean the same thing that you mean in the regular case. The only question is whether there is any feeling there, not if “feeling” has a meaning, since we already admitted that it does have a meaning.
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing.
The possibility of falsification is a good thing for a physical theory. It is a very bad thing for a theory of the meaning of a word. As you already admitted, the fact that the pieces of furniture we normally sit on are called chairs is not subject to falsification, because that is just what is meant by “chair.” But a physical theory of a chair, e.g. “chairs are made of atoms,” is falsifiable, since someone could examine a chair and discover that it was not made of atoms. He would not then say, “We have discovered that ‘chair’ meant something different from what we thought.” He would say, “We knew what ‘chair’ meant, and that is unchanged, but we have learned something new about the physical constitution of chairs.”
In the same way, I am referring to certain feelings when I talk about “pain.” The fact that the word pain refers to those feelings cannot be falsified, because it is just what that word means. But whether pain depends on a brain activity is a falsifiable physical theory; it has nothing to do with the meaning of the word “pain.”
Unfortunately for you, you do have a brain.
Assuming that I do, that is fortunate, not unfortunate. But as I was saying, neither you or I know that I do, since neither of us has seen the inside of my head.
Are you going to feel the robot’s feeling and compare?
No. The question is not whether the robot has a feeling which feels to me similar to my feeling of pain; the question is whether the robot has a feeling that feels to the robot the same way that my feeling feels to me. And since this has two subjects in it, there is no subject that can feel them both and compare them. And this is just how it is, whether you like it or not, and this is what “pain” refers to, whether you like it or not.
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Can you actually support your claim that intention matters? To clarify, I’m suggesting that intention merely correlates with shape, but has no predictive power on its own.
It is a very bad thing for a theory of the meaning of a word.
It’s somewhat complicated. “Experiences are brain states” is to an extent a theory. “Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition. Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
And since this has two subjects in it, there is no subject that can feel them both and compare them.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
Can you actually support your claim that intention matters?
Artificial things are made for a purpose, and being made for a purpose is part of why they are called what they are called. This is an obvious fact about how these words are used and does not need additional support.
“Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition.
If you mean pain is the conscious state that follows in that situation, yes; if you mean the third person state that follows, no.
Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason. Definitions shouldn’t be falsifiable, and are not physical theories.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
No. The stars outside the event horizon of the visible universe are similar to the stars that we can see, but there is no way to compare them.
One can however ask the question, “Are the stars similar?” and one can answer yes or no. In the same way we can ask if the robot feels like we do and we can say yes or no. But there is no access to the answer here, just as there is no access in the case of the stars. That has nothing to do with the fact that either they are similar, or they are not, both in the case of the robot, and in the case of the stars.
This is an obvious fact about how these words are used and does not need additional support.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I’m asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you’re supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole “does not need additional support” thing inspires no confidence.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
The stars outside event horizon
I hate bringing up modern physics, it has limited relevance here. Maybe they’ll figure out faster than light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I’d love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
I’m asking whether the relationship between intention and classification is causal, or just a correlation.
It is causal, but not infallible.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
That’s your problem. Everyone else will still call it “the sun,” and when you say “the sun didn’t rise this morning,” your statement will still be false.
we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too.
Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.
What words? The word “causal”? I’m asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
As I said, this is how these words work, that is words like “chair” and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.
If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
And those things are true even given the same form
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
I am not talking about evidence, but about meaning; when we say, “this is a chair,” part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting.
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
I don’t know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing, so that if you made something in the shape of a hammer, and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly by more closely matching the meaning of “chair.”
I don’t know where you think that was established.
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
In particular, consider my answer to your next question, because it is basically the same thing again.
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair?
There is no guarantee of this, because the word “chair” is vague. But it is true that there would be more reason to call the second rock a chair—that is, the meaning of “chair” would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation.
Would simply sitting on the first rock convert it into a chair?
In general, no, because the word “chair” does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on.
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
If you are not ignorant of how the word is used, you do have to involve the intention of the maker.
By acting like you actually want to understand what is being said
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer. Here I’m only asking you for a black and white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they’re conceding.
In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
What exactly do you mean by “vague”? The word “chair” refers to the category of chairs. Is the category itself “vague”?
I have been telling you from the beginning that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair. Apparently one that you have more knowledge of than I. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared, absolute and natural concept of a chair is “vague”, whatever that means.
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff. By the way, this thread started here, where you explicitly started talking about words and meanings, so that’s what we’re talking about.
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is “in some cases yes, in some cases no, depending on the particular circumstances.”
What exactly do you mean by “vague”?
First of all, all words are vague, so there is no such thing as “what exactly do you mean by.” No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Is the category itself “vague”?
Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones.
I have been telling you from the beginning that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair.
It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff.
First of all, you are the one who needs the “language 101” stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t. You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.” Those are two entirely separate things. Likewise, you have been confusing “this statement is vague” with “this statement is not testable.” These are two entirely separate things.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
Second, consider a line of stars outside the visible universe, except that some of the stars, on the right, are identical white dwarfs, and the ones to the left of them are identical red giants. Where exactly do the stars stop being white dwarfs and begin being red giants? This time, we cannot answer the question because there is no test to give us the answer. But vagueness is not an issue, because there is a sharp division between the two parts. We simply cannot find it by testing.
Third, consider a line of stars outside the visible universe, constructed as in the first case. This time, there are two problems: we cannot test where the boundary is, and the boundary is vague. These are two entirely different issues.
Fourth, consider a line of things where the one on the left is a statue, the one on the right is a human being, and somewhere in the middle there are robotic things. Each thing differs by a single atom from the thing on its left, and from the thing on its right.
Now we have the question: “The statue is not conscious. The human being is conscious. Is the central robot conscious?” There are two separate issues here. One is that we cannot test for consciousness. The second is that the word “conscious” is vague. These are two entirely separate issues, just as they are in the above cases of the stars.
Let us prove this. Suppose you are the human being on the right. We begin to modify you, one atom at a time, moving you to the left. Now the issue is testable: you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious. Note that this is quite different from anyone else asking the thing if it is conscious, because the question “does this thing say it is conscious” is not the same as “is this thing conscious.” But being conscious is having a first person point of view, so if you can ask yourself anything, you are conscious. Unfortunately, long before you cease to be conscious, you will cease to be able to ask yourself any questions. So you will still not be able to find a definite boundary between conscious and not conscious. Nonetheless, this proves that testability is entirely separate from vagueness.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to
Well, that explains a lot. It’s not exactly ancient history, and everything is properly quoted, so you really should know what I’m talking about. Yes, it’s about the identical table-chairs question from the IKEA discussion, the one that I linked to just a few posts above.
Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Why are there no determinate boundaries though? I’m saying that boundaries are unclear only if you haven’t yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
All categories are vague, because they are generated by a process similar to factor analysis
There is nothing vague about the results of factor analysis.
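To make that concrete, here is a toy sketch (plain one-dimensional k-means in pure Python, a hypothetical stand-in for factor analysis rather than the real technique). The output is perfectly crisp; whatever cut it produces on a smooth spectrum is stipulated by the procedure, not hidden in the data.

```python
# Toy illustration (not real factor analysis): 1-D k-means over a smooth
# spectrum of values, standing in for the "line of stars" example.
# Each run produces a perfectly crisp two-way split, yet where the cut
# falls is an artifact of the procedure, not a fact about the spectrum.

def kmeans_1d(values, c1, c2, iters=20):
    """Cluster values around two centers; returns (centers, labels)."""
    for _ in range(iters):
        labels = [0 if abs(v - c1) <= abs(v - c2) else 1 for v in values]
        g1 = [v for v, l in zip(values, labels) if l == 0]
        g2 = [v for v, l in zip(values, labels) if l == 1]
        c1 = sum(g1) / len(g1) if g1 else c1
        c2 = sum(g2) / len(g2) if g2 else c2
    return (c1, c2), labels

spectrum = [i / 100 for i in range(101)]  # evenly spaced, no natural gap
(c1, c2), labels = kmeans_1d(spectrum, 0.1, 0.9)
cut = max(v for v, l in zip(spectrum, labels) if l == 0)
print(f"crisp cut at {cut:.2f}")  # every point gets exactly one label
```

The assignments are completely determinate, so the result is not vague; but start the centers elsewhere, or pick a different criterion, and the cut moves. That is the sense in which the boundary is a decision, not a discovery.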
It is false that the meanings are arbitrary, for the reasons I have said.
On this topic, we last seemed to agree that “arbitrary” classification means “without reasons related to the properties of the objects classified”. I don’t recall you ever giving any such reasons.
It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
For example, you have said ‘”are tables also chairs” has a definite answer’. Note the word “definite”. You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way “natural” is the opposite of “arbitrary”.
All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things.
Yeah, I recall saying something like that myself. And the rest of your claims don’t go well with this one.
you are the one who needs the “language 101” stuff
Well, you decided that I need it, then made some wild and unsupported claims.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.”
Yes, the two statements are largely equivalent. Oddly, I don’t recall you mentioning testability or measurability anywhere in this thread before (I think there was something in another thread though).
Likewise, you have been confusing “this statement is vague” with “this statement is not testable.”
I don’t think I’ve done that. It’s unfortunate that after this you spent so much time trying to prove something I don’t really disagree with. Why did you think that I’m confusing these things? Please quote.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and the stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you. You say “cannot possibly” and then give no reasons why.
I, however, have no problem with the vagueness here, because the two categories are only shorthands for some very specific properties of the stars (like mass). This is not true for “consciousness”.
Nonetheless, this proves that testability is entirely separate from vagueness.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and the stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you.
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.)
It is intrinsically vague: “Red giant” does not and cannot have precise boundaries, as is true of all words. The same is true of “White dwarf.” If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words.
The rest does not respond to the comparison about consciousness, and as I said we won’t be discussing the comments on language.
“Red giant” does not and cannot have precise boundaries
Again, you make a claim and then offer no arguments to support it. “Red giant” is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t.
You started the language discussion, but I have to explain why we’re continuing it? I continue because I suspect that the reasoning errors you’re making about chairs are similar to the errors you’re making about consciousness, and chairs are easier to talk about. But it’s only a suspicion. Also, I continue because you’ve made some ridiculous claims and I’m not going to ignore them.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences
Assuming “experience” is a meaningful category.
with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
Yes, “feeling” and “experience” are pretty much the same thing; I didn’t mean to imply otherwise in the text you quoted. Instead, the first sentence refers to your definition, and the second offers an alternative one.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brain states that are experientially similar.
There is a classification problem with tables and chairs. Generally, I know what chairs and tables are supposed to be like, but there are objects similar both to chairs and to tables, and there isn’t any obvious way to choose which of those similarities are more important. At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
Yes, “feeling” and “experience” are pretty much the same thing,
So they are either both meaningful, or both meaningless. But you have used “experience” as though it is meaningful,
and you have implied that “feeling” is meaningless.
That was a predictable problem. Physical identity theory requires statements of the form “[mental state] is equivalent to [brain state]”. If you reject all vocabulary relating to mental states, you cannot make that kind of statement, and so cannot express identity theory.
At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
Whereas, from my point of view, 1st person experience was there all along.
But you have used “experience” as though it is meaningful, and you have implied that “feeling” is meaningless.
No, I used “experience” as a label. Let me rewrite that part:
That’s assuming that “experience”, as you use that word, is a meaningful category. If you didn’t start from that assumption, and instead defined experiences as brain states, you could …
Is that better? I understand that having two definitions and two similar but not identical concepts in one sentence is confusing. But still I expect you to figure it out. Was “identified” the problem?
Physical identity theory requires statements of the form “[mental state] is equivalent to [brain state]”. If you reject all vocabulary relating to mental states <...>
What vocabulary relating to what mental states do I reject? Give examples.
Whereas, from my point of view, 1st person experience was there all along.
Wasn’t “chairness” there too? More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
That’s assuming that “experience”, as you use that word, is a meaningful category.
Does “meaningful” mean “meaningful” here, or is it being used as a misleading proxy for something like “immeasurable” or “unnecessary” or “tadasdatys doesn’t like it”?
What vocabulary relating to what mental states do I reject?
You keep saying various words are meaningless. One would not want to use meaningless words, generally. OTOH, you have revealed elsewhere that you don’t use “meaningless” to mean “meaningless”. So who knows?
More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
Consciousness is in the dictionary, chairness isn’t.
Consciousness is a concept used by science, chairness isn’t.
Consciousness is supported by empirical evidence, chairness isn’t.
It’s not that words are meaningless, it’s that you sometimes apply them in stupid ways. “Bitter” is a fine word, until you start discussing the “bitterness of purple”.
Consciousness is in the dictionary, chairness isn’t.
Are dictionary writers the ultimate arbiters of what is real? “Unicorn” is also in the dictionary, by the way.
Consciousness is a concept used by science, chairness isn’t.
The physicalist, medical definition of consciousness is what is used by science. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that’s what projection looks like.
Consciousness is supported by empirical evidence, chairness isn’t.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can’t even come up with arguments why a silly concept I made up is flawed, maybe you shouldn’t be so certain in the meaningfulness of other concepts.
That the brain is not quiescent when experiencing pain is an objective fact. But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Please check out multiple realisability.
Because of that, no one can genuinely tell whether an advanced robot has genuine qualia. That includes you, although you are inclined to think that your subjective intuitions are objective knowledge.
But the question of “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Sure, but what does that have to do with anything? Does “objective” mean “well understood” to you?
multiple realisability
There are multiple representations of pain the same way that there are multiple representations of chair.
It is ridiculous how much of this debate is about the basic problem of classification, rather than anything to do with brains. Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counter-argument. Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
your subjective intuitions
Calling my reasoning, even if not fully formal, “subjective intuitions” seems rude. I’m not sure if there is some point you’re trying to express with that.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
Not sure where you see me talking about intelligence. But intelligence is far better defined and more measurable than consciousness. Multiple realizability has nothing to do with that.
But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Sure, but what does that have to do with anything?
We do, on the other hand, know subjectively what pain feels like.
Does “objective” mean “well understood” to you?
That’s not the point. The point is that if we have words referring to subjective sensations, like “purple” and “bitter”, we can distinguish them subjectively. But if we discard our subjective insight into them, as you are proposing, and replace them with vague objective descriptions (vague because no one knows precise descriptions of the full gamut of atomic configurations which implement pain), then you take a step backwards. You can’t distinguish a brain-scan of someone seeing purple from a brain-scan of someone tasting bitter. Basing semantics on objective facts, or “reality” as you call it, only works if you know which fact is which. You are promoting something which sounds good, but doesn’t work as a research program. Of course it works just fine at getting applause from an audience of dualism-haters.
multiple realisability
There are multiple representations
Are you talking about realisations or representations?
Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counter-argument.
No one has made that argument. The point is not that it is not ultimately true that subjective states are brain states; it is that rejecting the subjective entirely, at this stage, is not useful. Quite the reverse. Consciousness is the only thing we know from the inside—why throw that away?
Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.
But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.
We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.
We do, on the other hand, know subjectively what pain feels like.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
You can’t distinguish a brain-scan of someone seeing purple from a brain-scan of someone tasting bitter.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it’s more practical for you to show the brain something purple and ask it to rate how bitter that felt, from 1 to 5, I have no problem with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand it, is that you believe these two methods to be radically different, when they are not. It’s as if you assume something is real just because it comes out of people’s mouths.
realisations or representations
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
No one has made that argument.
Parts of my text are referring to the arguments I saw on Wikipedia under “multiple realizability”. But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.
and your reasons for saying that are entirely non-empirical
I’m still waiting for your empirical reasons why “purple is not bitter”, or better yet, “purple is not a chair”, if you feel the concept of bitterness is too subjective.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
But it is not much of an argument for using semantics grounded in (physical) reality. Doing so does not buy you maximum precision in absolute terms, and, what is worse, the alternative, of grounding terms for types of experience in 1st person experience, can give you more precision.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires.
You may believe that, but do you know it?
But if it’s more practical for you to show the brain something purple and ask it to rate how bitter that felt, from 1 to 5, I have no problem with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand it, is that you believe these two methods to be radically different, when they are not.
The difference is that I accept the possibility that first person evidence could falsify 3rd person theory.
It’s as if you assume something is real, just because it comes out of people’s mouths.
I’m not taking 1st person to mean 3rd person reports of (someone else’s) 1st person experience.
Doing so does not buy you maximum precision in absolute terms
What sort of precision are you talking about? More generally, you’ve repeatedly said that the concept of consciousness is very useful. I don’t think I’ve seen that usefulness. I suspect that elaborating here is your best bet to convince me of anything. Although even if you did convince me of the usefulness of the term, that wouldn’t help the “robot pain” problem much.
You may believe that, but do you know it?
That’s a slightly weird question. Is it somehow different from “why do you believe that”? I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary. It’s very likely that “zapping” isn’t quite sufficient, depending on how generously you interpret that word. But the idea that something cannot be learned through physical experiment demands a lot of serious evidence, to say the least.
I’m not taking 1st person to mean 3rd person reports of 1st person experience.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain than if written down on paper. Obviously, paper is slower and less accurate. But you seem to be implying a more fundamental difference between those two methods of data storage. Why is that?
A realisation of type X has type X, a representation of type X has type “representation”.
I like type theory. Let X be what I’m sitting on. The type of X is “chair”, the type of “chair” is “category”, a painting of X is a representation of X, it is not a representation of “chair”. Representations of “chair”, in the same sense that the painting represents X, might not exist. Somehow I’m quite comfortable saying that an object of type Y is what represents Y. “Instantiates” might be the best word (curiously though, Google uses “represent” to define it). Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
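For what it’s worth, the distinction you’re drawing can be written out in type terms (a hypothetical sketch with made-up Chair and Representation types, not anyone’s actual formalism):

```python
# Hypothetical sketch of the realisation/representation distinction:
# an instance of Chair *is* a chair (a realisation of the type), while
# a painting of that chair is an object of type Representation,
# whatever it happens to depict.

class Chair:
    pass

class Representation:
    def __init__(self, subject):
        self.subject = subject  # what the representation is *of*

x = Chair()                   # a realisation of "chair"
painting = Representation(x)  # a representation of x, not itself a chair

assert isinstance(x, Chair)
assert not isinstance(painting, Chair)
assert isinstance(painting, Representation)
```

On this picture a realisation of type X has type X, and a representation of an X has type Representation regardless of its subject; whether that difference matters for our argument is another question.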
More generally, you’ve repeatedly said that the concept of consciousness is very useful
I have said that actual experience is useful to pin down the meanings of words referring to experience.
You may believe that, but do you know it?
That’s a slightly weird question
Not at all. That there is a difference between belief and knowledge is very standard.
I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary.
There’s an extensive literature of arguments to the contrary,
But the idea that something cannot be learned through physical experiment, demands a lot of serious evidence, to say the least.
It is the idea that you can learn about the inward or 1st person by purely outward or 3rd person means that is contentious.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain than if written down on paper.
No, I am saying that my first person is me, and your first person is you. So my first person information is my experience, not someone else’s report of their experience.
Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
Well, you said that the two R words mean the same thing, when by established usage they don’t. That looks like a source of confusion to me.
Not at all. That there is a difference between belief and knowledge is very standard.
I assure you that none of the beliefs I state here were generated by flipping a coin. They are all to some extent justified. That’s why the question is weird—did you expect me to answer “no”?
There’s an extensive literature of arguments to the contrary
There is extensive literature of arguments in favor of god or homeopathy. Doesn’t make those things real. Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments (likewise for god and homeopathy). However you seem to have read quite a bit more, and you haven’t raised my confidence in the value of that literature so far.
my first person information is my experience, not someone else’s report of their experience.
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences. I’m not saying that this hypothesis is true, I’m only saying that you don’t know it to be false. And if it did happen to be true, then your internal reasoning about your experiences would not be much different from my reasoning about your experiences written on paper (aside from the low precision of our language). Curiously, a physical experiment is more likely to test this hypothesis, than your internal reasoning.
That looks like a source of confusion to me.
It is a potential source of confusion, but that doesn’t mean it’s causing any right now. Maybe if we talked about representations such as paintings, it would cause some. Regardless, I’ll try to use the words you prefer. Debating their differences and similarities is very orthogonal to our main topic.
There’s an extensive literature of arguments to the contrary
There is extensive literature of arguments in favor of god or homeopathy.
You said there was a “lack” of arguments to the contrary, and I pointed out that there wasn’t.
Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments
Then why didn’t you say lack of good arguments? And why didn’t you say what is wrong with them?
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences.
“Magic” is not a helpful phrase.
You need to distinguish ontology and epistemology. Experiences and memories and so on have their physical correlates—ontology—but that does not mean you can comprehend them—epistemology. We might be able to find ways of translating between correlates and experience, but only if we don’t ignore experience as an epistemology. But, again, taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology, because epistemology and ontology are different.
internal reasoning about your experiences
Experience is experience, not reasoning about experience.
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with? Please explain your thought process, I really want to know. You see, productive debate requires some amount of generosity. I may not be polite, but I don’t think you’re illiterate or insane, and I don’t think I nitpick about things this obvious.
Maybe this is a symptom that you’re tired of the whole thread? You know you can stop whenever you want, right?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves. That’s where my “baseless hypothesis” comes from.
To clarify, the hypothesis isn’t a direct response to something you said, it’s a new angle I want to look at, to help me understand what you’re talking about.
“Magic” is not a helpful phrase.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation. I realize that this might not be obvious. Though I feel that this is a natural use of the word.
taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology because epistemology and ontology are different.
That’s fine. There are some things that I’d want to pick on, although I’m not sure which of them are significant. But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with?
Did you mean “consciousness”? To put it bluntly, if you haven’t heard of MR, there is probably a lot you don’t know about the subject.
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation.
You yourself are blocking off the possibility of understanding consciousness, subjectivity and experience by refusing to allow them as prima-facie, pre-theoretic phenomena.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
I get that you hate, hate, hate dualism or anything else that threatens physical monism, but you can’t prove physical monism by begging the question against it. You are doing it no favours.
But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Nobody else has a problem with robot pain as a meaningful possibility. You do because you have removed the first person from your definitions.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
If having experiences is an important part of consciousness, then I’d expect you to reason about them, what induces them, their components, their similarities and differences. This “consciousness in general” phrasing is extremely weird.
Starting there means discarding any other kind of prima-facie evidence.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
The question “where did you start” has some bad assumptions. Of course at first we all have to start from the same naive point. If we did arbitrarily start from different unrelated assumptions, expecting to agree on anything would be weird.
So, what happened is that I started from naive assumptions, and arrived at physicalism. Then when I ask myself a new question, I start from where I last stopped—discarding all of my progress would be weird.
You may think that dropping an initial assumption is inherently wrong, but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary. You might be able to convince me that I do need to keep some similar assumption for technical reasons, but that wouldn’t solve the “robot pain” problem.
The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name. But when I drop consciousness, my life gets easier. How does that work?
I start from where I last stopped—discarding all of my progress would be weird.
There is a difference between a working hypothesis and an unfalsifiable dogma. It seems to you that there is nothing to explain about consciousness because you only accept 3rd-person empirical data, because of your ontology.
You may think that dropping an initial assumption is inherently wrong,
Could you explain what assumption you are dropping, and why, without using the word “magical”?
but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary.
I’d prefer if you settled on one claim.
the “robot pain” problem.
That would be the problem for which there is no evidence except your say-so.
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name.
You can function practically without a concept of gravity, as people before Newton did. What you can get away with theoretically depends on what you are trying to explain. Perhaps there is a gravity sceptic out there somewhere insisting that “falling object” is a meaningless term, and that gravity is magic.
There is a difference between a working hypothesis and an unfalsifiable dogma.
Is my position less falsifiable than yours? No, most statements about consciousness are unfalsifiable. I think that’s a strong hint that it’s a flawed concept.
Could you explain what assumption you are dropping, and why, without using the word “magical”?
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head. I dropped it because I found that physicalism explains everything better. “Better” doesn’t mean that I have all the answers about anything, it just means that the answers consciousness gives are even worse.
I don’t understand what your problem with “magical” is.
I’d prefer if you settled on one claim.
Well, I suppose an assumption could be unnecessary without being meaningless, so the words aren’t identical, but I do refer to the same thing, when I use them in this context. I also recall explaining how a “meaningless” statement can be considered “false”. The question is, why are you so uncomfortable with paraphrasing? Do you feel that there are some substantial differences? Honestly, I mostly do this to clarify what I mean, not to obscure it.
That would be the problem for which there is no evidence except your say-so.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do. That’s a pretty big problem, regardless of what I say. Now, when I ask if this or that idea solves “robot pain” problem, I’m not asking if it produces an actual test, I just ask for a smallest hint that maybe the test could exist.
You can function practically without a concept of gravity, as people before Newton did.
That’s ridiculous. The mathematical law of gravity was written down by Newton, but the concept of gravity, in the sense that “things fall down”, is something most animals have. Do you literally think that nobody noticed gravity before Newton?
most statements about consciousness are unfalsifiable
That’s not the problem.
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head.
The assumption is more that consciousness is something that needs explaining,
I also recall explaining how a “meaningless” statement can be considered “false”.
That’s wrong. If you can put a truth-value on a sentence, it is meaningful.
The question is, why are you so uncomfortable with paraphrasing?
I think it is better to express yourself using words that mean what you are trying to express.
Do you feel that there are some substantial differences?
Yes. “Meaningless”, “immeasurable”, “unnecessary” and “non-existent” all mean different things.
Honestly, I mostly do this to clarify what I mean, not to obscure it.
I think it is likely that your entire argument is based on vagueness and semantic confusion.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do.
There is a real problem of not being able to test for a pain sensation directly.
Why did it take you so long to express it that way? Perhaps the problem is this:
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”. Perhaps you have to use vagueness and confusion to make the invalid inference seem valid.
Wow, so you agree with me here? Is it not a problem to you at all, or just not “the” problem?
Yes. “Meaningless”, “immeasurable”, “unnecessary” and “non-existent” all mean different things.
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement “invisible unicorns are purple” is meaningless. The words aren’t all exactly the same, but that doesn’t mean they aren’t all appropriate.
Why did it take you so long to express it that way?
A long long time ago you wrote: You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned into a problem with the word “pain”. So I assumed you understood that immeasurability is relevant here. Did you then forget?
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”.
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
So I assumed you understood that immeasurability is relevant here
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”.
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly?
No, still not from that.
Why not? Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist? Does “‘robot pain’ is meaningless” follow from the same premise any better?
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly?
Meaningfulness, existence, etc.
Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist?
Huh? It’s perfectly good as a standalone statement, it’s just that it doesn’t have much to do with meaning or measurability.
Does “‘robot pain’ is meaningless” follow from the [we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific] better?
Not really, because you haven’t explained why meaning should depend on measurability.
It is evident that this is a major source of our disagreement. Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
It’s perfectly good as a standalone stament
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods.
Can you define “meaningless” for me, as you understand it?
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods
Where is this going? You can’t stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
A bit too vague. Can I clarify that as “Useless for communication, because it transfers no information”? Even though that’s a bit too strict.
Meaningless statements cannot have truth values assigned to them.
What is stopping me from assigning them truth values? I’m sure you meant, “meaningless statements cannot be proven or disproven”. But “proof” is a problematic concept. You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument. Anyway, isn’t (1.) enough?
Where is this going?
It’s still entirely about meaning, measurability and existence. I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not and what it could follow from. Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”. Or maybe “invisible unicorns cannot be detected” does not follow from “we have no arguments suggesting that maybe ‘invisible unicorns’ could be something detectable”?
What is stopping me from assigning them truth values?
The fact that you can’t understand them.
You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument.
If you can understand a statement as asserting the existence of something, it isn’t meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don’t.
I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not
Because it needs premises along the lines of “what is not measurable is meaningless” and “what is meaningless is false”, but you have not been able to argue for either (except by gerrymandered definitions).
Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”
There’s an important difference between stipulating something to be indetectable … in any way, forever … and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is “true” in some way that has nothing to do with reality.
What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don’t.
I’m trying to understand your definitions and how they’re different from mine.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
What happens if a robot pain detector is invented tomorrow?
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Well, you used it.
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad.
It’s bad because there’s nothing inside the box. It’s just an a priori argument.
I can also use “ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
What happens if a robot pain detector is invented tomorrow?
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
If you have no arguments, then don’t respond.
The implicit argument is that meaning/communication is not restricted to literal truth.
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow?
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam’s razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
But you could not have used it to make a point about links between meaning, detectabiity, and falsehood.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
Now you imply that they possible could be detected, in which case I withdraw my original claim
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue for your “robot pain”, but I think it’s alright.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
I asked you before to propose a meaningless statement of your own.
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
Very low: finite rather than infinitesimal or zero.
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a restriction. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
a contradiction, like “colourless green”
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
starts with your not knowing something, how to detect robot pain
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I’d have to make up an argument why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws—it is devoid of any traces of meaningfulness.
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.
Can you define “meaningless” for me, as you understand it?
It means “does not have a meaning.”
In particular, how it applies to grammatically correct statements.
In general, it doesn’t apply to grammatically correct sentences, and definitely not to statements. It’s possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted.
How do you know?
If you can ask the question, “How do you know?”, then the thing has a meaning. I will show you an example of something meaningless:
faheuh fr dhwuidfh d dhwudhdww
Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don’t know it, then the thing has a meaning.
It only explains the “-less” suffix. It’s fine as a dictionary definition, but that’s obviously not what I asked for. I need you to explain “meaning” as well.
The problem with that is that if the word “meaning” has several meanings you will have a situation like this:
Suppose the word “meaning” has two meanings, A & B. But then we can ask what the word “meanings” means in the previous sentence: does it mean A, or B? If you answer that it means A, then the word “meaning” might have two meanings in the A sense, but five meanings in the B sense. But then we can ask what the word “meanings” means in the previous statement. And it might turn out that if the word “meanings” is taken in the B sense, the statement (about 2 and 5) is only true if we take the fourth meaning of the B sense, while in the 3rd sense, it has 7 meanings in the A sense, and 2 meanings in the B sense. And so on, ad infinitum.
All of that means that we have to accept a basic sense of meaning which comes before all the others if we want to talk about meaning at all. And in that basic sense, statements like that obviously have a meaning, whereas ones like “shirwho h wehjoeihqw dhfufh sjs” do not.
we have to accept a basic sense … And in that basic sense, statements like that obviously have a meaning
Your comment boils down to “It’s complicated, but I’m obviously right”. It’s not a very convincing argument.
Meaning is complicated. It is a function of at least four variables: the speaker, the listener, the message, and the context. It’s also well-trodden ground over which herds of philosophers regularly stampede and everything with the tag of “obviously” has been smashed into tiny little pieces by now.
Your comment boils down to “It’s complicated, but I’m obviously right”.
You’re right about the “I’m obviously right” part, but not the rest. It boils down to “you have to start somewhere.” You can’t start out with many meanings of “meaning”, otherwise you don’t know what you mean by “meanings” in the sentence “I am starting out with many meanings of meaning.” You have to start with one meaning, and in that case you can know what you mean when you say “I am starting with one meaning of meaning.”
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
The chair you are sitting on is a realisation; Van Gogh’s painting of his chair at Arles is a representation. You can’t sit on it.
But the idea that pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts.
That’s very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn’t have phenomenal properties, how do you decide which set of brain states gets labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined that everything must be treated as physics from the outset, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable things. Everything in computer science is MR—all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that “exist apart” from their realisations. So I don’t know where you are getting that from.
In particular, you have to believe this to even ask whether robots feel pain.
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about MR.
“purple is not bitter”
Colour and taste are different categories, therefore category error.
You are not treating the identity of pain with brain states as a falsifiable hypothesis.
No, I’m treating the identity of pain with the memories, thoughts and behaviors that express pain as unfalsifiable. In other words, I loosely define pain as “the thing that makes you say ouch”. That’s how definitions work—the theory that the thing I’m sitting on is a chair is also unfalsifiable. At that point the identity of pain with brain states is in principle falsifiable: you just induce the same state in two brains and observe only one saying ouch. Obviously, there are various difficulties with that exact scheme; it’s just a general sketch of how causality can be falsified.
There are uncontentious examples of multiply realisable things.
I don’t recall suggesting that something isn’t MR. I don’t know why you think that MR is a problem for me. Like I said, there are multiple realizations of pain the same way that there are multiple realizations of chair.
Is that supposed to be a novel theory, or a dictionary definition?
Definition, as I state right in the next sentence, and then confirm in the one after that. Is my text that unreadable?
You’re suggesting pain can’t be instantiated in robots.
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question. In the same way that wondering if some table can also be a chair is a stupid question. If you feel that you need an answer, feel free to choose arbitrarily. But then, if you think that having an answer helps you somehow, you’re probably doing something very wrong.
In the case of a simulated human brain, it might seem more natural to call those states “pain”, but then if you don’t, nobody will be able to prove you wrong.
Is that supposed to be a novel theory, or a dictionary definition?
Definition, as I state right in the next sentence
The question asked for a dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification, using falsifiable empiricism. Uncontroversially, you can also achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard definitions—stipulated, gerrymandered, tendentious—is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, are not in fact talking about free will.
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low-grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way... you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
the redefinition manoeuvre
Definitions aren’t handed from god in stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
And how are you justifying that suggestion?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Using the standard definition of “pain” , it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
Definitions aren’t handed from god in stone tablets
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Who are you communicating to when you use your own definitions?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain.
Says you. Why should I believe that?
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course).
Are you abandoning the position that “robot in pain” is meaningless in all cases?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”, that would be ridiculous; the problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition, we have the usual classification problem, and with your definition the phrase is meaningless.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
Who are you communicating to when you use your own definitions?
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definition to the best of my ability, and point out the problems in those. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked, suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
Remember when you offered a stupid proof that “purple is bitter” is category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
But using them proves nothing?
Yes, definitions do not generally prove statements.
I am wondering who you communicate with when you use a private language.
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be convincing. If you change the definition of pain to remove the subjective, felt aspect, then the resulting problem is easy to solve... but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other members of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements, you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Meaninglessness is not the default.
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me, how it makes sense to you.
Other members of your language community are willing to discuss things like robot pain. Does that bother you?
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
If definitions do not prove statements, you have no proof that robot pain is easy.
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases we already have X well defined as Z, and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
If you redefine pain, you are not making statements about pain in my language.
I am, at times, talking about alternative definitions
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
Meaninglessness is not the default.
Well, it should be
That can’t possibly work, as entirelyuseless has explained.
Sure, in a similar way that people discussing god or homeopathy bothers me.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
in your case the definition Z does not exist, so making up a new one is the next best thing.
The ordinary definition for pain clearly does exist, if that is what you mean.
Robot pain is of ethical concern because pain hurts.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
The ordinary definition for pain clearly does exist, if that is what you mean.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here”—has two meanings: one is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too.
another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
“highly unpleasant physical sensation caused by illness or injury.”
Of course, now I’ll say that I need “sensation” defined.
Have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions for how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Obviously, anything can be of ethical concern, if you really want it to be
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
“the concept of preference is simpler than the concept of consciousness”
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
“consciousness is generally not necessary to explain morality”, which is more of an opinion.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, now I’ll say that I need “sensation” defined.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
That’s because I have never considered “Is X a concept” to be an interesting question.
You used the word; surely you meant something by it.
At that point proper definitions become necessary.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
and also don’t want to talk about consciousness.
What?
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
I’ll need “defined” defined
In this case, “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your apriori ontology.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
You used the word , surely you meant something by it.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as in proper scotsman?
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
and also don’t want to talk about consciousness.
What?
You keep saying it’s a broken concept.
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain?
That anything should feel like anything.
Proper as in proper scotsman?
Proper as not circular.
Circular as in
“Everything is made of matter. Matter is what everything is made of”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
That anything should feel like anything.
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Circular as in
“Everything is made of matter. Matter is what everything is made of”?
Yes, if I had actually said that. By the way, matter exists in your universe too.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If it refers to something else, then I’ll need you to paraphrase.
If you want to know what “pain” means, sit on a thumbtack.
You can say “torture is wrong”, but that has no implications about the physical world
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
They only need to know about robot pain if “robot pain” is a phrase that describes something.
As I have previously pointed out, you cannot assume meaninglessness as a default.
morality, which has many of the same problems as consciousness, and is even less defensible.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
That is a start, but we can’t gather data from entities that cannot speak
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm how exactly that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
Are there classes of conscious entity?
Morality or objective morality? They are different.
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm how exactly that would work.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
but I don’t necessarily understand what it would mean for a different kind of mind.
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
It seems you are no longer ruling out a science of other minds
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
I’ve already told you what it would mean
Where exactly?
Is the first half of the conversation meaningful and the second half meaningless?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The idea that, with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery is in general a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic).
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?
“Please give an example of a subjective experience, other than consciousness, that has no physical evidence.”
All subjective experiences, including consciousness, are correlated with objective descriptions. E.g. a person who is awake can be described in ways objectively distinct from a person who is asleep. So there is always evidence for subjective experience. But that does not reduce the meaning of having a subjective experience to some objective description.
So for example “I am conscious” does not signify any objective description, but is correlated with various objective descriptions. Likewise, “I currently seem to see a blue object,” does not signify any objective description, but it is correlated with various objective descriptions.
The things are correlated. For example, every time I am awake and conscious, I have a relatively undamaged brain. So if someone else has an undamaged brain and does not appear to be sleeping, that is evidence that they are conscious.
I’m with EntirelyUseless. You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word “pain”.
There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, “purple is bitter”, the first way is clearly more appropriate. If we look at “robot feels pain”, it’s hard for me to tell which way I prefer.
I don’t think you have established any problem of meaning, so the question of which problem doesn’t arise.
Here is my claim that “robot feels pain” is a meaningless statement. More generally, a question is meaningless, if an answer to it transfers no information about the real world. I can answer “is purple bitter” either way, and that would tell you nothing about the color purple. Likewise, I could answer “does this robot feel pain” and that would tell you nothing about the robot or what you should do with it. At best, a “yes” would mean that the robot can detect pressure or damage, and then say “ouch” or run away. But that’s clearly not the kind of pain we’re talking about.
Since you are equating reality with objectivity, you are simply declaring statements about subjectivity meaningless by fiat.
That’s because it is a category error.
Of course it tells me what I should do. It’s ethically relevant if a robot feels pain. If it feels pain when damaged, I should not damage it.
How do you know? You are assuming that no robot can have a real subjective sensation of pain, and you have no way of knowing that one way or the other, and your arguments are question-begging and inconsistent. (If “robots do (not) feel pain” is meaningless, as you sometimes say, it cannot also be false, as you sometimes also say.)
I’m claiming that all subjective experiences have objective descriptions. Please give an example of a subjective experience, other than consciousness, that has no physical evidence. Obviously, I will try to argue that either there is something objective you missed, or that the subjective experience is as poorly defined as consciousness.
But “robot pain” isn’t? How did you come to those conclusions?
That’s not how this works. Rats feel pain without a doubt, but we destroy them quite freely. Whether you will damage the robot is decided by many factors. E.g. if there is some benefit to the damage, if the robot will scream out in pain, or if it’s likely to damage you in return. The robot’s subjective experience of pain only matters if you decide that it matters—this is true for all categories, no matter how artificial.
Are you asking about the “at best” part? Because the rest of that sentence seems quite mundane. Here “at best” is about the limits of my own imagination. You’re welcome to suggest something better.
That’s not contradictory, even if slightly inconsistent. “is purple bitter” is a meaningless question, but the answer “no” is clearly a lot more appropriate than “yes”. The line between falsehood and nonsense is quite blurry. I think we can freely call all nonsense statements false, without any negative consequences.
That’s not how it works either. You can’t infer zero moral relevance of some factor by noting that other factors can countervail.
I’m not morally omniscient. The robot’s experience of pain matters if it features in some scheme of ideal moral reasoning. To put it another way, you just proved that nothing is morally relevant, if you proved anything at all.
Well, you do seem to have a subjective intuition that robots will never feel pain. Others intuit differently. What happened to all the science stuff?
Gosh, I really don’t want to start talking about morality now. But I have to point out that the “bitterness of purple” can also matter, if it features in some scheme of ideal moral reasoning. At least if you accept that this moral reasoning could require arbitrary concepts and not just ones grounded in reality.
No, I ran a deterministic procedure in my brain, called “is X well defined”, on “robot pain”, and it returned “no”. It’s only subjective in the sense that mine is different from yours, if you have such a procedure at all. The procedure, by the way, works by searching for alternative definitions of things, such that the given concept is neither trivial nor stupid. Unfortunately, failure to find such definitions does not produce a proof of non-existence, so I’m quite open to the idea that I missed something, it’s just that you inspire little confidence.
I did not mean to imply that ideal moral reasoning is weird and unguessable... only that you should not take imperfect moral reasoning (whose?) to be the last word. The idea that deliberately causing pain is wrong is not contentious, and you don’t actually have an argument against it.
That’s the sense that matters.
That’s not a very interesting sense. Is height also subjective, since we are not equally tall? This sense is also very far from the magical “subjective experience” you’ve used. I guess the problematic word in that phrase is “experience”, not “subjective”?
Height is not a subjective judgement because it is not a judgement. If judgements are going to vary, that matters, because then who knows what the truth is?
I would say that almost none have descriptions, where description means description. But it sounds as though you might actually be talking about physical correlates.
I can’t make much sense of that, since all subjective experiences occur within consciousness.
I don’t know why you think I am denying that changes in consciousness have correlations in physical activity.
I am pointing out that we cannot determine much about conscious subjective states from external physical evidence, because we don’t know how to work back from the one to the other. We can’t recover the richness of conscious experience from externals. But we know it is there, in ourselves at least, because being conscious means having access to your own conscious experience. You are putting the blame on consciousness itself, saying it is a nothingy thing, when the problem is your techniques.
The requirement that everything be rooted in (external) reality in order to be meaningful is unreasonable, because, in cases like this, it requires you to have a sort of omniscience before you can talk at all. (It’s fine to define temperature as what thermometers measure once you have accurate thermometers.)
You see, I’m proposing the radical new view that the world is made of atoms and other “stuff”, and that most words refer to some configurations of this stuff. In this view “pain” doesn’t just correlate with some brain activity, it is that brain activity. The brain activity of pain is an objective fact, and if you were to describe that objective fact, you get an objective description. In this view, the existence of human pain is as real as the existence of chairs. But the question of “do robots feel pain”, is as interesting and meaningful as “are tables also chairs”.
I’m pointing out that you cannot work out one from another, because your concept of consciousness has no properties or attributes that are more grounded in reality than consciousness itself. You need to play rationalist taboo. If you defined consciousness as “ability to process external events” or “ability to generate thoughts” or “the process that makes some people say they’re conscious”, finding a correspondence between consciousness and brain states would be possible, even if not easy. But you seem to refuse such definitions, you call them correlates, which suggests that there could be a consciousness that satisfies none of them. I’m pointing out that even if you had complete knowledge about everything going on in a particular brain, you still wouldn’t be able to tell if it is conscious, because your concept of consciousness is broken.
It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter, and also without denying that it eventually does. Treating pain semantically as some specific brain activity buys you nothing in terms of the ability to communicate and understand... when you don’t know which precise kind... which you don’t. If Purple and Bitter are both Brain Activity Not Otherwise Specified, they are the same. If you can solve the mind-body problem, then you will be in a position to specify the different kinds of brain activity they are. But you can also distinguish them, here and now, using the subjectively obvious difference. And without committing yourself to evil dualism.
I have never claimed otherwise. In fact, there is literally nothing that I have an exact description of, in terms of matter—neither pain nor chairs. But you have to know something. I know that “a chair is what I sit on”, and from that there is a natural way to derive many statements about chairs. I know that “gravity is what makes things fall down”, and from that there is a fairly straightforward way to the current modern understanding of gravity. There is nothing that you know about consciousness from which you can derive a more accurate and more material description.
It buys me the ability to look at “do robots feel pain” and see that it’s a stupid question.
What evil dualism?
How do you know? And what of things like https://en.wikipedia.org/wiki/Global_Workspace_Theory ?
It doesn’t seem to have given you the ability to prove that it is a stupid question.
Well, for one, you have been unwilling to share any such knowledge. Is it a secret, perhaps?
I see a model that claims to reproduce some of the behaviors of the human mind. Why is that relevant? Where are your subjective experiences in it?
Also, to clarify, when I say “you know nothing”, I’m not asking for some complex model or theory, I’m asking for the starting point from which those models and theories were constructed.
Proof is a high bar, and I don’t know how to reach it. You could teach me by showing a proof, for example, that “is purple bitter” is a stupid question. Although I suspect that I would find your proof circular.
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
I was responding to your claim that “there is nothing that you know about consciousness, from which you can derive a more accurate and more material description.”. This has been done, so that claim was false. You have shifted the ground.
Purple is a colour, bitter is taste, therefore category error.
Then why be so sure about things? Why not say “dunno” to “can robots feel pain?”.
While GWT is a model, it’s not a model of the consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it. I ask you if it has subjective experiences, because that seems to be the most important aspect of consciousness to you. If you can’t find them in this model, then the model is on my side, not yours.
That’s ridiculous. Grapefruit is a fruit, bitter is taste, but somehow “grapefruit is bitter” is true and not a category error.
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple. Maybe we mean different things when we say “proof”?
That’s still an irrelevant objection. The issue is whether the concept of consciousness can be built on and refined, or whether it should be abandoned. GWT shows that it can be built on, and it is unreasonable to demand perfection.
Is that worse than saying you know things you don’t know?
Sometimes different people use the same words to mean different things. I deduce that GWT does not build on consciousness as you understand it, because it doesn’t have the most important feature to you. It builds on consciousness as I understand it. How is that irrelevant?
You mean, is saying “dunno” to everything worse than saying something is true without having absolute 100% confidence? Yes. What kind of question is that?
Also, why did you quote my “category error” response? This doesn’t answer that at all.
If we assume that the sweet spot is somewhere between 0% and 100%, are you sure you are saying “dunno” enough?
Quite sure. How about you?
And, again, what sort of question is that? What response did you expect?
Why is the question about the “are tables also chairs” not meaningful? Structured knowledge databases like Wikidata have to answer that question.
Imagine that a country has a general tariff for furniture and there’s a tariff exemption for chairs. One clever businessman who sells tables starts to say that his tables are chairs. In that case, the question can become important enough that a large sum of money is spent on a legal process to answer the question.
This seems like a good comment to illustrate, once again, your abuse of the idea of meaning.
There are two ways to understand this claim: 1) most words refer to things which happen also to be configurations of atoms and stuff. 2) most words mean certain configurations of atoms.
The first interpretation would be fairly sensible. In practice you are adopting the second interpretation. This second interpretation is utterly false.
Consider the word “chair.” Does the word chair mean a configuration of atoms that has a particular shape that we happen to consider chairlike?
Suppose someone approached a chair in your house with an atomic microscope and discovered that it was not made of atoms, but was a continuous substance without any boundaries in it. Would you suddenly say that it was not a chair? Not at all. You would say “this chair is not made of atoms.” This proves conclusively that the meaning of the word chair has nothing whatsoever to do with “a configuration of atoms.” A chair is in fact a configuration of atoms; but this is a description of a thing, not a description of a word.
This could be true, if you mean this as a factual statement. It is utterly false, if you mean it as an explanation of the word “pain,” which refers to a certain subjective experience. The word “pain” is not about brain activity in the same way that the word “chair” is not about atoms, as explained above.
I would just note that “are tables also chairs” has a definite answer, and is quite meaningful.
I would say that being a chair (according to the meaning of the word) is correlated with being made of atoms. It may be perfectly correlated in fact; there may be no chair which is not made of atoms, and it may be factually impossible to find or make a chair which is not. But this is a matter for empirical investigation; it is not a matter of the meaning of the word. The meaning of the word is quite open to the possibility that there is a chair not made of atoms. In the same way, the meaning of the word “consciousness” refers to a subjective experience, not to any objective description, and consequently in principle the meaning of the word is open to application to a consciousness which does not satisfy any particular objective description, as long as the subjective experience is present.
I explicitly added “other stuff” to my sentence to avoid this sort of argument. I don’t want or need to be tied to current understanding of physics here.
But even if I had only said “atoms”, this would not be a problem. After seeing a chair that I previously thought was impossible, I can update what I mean by “chair”. In the same, but more mundane, way, I can go to a chair expo, see a radical new design of chair, and update my category as well. The meaning of “chair” does not come down from the sky fully formed, it is constructed by me.
I want to see that.
Then you should admit that words like “chair” or “consciousness” do not have anything about physics in their meaning.
Tables are not chairs.
I was hoping to see the reasoning behind that. Where does the answer come from? Obviously, you chose it arbitrarily.
For one thing (not the only thing), chairs are things that are normally used for sitting. Tables are not normally used for sitting, so they are not chairs. Nothing arbitrary about that reasoning.
Where do those definitions come from? Do you know what “arbitrary” means? By the way, I have chairs that I have never sat on, and there are tables I’ve sat on quite a bit. What is “normally”?
The meaning of words comes from people’s usage (which is precisely why words do not mean anything like what you think they do.)
Yes.
The vast majority of tables are rarely or never sat on. The table in my house has never been sat on. The vast majority of chairs are frequently sat on, like the ones in my house. It may not be the only normal thing, but certainly what happens in the vast majority of cases is normal.
Also, I said “for one thing.” Even if people normally sat on tables, they would not be chairs, because they do not have the appropriate structure, just as benches are not chairs.
Why do people use the words that way?
Also, I’d point out that what I mean by “chair” is not equivalent to people’s usage. You could call it “reverse engineered” from people’s usage. There are some differences. Do you know where those come from?
Obviously I don’t even know how most people use those words—I only know about my acquaintances and people on TV, I could be living in a bubble, I could be using many words wrong.
Stools are chairs, but benches are just wide stools. So if I have a small table (such as a coffee table), and use it for sitting, it’s not a bench, it’s a stool and therefore a chair?
In case it’s not obvious what I’m doing, I intend to ask you these stupid questions until you realize that they are stupid questions, that they don’t matter and that the correct way to answer them is to pull answers out of your ass (i.e. arbitrarily).
Roughly speaking, because if one performed factor analysis on their life experiences, they would have factors more or less corresponding to the words they use.
Yes, largely from your own experience of the usage contexts of the word chair, which as you say could be somewhat different from the overall usage patterns, although it is unlikely that there are large differences.
No. As I said before, “for one thing.” There are still reasons why a coffee table would not suddenly become a stool, even if in fact you use it for that.
That’s incorrect, so I won’t realize that no matter how many such questions you ask.
That said, it is true that we extend words to additional things when we think the new things are similar enough to the old things.
The problem of “consciousness” is that we have no idea how similar the new thing is to the old thing, no matter how many objective descriptions we come up with for the new thing. That is: the problem is that we have no idea whether the robot is conscious or not, no matter what objective facts we know about the robot. This does not mean that we can arbitrarily decide to say “let’s extend the word conscious to the robot.” It’s like this situation: there is an object behind a screen, and you are never allowed to look behind the screen. Should we call the object a “chair” or not?
The fact that language is both vague and extendible does not suddenly entitle you to say that we can say that an object behind a screen is either “a chair” or “not a chair” without first looking behind the screen. And in the case under discussion, no one yet knows a way to look behind the screen, and possibly there is no such way.
What makes you believe that? Ideally we’d want something like this to be true, but assuming that it is true seems a bit naive. There are also some serious technical problems with the idea (how do you quantify experiences? what do you do when different people have different experiences but have to use the same words? etc).
I don’t think you know what “arbitrary” means. It does not mean “completely random”. In a deterministic world everything has some explanation. It’s just that sometimes the explanations are long, bizarre and kind of stupid, if you look at them.
Likewise, we cannot say whether some new object is a chair only by knowing objective facts about the object. We also need to know what the word “chair” refers to. And in the case that our definition of “chair” doesn’t help us, we’re going to have to extend it in some arbitrary way.
Replace “you are never allowed” with “it is impossible”. Then I will suggest that the object does not exist.
If nothing like that were true, words would actually be arbitrary, as you suppose. Then if we looked for reasonable boundaries, they would fall in random places. For example, it might turn out that the word “chair” refers in some cases to physical objects that people sit on, and in other cases to tiny underground animals. Words don’t work like this, which gives me good reason to think that something like this is true. It is not “assuming” anything.
My point is that the meaning of words is not long, bizarre, and stupid. The meanings of words are actually quite reasonable. Also, you are mistaken about the meaning of “arbitrary”. It does in fact mean not having a reason; you are just saying that sometimes “this doesn’t have a reason” is shorthand for saying that it doesn’t have a good reason. But even understood like this, the meaning of words is not arbitrary.
We will have to extend it, but it will not have to be in an arbitrary way. There may be a good reason why we extend it the way that we do, not a stupid reason.
Then your suggestion is false; even if there is a screen that you cannot go behind, it would not mean that there is nothing inside it. You cannot look beyond the event horizon of the visible universe, but there are things beyond it.
“Chair” happens not to refer to animals, although it can mean “chairman”. “Stool” can refer to several things, including poop. Finally, “shrew” refers both to a tiny animal and an annoying woman. Words do work like this. Surely there are some bizarre historical reasons why the words got those two meanings. But you have to admit that these reasons have very little to do with the properties of small animals and annoying women.
I’m not saying that there are no forces working to simplify language. But there is a very large gap between that and “factor analysis on life experiences”.
“Arbitrary” literally means “decided by an arbiter”, as opposed to “decided by the rule of law”, i.e. there was a question that “the law” couldn’t answer directly, and an arbiter had to decide “arbitrarily”. It doesn’t mean that the arbiter flipped a coin.
Going back to chairs and tables, you can always find some excuse why a coffee table I sit on isn’t a chair, and I can always find some excuse why it should be (I mean, how the hell is http://www.ikea.com/us/en/catalog/products/20299829/ not a stool?). We could live perfectly well in a world where my excuses are right and yours are wrong. The reasons we don’t are long, bizarre, and stupid. That falls under “arbitrary” perfectly well.
I’d rather not bring modern physics into this, but I have to point out that suggesting those things don’t exist will cause no problems to anyone. At worst, this suggestion would lose to another idea through Occam’s razor.
Also, cases where a “single” word has two meanings are cases of two words. They are not examples of words that have arbitrary meanings.
You first said that if there was nothing like factor analysis, some words would have two unrelated meanings, then I point out that lots of words have two unrelated meanings, and now you say that one word with two meanings is two words (by definition, I assume?), contradicting your own claim we started from. Do you see how bad this looks?
Sure, there are words that have different meanings and different origins, that, thanks to some arbitrary modifications, end up sounding the same. There is an argument to discount those. But lots of words do have the same origin and the new meaning is a direct modification of the old one.
You misunderstood. The point is that if there was not some common meaning, the applications of a word would be random. This does not happen in any of the cases we have discussed, and two entirely unrelated usages are cases of two words.
This is true, and there is nothing stupid or arbitrary about this way of getting a secondary meaning.
I have never denied that the ways different people use the same words are similar. This however does nothing to support your “factor analysis” theory, nor does it have anything to do with words that have multiple unrelated meanings.
This is a claim with no justification. The whole “one word is two words” formulation is inherently bizarre. Of course, saying that “committee chair” and “armchair” are both “chairs” doesn’t mean the two things are actually similar. Likewise, putting both “armchair” and “stool” under one label does not reduce their differences, and putting “stool” and “coffee table” in different categories does not reduce their similarities.
Sure it does. People use words in similar ways because their lives have similar factors.
Technically, there are no such words. As I said, these are multiple words that use similar spellings.
Consider these two statements:
1) A committee chair and an armchair are both “chairs.” 2) A committee chair and an armchair are both chairs.
The first statement is true, and simply says that both a committee chair and an armchair can be named with the sound “chair”.
The second statement, of course, is utterly false, because there is no meaning of “chairs” that makes it true. And that is because there is not a word that has both of those meanings; there are two words which are spelled and spoken alike.
In fact, using different names adds a difference: the fact that the things are named differently. Still, overall you are more right than wrong about this, even though you have the tendency to ignore the real reasons for names in favor of appearances, as when you say that “pain” means “what makes someone say ouch.” Obviously, if someone says “ouch” because he wishes to deceive you that he is feeling pain, pain will not be the wish to deceive someone that he is feeling pain. Pain is a subjective feeling; and in a similar way, a coffee table is not merely something that has a certain shape, but something that was made for a certain intended use, even if you personally use it for sitting.
No, people use words in similar ways, because they want to communicate with each other. And because word meanings are usually inherited rather than constructed. It’s not false that the factors are usually similar, but not all true statements follow from one another. Some people with very different factors may use words similarly and others with similar factors will eventually use them differently.
Again, nobody thinks that the two things are similar or share properties, but that’s exactly what you asked for. If you want a milder example, I can offer “computer”, which can refer to an electronic device or to a human who does arithmetic (old usage). The two meanings are still very different, but they do share a property (they both compute), and it’s easy to see that a sentence “I had computers calculate this solution” is natural and could refer to either (or both). At the same time, using two different words for them (e.g., let’s call humans who compute “analysts”) would also be natural. The reasons we don’t use two words have very little to do with the properties of humans or electronic devices.
Etymology is not meaning.
That’s not up to you; you made the argument that if there is a screen that you cannot look behind, there is nothing behind it. That argument is false.
The suggestion will be false, whether or not it causes problems for anyone.
Exactly like suggesting that other people’s conscious experiences do not exist, since this would mean that the reason for your own talk about your own experiences differs from the reason for other people’s talk about their experiences. There is no reason to believe in such a difference.
It would be, if word meanings weren’t assigned in a stupid way. And it does usually help to understand the word.
By “problem” I meant “contradiction”. Contradictions are how we establish what is true and what is false.
Maybe you didn’t want to quote the “at worst” part? Because now I almost agree with you.
Wrong. There’s nothing stupid about this process; on the contrary, it would be stupid to attempt to make meaning exactly correspond with etymology.
They are one way, not the only way. Your positing that something doesn’t exist because you cannot access it is absurd, contradiction or not.
That’s a bold claim. God forbid words have consistent meanings over time!
Going back to the word “arbitrary”, do we actually disagree about what it means, or is this just a useless tangent? You said it means “without any reason”. The obvious problem is that almost everything has some reasons, including whims of small children, delusions of the insane and results of fair coin flips. I suggest that if the word meant “without good reason”, the word would be more useful.
We probably do not disagree about what it means, but we disagree about what we are saying it means. I do say it means without any reason, although we can take it more specifically as “without any reason of the kind we are currently thinking about.”
If we take it as I suggested, it would be possible in some cases to mean “without good reason,” namely without a reason of the kind we are currently thinking about, namely a good one.
In general, this topic came up because you were asserting that questions like “are tables also chairs” are stupid and only have arbitrary answers. If arbitrary means that there cannot be a good reason, then you are mistaken, because we have good reason for saying that tables are not chairs, and the stupidity would only be in saying that they are chairs, not in saying that they are not.
In regard to the issue of consciousness, the question is indeed a useless distraction. It is true that words like “pain” or even “consciousness” itself are vague, as are all words, and we exercise judgement when we extend them to new cases. That does not mean there is never a good reason to extend them. But more importantly, when we consider whether to extend “chair” to a new case, we can at least see what the thing looks like. In the case of the robot, we do not know what the robot’s feeling is like, or even if it has any. So we cannot compare it to our feeling of pain. And so the case is quite different from the case of the chair: as I said before, it is like asking if an unknown object behind a screen is a chair or not. Unknown, but definitely not arbitrary.
What sort of reasons are we currently talking about though? I want to hear reasons based on the properties of the objects being classified. You seem to accept whatever reasons you can come up with.
Here’s an example of weird reasons. How can shape not determine the difference? If IKEA made two identical objects and labeled one “chair” and another “table”, would they then actually be different objects? IKEA can have whatever intentions they want, but http://www.ikea.com/us/en/catalog/products/20299829/ is a stool. Are you seriously telling me that it isn’t? Now, one could say that a stool can be both a chair and a table, and I think that’s what IKEA does, but then you’ve already claimed this to be impossible.
That’s assuming that “feeling” is a meaningful category. If you didn’t start from that assumption, and instead identified your experiences with brain states, you could go one step further and ask “are the states of the robot’s processor/memory similar to my brain states”, but then you hit the obvious classification problem. There are some similarities and there are some differences, and you have to choose which of those are the most important to you, and there is no one right way to do it. Lack of knowledge isn’t the main problem here.
“Properties of the objects being classified” are much more extensive than you realize. For example, it is property of pain that it is subjective and only perceived by the one suffering it. Likewise, it is a property of a chair that someone made it for a certain purpose.
The intention of the one who makes a chair is relevant, but not necessarily completely determinate. If someone says “I am making a chair,” but it turns out that the thing has the shape of a hammer, it still will not be a chair.
In most cases of that kind, the thing being called a table really is a table, and not a stool. Obviously I cannot confirm this in the particular case since I do not intend to buy it. But it is related to the fact that it is made for a certain purpose, as I said. In other words, in most cases the thing is not suitable for use as a stool: it might collapse after one occasion of sitting on it, or anyway after several days. In other words, being made as a table, it is physically unsuitable to be used as a seat. And consequently if it did collapse, it would be quite correct to say, “This collapsed because you were using it as a stool even though it is not one.”
That said, I already said that the intention of the makers is not 100% determining.
That’s not subject to falsification, in the same way that it is not subject to falsification that the thing I am sitting on is called a “chair.” In other words, I already notice the similarity between all the things that are called feelings in the same way that I notice the similarity between chairs.
Talk about assumptions. I assume, and you are assuming here, that I have a brain, because we know in most cases that when people have been examined, they turned out to have brains inside their heads. But the fact that my toe hurts when I stub it, is not an assumption. If it turned out that I did not have a brain, I would not say, “I must have been wrong about suffering pain.” I would say “My pain does not depend on a brain.” I pointed out your error in this matter several times earlier—the meaning of pain has absolutely nothing at all to do with brain activities or even the existence of a brain. As far as anyone knows, the pain I feel when I stub my toe could depend on a property of the moon, and the pain I feel when I bump into a lamppost on a property of Mt. Everest. If that were the case, it would affect in no way the fact that those two pains feel similar.
This is completely wrong, for the reason I just stated. We are not talking about similarities between brain states—we are talking about the similarity of two feelings. So it does not matter if the robot’s brain state is similar to mine. It matters whether it feels similar, just as I noted that my different pains feel similar to one another, and would remain feeling similar, even if they depended on radically different physical objects like the moon and Mt. Everest.
When exactly is the intention relevant? If two objects have the same shape but different intended uses, and you still classify them the same, then the intention is not relevant. More generally, if we have variables X, Y and want to test if a function f(X,Y) depends not only on X, but also on Y, we have to find a point where f(X1, Y1) is not equal to f(X1, Y2). Are you saying that sometimes intention matters, just not for chairs? If not, I can only assume that intention doesn’t determine anything and only shape is important.
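The dependence test above can be made concrete with a toy sketch (all names and the toy classifier here are illustrative, not anything either of us has actually proposed): f depends on Y only if there is some fixed X for which changing Y changes the output.

```python
def classify(shape, intention):
    # Toy classifier: shape alone decides; intention is ignored.
    return "chair" if shape == "chair-shaped" else "table"

def depends_on_intention(f, shapes, intentions):
    # f depends on intention iff some fixed shape yields different
    # classifications for different intentions, i.e. we can find
    # f(X1, Y1) != f(X1, Y2) for the same X1.
    return any(
        f(x, y1) != f(x, y2)
        for x in shapes
        for y1 in intentions
        for y2 in intentions
    )

print(depends_on_intention(classify,
                           ["chair-shaped", "table-shaped"],
                           ["for sitting", "for eating"]))  # False: intention has no effect here
```

If intention were causal rather than merely correlated with shape, the test would find a pair of identical shapes classified differently, and return True.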
Yes, in this case by “meaningful” I meant, “can be meaningfully extended to robots”. I admit that this wasn’t clear.
Indeed, if it turned out that you did not have a brain or anything equivalent, you would have falsified my theory. Falsification is a good thing. Unfortunately for you, you do have a brain.
Are you going to feel the robot’s feeling and compare?
Intention matters for chairs as well. I am saying that all of the circumstances are relevant, and one factor like intention may not completely settle it, but it is one of the factors that determine the matter.
Unquestionably, it can be meaningfully extended to robots. You simply mean the same thing that you mean in the regular case. The only question is whether there is any feeling there, not if “feeling” has a meaning, since we already admitted that it does have a meaning.
The possibility of falsification is a good thing for a physical theory. It is a very bad thing for a theory of the meaning of a word. As you already admitted, the fact that the pieces of furniture we normally sit on are called chairs is not subject to falsification, because that is just what is meant by “chair.” But a physical theory of a chair, e.g. “chairs are made of atoms,” is falsifiable, since someone could examine a chair and discover that it was not made of atoms. He would not then say, “We have discovered that ‘chair’ meant something different from what we thought.” He would say, “We knew what ‘chair’ meant, and that is unchanged, but we have learned something new about the physical constitution of chairs.”
In the same way, I am referring to certain feelings when I talk about “pain.” The fact that the word pain refers to those feelings cannot be falsified, because it is just what that word means. But whether pain depends on a brain activity is a falsifiable physical theory; it has nothing to do with the meaning of the word “pain.”
Assuming that I do, that is fortunate, not unfortunate. But as I was saying, neither you or I know that I do, since neither of us has seen the inside of my head.
No. The question is not whether the robot has a feeling which feels similar to me as my feeling of pain; the question is whether the robot has a feeling that feels to the robot the same way that my feeling feels to me. And since this has two subjects in it, there is no subject that can feel them both and compare them. And this is just how it is, whether you like it or not, and this is what “pain” refers to, whether you like it or not.
Can you actually support your claim that intention matters? To clarify, I’m suggesting that intention merely correlates with shape, but has no predictive power on its own.
It’s somewhat complicated. “Experiences are brain states” is to an extent a theory. “Pain is the state that follows stubbing your toe and precedes saying ouch” is more of a pure definition. Similarly we could say that the sun is “giant ball of gas undergoing nuclear fusion” or we could say that it is “a bright spot in the sky”—the first is more of a theory than the second, but somehow I’m comfortable calling both of them “definitions”.
Well, if two things can be similar (or the same), then there has to be some way to compare them, right?
Artificial things are made for a purpose, and being made for a purpose is part of why they are called what they are called. This is an obvious fact about how these words are used and does not need additional support.
If you mean pain is the conscious state that follows in that situation, yes; if you mean the third-person state that follows, no.
Then you should wake up and stop being comfortable; the second is a better definition, exactly for that reason. Definitions shouldn’t be falsifiable, and are not physical theories.
No. The stars outside event horizon of the visible universe are similar to the stars that we can see, but there is no way to compare them.
One can however ask the question, “Are the stars similar?” and one can answer yes or no. In the same way we can ask if the robot feels like we do, and we can say yes or no. But there is no access to the answer here, just as there is no access in the case of the stars. That has nothing to do with the fact that either they are similar, or they are not, both in the case of the robot, and in the case of the stars.
Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I’m asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you’re supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole “does not need additional support” thing inspires no confidence.
No, if tomorrow I found out that the “bright spot in the sky” is not a giant ball of gas undergoing fusion, but a powerful flashlight orbiting earth, I’m going to stop calling it “sun”.
I hate bringing up modern physics, it has limited relevance here. Maybe they’ll figure out faster than light travel tomorrow, and your point will become moot. But if we must...
If we insist that something beyond the event horizon exists (I’d love to see how you define that word), we may still claim that the objects beyond it are similar to the objects here, if we have some arguments to support that. A heavy dose of Occam’s razor helps too. Note though, that the certainty of beliefs derived this way should be pretty low. And in the case of robots, hardly any of this applies.
It is causal, but not infallible.
That’s your problem. Everyone else will still call it “the sun,” and when you say “the sun didn’t rise this morning,” your statement will still be false.
Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.
Ok, do you have any arguments to support that claim?
That may depend on the specific circumstances of the discovery. Also, different people can use the same words in different ways.
Arguments like what?
As I said, this is how people use the words.
Like yours, for example.
What words? The word “causal”? I’m asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?
I understand the difference, and I have no difficulties here. I said it was causal, not merely correlative.
Ok, do you have any arguments to support that it is causal?
As I said, this is how these words work, that is words like “chair” and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.
Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that’s not the same as influencing the classification.
So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn’t.
I am not talking about evidence, but about meaning; when we say, “this is a chair,” part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting.
I don’t know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing, so that if you made something in the shape of a hammer and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly by more closely matching the meaning of “chair.”
Well, I asked you almost that exact question, you quoted it, and replied with something other than “yes”. How was I supposed to interpret that?
So, if I find one chair-shaped rock, it’s not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?
I can understand why you wouldn’t want to call a chair shaped rock a “chair”. But you don’t have to involve the intention of the maker for this.
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague.
In particular, consider my answer to your next question, because it is basically the same thing again.
There is no guarantee of this, because the word “chair” is vague. But it is true that there would be more reason to call the second rock a chair—that is, the meaning of “chair” would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation.
In general, no, because the word “chair” does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on.
If you are not ignorant of how the word is used, you do have to involve the intention of the maker.
I think you already forgot how this particular part of the thread started. First I said that we had established that “X is false”, then you disagreed, then I pointed out that I had asked “is X true?” and you had no direct answer. Here I’m only asking you for a black and white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers to specific questions, I infer that they’re conceding.
What exactly do you mean by “vague”? The word “chair” refers to the category of chairs. Is the category itself “vague”?
I have been telling you from the beginning that the meanings of words are constructed individually and arbitrarily on a case by case basis. But you keep acting like there is some shared, absolute and natural concept of a chair. Apparently one that you have more knowledge of than I do. So I keep asking you specific questions about this concept. And finally, you seem to agree that you don’t actually know what the corner cases are or should be, but apparently that’s not because people use words as they please, but because this shared absolute and natural concept of a chair is “vague”, whatever that means.
We can talk more about what this has to do with consciousness when we get past the “language 101” stuff. By the way, this thread started here where you explicitly start talking about words and meanings, so that’s what we’re talking about.
You are correct that “I forgot”, in the sense that I don’t know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is “in some cases yes, in some cases no, depending on the particular circumstances.”
First of all, all words are vague, so there is no such thing as “what exactly do you mean by.” No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.
Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones.
It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some “absolute and natural concept of a chair,” and I have never suggested that there is.
This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)
First of all, you are the one who needs the “language 101” stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn’t. You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time.
You have been confusing the idea “this statement has a meaning” with “this statement is testable.” Those are two entirely separate things. Likewise, you have been confusing “this statement is vague” with “this statement is not testable.” These are two entirely separate things.
Consider a line of stars. The one at the left end is a red giant. The one at the right end is a white dwarf. In between, the stars each differ from the previous one by a single atom. Then you have a question of vagueness. When exactly do we stop calling them white dwarfs and start calling them red giants? There cannot possibly be a precise answer. This has nothing to do with testability; we can test whatever we want. The problem is that the terminology is vague, and there is no precise answer because it is vague.
Second, consider a line of stars outside the visible universe, except that some of the stars, on the right, are identical white dwarfs, and the ones to the left of them are identical red giants. Where exactly do the stars stop being white dwarfs and begin being red giants? This time, we cannot answer the question because there is no test to give us the answer. But vagueness is not an issue, because there is a sharp division between the two parts. We simply cannot find it by testing.
Third, consider a line of stars outside the visible universe, constructed as in the first case. This time, there are two problems: we cannot test where the boundary is, and the boundary is vague. These are two entirely different issues.
Fourth, consider a line of things where the one on the left is a statue, the one on the right is a human being, and somewhere in the middle there are robotic things. Each thing differs by a single atom from the thing on its left, and from the thing on its right.
Now we have the question: “The statue is not conscious. The human being is conscious. Is the central robot conscious?” There are two separate issues here. One is that we cannot test for consciousness. The second is that the word “conscious” is vague. These are two entirely separate issues, just as they are in the above cases of the stars.
Let us prove this. Suppose you are the human being on the right. We begin to modify you, one atom at a time, moving you to the left. Now the issue is testable: you can ask yourself whether you are conscious, and if you have any answer at all, or even if you manage to ask yourself the question, then you are conscious. Note that this is quite different from anyone else asking the thing if it is conscious, because the question “does this thing say it is conscious” is not the same as “is this thing conscious.” But being conscious is having a first person point of view, so if you can ask yourself anything, you are conscious. Unfortunately, long before you cease to be conscious, you will cease to be able to ask yourself any questions. So you will still not be able to find a definite boundary between conscious and not conscious. Nonetheless, this proves that testability is entirely separate from vagueness.
Well, that explains a lot. It’s not exactly ancient history, and everything is properly quoted, so you really should know what I’m talking about. Yes, it’s about the identical table-chairs question from the IKEA discussion, the one that I linked to just a few posts above.
Why are there no determinate boundaries though? I’m saying that boundaries are unclear only if you haven’t yet decided what they should be. But you seem to be saying that the boundaries inherently cannot be clear?
There is nothing vague about the results of factor analysis.
On this topic, last we seemed to have agreed that “arbitrary” classification means “without reasons related to the properties of the objects classified”. I don’t recall you ever giving any such reasons.
For example, you have said ‘”are tables also chairs” has a definite answer’. Note the word “definite”. You also keep insisting that there is factor analysis involved, which would also be an objective and natural way to assign objects to categories. By the way “natural” is the opposite of “arbitrary”.
Yeah, I recall saying something like that myself. And the rest of your claims don’t go well with this one.
Well, you decided that I need it, then made some wild and unsupported claims.
Yes, the two statements are largely equivalent. Oddly, I don’t recall you mentioning testability or measurability anywhere in this thread before (I think there was something in another thread though).
I don’t think I’ve done that. It’s unfortunate that after this you spent so much time trying to prove something I don’t really disagree with. Why did you think that I’m confusing these things? Please quote.
This is only as vague as you want it to be. If you want, you can cut the line, based on whatever reason, and call all the stars on one side “red giants” and the stars on the other side “white dwarfs”. It would be pointless, but there is nothing stopping you. You say “cannot possibly” and then give no reasons why.
I however have no problems with the vagueness here, because the two categories are only shorthands for some very specific properties of the stars (like mass). This is not true for “consciousness”.
It’s not a test if “no” is unobservable.
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.)
It is intrinsically vague: “Red giant” does not and cannot have precise boundaries, as is true of all words. The same is true of “White dwarf.” If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words.
The rest does not respond to the comparison about consciousness, and as I said we won’t be discussing the comments on language.
Again, you make a claim and then offer no arguments to support it. “Red giant” is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.
You started the language discussion, but I have to explain why we’re continuing it? I continue, because I suspect that the reasoning errors you’re making about chairs are similar to the errors you’re making about consciousness, and chairs are easier to talk about. But it’s only a suspicion. Also, I continue, because you’ve made some ridiculous claims and I’m not going to ignore them.
Assuming “experience” is a meaningful category.
You hit a classification problem, which is that you don’t know which similarities are relevant, and therefore what kind of group you are aiming at. If you don’t reject experience and feeling, you at least know that you are trying to group brains states that are experientially similar.
Yes, “feeling” and “experience”, are pretty much the same thing, I didn’t mean to imply otherwise in the text you quoted. Instead, the first sentence refers to your definition, and the second offers an alternative one.
There is a classification problem with tables and chairs. Generally, I know what chairs and tables are supposed to be like, but there are objects similar both to chairs and to tables, and there isn’t any obvious way to choose which of those similarities are more important. At this point someone could suggest that, if we started with the assumption that each object has an associated “chairness”, then that is what we would want to compare. I’m sure you can see why that doesn’t solve anything. And yet this is exactly what you do with experiences, from my point of view.
So they are either both meaningful, or both meaningless. But you have used “experience” as though it is meaningful, and you have implied that “feeling” is meaningless.
That was a predictable problem. Physical identity theory requires statements of the form “<mental state> is equivalent to <brain state>”. If you reject all vocabulary relating to mental states, you cannot make that kind of statement, and so cannot express identity theory.
Whereas, from my point of view, 1st person experience was there all along.
No, I used “experience” as a label. Let me rewrite that part:
That’s assuming that “experience”, as you use that word, is a meaningful category. If you didn’t start from that assumption, and instead defined experiences as brain states, you could …
Is that better? I understand that having two definitions and two similar but not identical concepts in one sentence is confusing. But still I expect you to figure it out. Was “identified” the problem?
What vocabulary relating to what mental states do I reject? Give examples.
Wasn’t “chairness” there too? More importantly, can you actually offer an argument why “chairness” doesn’t exist, other than pointing out that I just made it up? Preferably an argument that I couldn’t make about consciousness just by swapping a few words?
As opposed to what?
Does “meaningful” mean “meaningful” here, or is it being used as a misleading proxy for something like “immeasurable” or “unnecessary” or “tadasdatys doesn’t like it”?
You keep saying various words are meaningless. One would not want to use meaningless words, generally. OTOH, you have revealed elsewhere that you don’t use “meaningless” to mean “meaningless”. So who knows?
Consciousness is in the dictionary, chairness isn’t.
Consciousness is a concept used by science, chairness isn’t.
Consciousness is supported by empirical evidence, chairness isn’t.
It’s not that words are meaningless, it’s that you sometimes apply them in stupid ways. “Bitter” is a fine word, until you start discussing the “bitterness of purple”.
Are dictionary writers the ultimate arbiters of what is real? “Unicorn” is also in the dictionary, by the way.
The physicalist, medical definition of consciousness is used by science. You accuse me of changing definitions when it suits me, and then proceed to do exactly that. I guess that’s what projection looks like.
What evidence exactly? I have to assume my last paragraph applies here too.
If you can’t even come up with arguments why a silly concept I made up is flawed, maybe you shouldn’t be so certain in the meaningfulness of other concepts.
That the brain is not quiescent when experiencing pain is an objective fact. But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Please check out multiple realisability.
Because of that, no one can genuinely tell whether an advanced robot has genuine qualia. That includes you, although you are inclined to think that your subjective intuitions are objective knowledge.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
Sure, but what does that have to do with anything? Does “objective” mean “well understood” to you?
There are multiple representations of pain the same way that there are multiple representations of chair.
It is ridiculous how much of this debate is about the basic problem of classification, rather than anything to do with brains. Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counterargument. Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
Calling my reasoning, even if not fully formal, “subjective intuitions” seems rude. I’m not sure if there is some point you’re trying to express with that.
Not sure where you see me talking about intelligence. But intelligence is far more well defined and measurable than consciousness. Multiple realizability has nothing to do with that.
We do, on the other hand, know subjectively what pain feels like.
That’s not the point. The point is that if we have words referring to subjective sensations, like “purple” and “bitter”, we can distinguish them subjectively. But if we discard our subjective insight into them, as you are proposing, and replace them with vague objective descriptions (vague, because no one knows precise descriptions of the full gamut of atomic configurations which implement pain), then you take a step backwards. You can’t distinguish a brain-scan of someone seeing purple from a brain-scan of someone tasting bitter. Basing semantics on objective facts, or “reality” as you call it, only works if you know which fact is which. You are promoting something which sounds good, but doesn’t work as a research program. Of course it works just fine at getting applause from an audience of dualism-haters.
Are you talking about realisations or representations?
No one has made that argument. The point is not that it is not ultimately true that subjective states are brain states, it is that rejecting the subjective entirely, at this stage, is not useful. Quite the reverse. Consciousness is the only thing we know from the inside—why throw that away?
If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.
But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.
We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problems with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand, is that you believe these two methods to be radically different, when they are not. It’s as if you assume something is real, just because it comes out of people’s mouths.
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
Parts of my text are referring to the arguments I saw in Wikipedia under “multiple realizability”. But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.
I’m still waiting for your empirical reasons why “purple is not bitter”, or better yet, “purple is not a chair”, if you feel the concept of bitterness is too subjective.
But not much of an argument for using semantics grounded in (physical) reality. Doing so does not buy you maximum precision in absolute terms, and, what is worse, the alternative, of grounding terms for types of experience in 1st person experience, can give you more precision.
You may believe that, but do you know it?
The difference is that I accept the possibility that first person evidence could falsify 3rd person theory.
I’m not taking 1st person to mean 3rd person reports of (someone else’s) 1st person experience.
What sort of precision are you talking about? More generally, you’ve repeatedly said that the concept of consciousness is very useful. I don’t think I’ve seen that usefulness. I suspect that elaborating here is your best bet to convince me of anything. Although even if you did convince me of the usefulness of the term, that wouldn’t help the “robot pain” problem much.
That’s a slightly weird question. Is it somehow different from “why do you believe that” ? I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary. It’s very likely that “zapping” isn’t quite sufficient, depending on how generously you interpret that word. But the idea that something cannot be learned through physical experiment, demands a lot of serious evidence, to say the least.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain, than if written down on paper. Obviously, paper is slower and less accurate. But you seem to be implying a more fundamental difference between those two methods of data storage. Why is that?
I like type theory. Let X be what I’m sitting on. Type of X is “chair”, type of “chair” is “category”, a painting of X is a representation of X, it is not a representation of “chair”. Representations of “chair”, in the same sense that painting represents X might not exist. Somehow I’m quite comfortable saying that an object of type Y is what represents Y. “Instantiates” might be the best word (curiously though, google uses “represent” to define it). Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
I have said that actual experience is useful to pin down the meanings of words referring to experience.
Not at all. That there is a difference between belief and knowledge is very standard.
There’s an extensive literature of arguments to the contrary.
It is the idea that you can learn about the inward or 1st person by purely outward or 3rd person means that is contentious.
No, I am saying that my first person is me, and your first person is you. So my first person information is my experience, not someone else’s report of their experience.
Well, you said that the two R words mean the same thing, when by established usage they don’t. That looks like a source of confusion to me.
I assure you that none of the beliefs I state here were generated by flipping a coin. They are all to some extent justified. That’s why the question is weird—did you expect me to answer “no”?
There is extensive literature of arguments in favor of god or homeopathy. Doesn’t make those things real. Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments (likewise for god and homeopathy). However you seem to have read quite a bit more, and you haven’t raised my confidence in the value of that literature so far.
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences. I’m not saying that this hypothesis is true, I’m only saying that you don’t know it to be false. And if it did happen to be true, then your internal reasoning about your experiences would not be much different from my reasoning about your experiences written on paper (aside from the low precision of our language). Curiously, a physical experiment is more likely to test this hypothesis, than your internal reasoning.
It is a potential source of confusion, but that doesn’t mean it’s causing any right now. Maybe if we talked about representations such as paintings, it would cause some. Regardless, I’ll try to use the words you prefer. Debating their differences and similarities is very orthogonal to our main topic.
You said there was a “lack” of arguments to the contrary, and I pointed out that there wasn’t.
Then why didn’t you say lack of good arguments? And why didn’t you say what is wrong with them?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
“Magic” is not a helpful phrase.
You need to distinguish ontology and epistemology. Experiences and memories and so on have their physical correlates—ontology—but that does not mean you can comprehend them—epistemology. We might be able to find ways of translating between correlates and experience, but only if we don’t ignore experience as an epistemology. But, again, taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology, because epistemology and ontology are different.
Experience is experience, not reasoning about experience.
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with? Please explain your thought process, I really want to know. You see, productive debate requires some amount of generosity. I may not be polite, but I don’t think you’re illiterate or insane, and I don’t think I nitpick about things this obvious.
Maybe this is a symptom that you’re tired of the whole thread? You know you can stop whenever you want, right?
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves. That’s where my “baseless hypothesis” comes from.
To clarify, the hypothesis isn’t a direct response to something you said, it’s a new angle I want to look at, to help me understand what you’re talking about.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation. I realize that this might not be obvious. Though I feel that this is a natural use of the word.
That’s fine. There are some things that I’d want to pick on, although I’m not sure which of them are significant. But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Did you mean “consciousness”? To put it bluntly, if you haven’t heard of MR, there is probably a lot you don’t know about the subject.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
You yourself are blocking off the possibility of understanding consciousness , subjectivity and experience by refusing to allow them as prima-facie, pre-theoretic phenomena.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities, of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
I get that you hate, hate, hate dualism or anything else that threatens physical monism, but you can’t prove physical monism by begging the question against it. You are doing it no favours.
Nobody else has a problem with robot pain as a meaningful possibility. You do because you have removed the fist person from your definitions.
Heh. That’s fair.
If having experiences is an important part of consciousness, then I’d expect you to reason about them, what induces them, their components, their similarities and differences. This “consciousness in general” phrasing is extremely weird.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
Of course, I mean your methodology starts.
I’m not sure that changes anything.
Can you argue your point? I can argue mine.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities, of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
The question “where did you start” has some bad assumptions. Of course at first we all have to start from the same naive point. If we did arbitrarily start from different unrelated assumptions, expecting to agree on anything would be weird.
So, what happened is that I started from naive assumptions, and arrived at physicalism. Then when I ask myself a new question, I start from where I last stopped—discarding all of my progress would be weird.
You may think that dropping an initial assumption is inherently wrong, but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary. You might be able to convince me that I do need to keep some similar assumption for technical reasons, but that wouldn’t solve the “robot pain” problem.
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name. But when I drop consciousness, my life gets easier. How does that work?
There is a difference between a working hypothesis and an unfalsifiable dogma. It seems to you that there is nothing to explain about consciousness because you only accept 3rd-person empirical data, because of your ontology.
Could you explain what assumption you are dropping, and why, without using the word “magical”?
I’d prefer if you settled on one claim.
That would be the problem for which there is no evidence except your say-so.
You can function practically without a concept of gravity, as people before Newton did. What you can get away with theoretically depends on what you are trying to explain. Perhaps there is a gravity sceptic out there somewhere insisting that “falling object” is a meaningless term, and that gravity is magic.
Is my position less falsifiable than yours? No, most statements about consciousness are unfalsifiable. I think that’s a strong hint that it’s a flawed concept.
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head. I dropped it because I found that physicalism explains everything better. “Better” doesn’t mean that I have all the answers about anything, it just means that the answers consciousness gives are even worse.
I don’t understand what your problem with “magical” is?
Well, I suppose an assumption could be unnecessary without being meaningless, so the words aren’t identical, but I do refer to the same thing, when I use them in this context. I also recall explaining how a “meaningless” statement can be considered “false”. The question is, why are you so uncomfortable with paraphrasing? Do you feel that there are some substantial differences? Honestly, I mostly do this to clarify what I mean, not to obscure it.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do. That’s a pretty big problem, regardless of what I say. Now, when I ask if this or that idea solves “robot pain” problem, I’m not asking if it produces an actual test, I just ask for a smallest hint that maybe the test could exist.
That’s ridiculous. The mathematical law of gravity was written down by Newton, but the concept of gravity, in the sense that “things fall down”, is something most animals have. Do you literally think that nobody noticed gravity before Newton?
That’s not the problem.
The assumption is more that consciousness is something that needs explaining,
That’s wrong. If you can put a truth-value on a sentence, it is meaningful.
I think it is better to express yourself using words that mean what you are trying to express.
Yes. “Meaningless”, “immeasurable”, “unnecessary” and “non-existent” all mean different things.
I think it is likely that your entire argument is based on vagueness and semantic confusion.
There is a real problem of not being able to test for a pain sensation directly.
Why did it take you so long to express it that way? Perhaps the problem is this:
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”. Perhaps you have to use vagueness and confusion to make the invalid inference seem valid.
Wow, so you agree with me here? Is it not a problem to you at all, or just not “the” problem?
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement “invisible unicorns are purple” is meaningless. The words aren’t all exactly the same, but that doesn’t mean they aren’t all appropriate.
A long long time ago you wrote: You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned into a problem with the word “pain”. So I assumed you understood that immeasurability is relevant here. Did you then forget?
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
Which issues exactly?
Why not? Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist? Does “‘robot pain’ is meaningless” follow from the same better?
Meaningfulness, existence, etc.
Huh? It’s perfectly good as a standalone statement, it’s just that it doesn’t have much to do with meaning or measurability.
Not really, because you haven’t explained why meaning should depend on measurability.
It is evident that this is a major source of our disagreement. Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods.
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless).
Where is this going? You can’t stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
A bit too vague. Can I clarify that as “Useless for communication, because it transfers no information”? Even though that’s a bit too strict.
What is stopping me from assigning them truth values? I’m sure you meant, “meaningless statements cannot be proven or disproven”. But “proof” is a problematic concept. You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument. Anyway, isn’t (1.) enough?
It’s still entirely about meaning, measurability and existence. I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not and what it could follow from. Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”. Or maybe “invisible unicorns cannot be detected” does not follow from “we have no arguments suggesting that maybe ‘invisible unicorns’ could be something detectable”?
The fact that you can’t understand them.
If you can understand a statement as asserting the existence of something, it isn’t meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions… don’t.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
Because it needs premises along the lines of “what is not measurable is meaningless” and “what is meaningless is false”, but you have not been able to argue for either (except by gerrymandered definitions).
There’s an important difference between stipulating something to be undetectable … in any way, forever … and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is “true” in some way that has nothing to do with reality.
I’m trying to understand your definitions and how they’re different from mine.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
Well, you used it.
It’s bad because there’s nothing inside the box. It’s just an a priori argument.
I can also use “ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
The implicit argument is that meaning/communication is not restricted to literal truth.
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam’s razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue for your “robot pain”, but I think it’s alright.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
Very low finite rather than infinitesimal or zero.
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a restriction. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I’d have to make up an argument why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws—it is devoid of any traces of meaningfulness.
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.
I doubt that’s a good thing. It hasn’t been very productive so far.
“Seriously, if you have no arguments, then don’t respond.”
People who live in glass houses shouldn’t throw stones.
It means “does not have a meaning.”
In general, it doesn’t apply to grammatically correct sentences, and definitely not to statements. It’s possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted.
If you can ask the question, “How do you know?”, then the thing has a meaning. I will show you an example of something meaningless:
faheuh fr dhwuidfh d dhwudhdww
Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don’t know it, then the thing has a meaning.
I’m sure you can see how unhelpful this is.
No.
It only explains the “-less” suffix. It’s fine as a dictionary definition, but that’s obviously not what I asked for. I need you to explain “meaning” as well.
You need no such thing, and as I said, we won’t be continuing the discussion of language until you show it has something to do with consciousness.
Noam Chomsky wrote “Colorless green ideas sleep furiously” in 1955.
Ideas don’t sleep, so they don’t sleep furiously. The sentence is false, not meaningless.
This topic has been discussed, fairly extensively.
Yes. No one has shown that it is meaningless and it pretty obviously is not.
That’s a definitions argument, isn’t it? Under some ideas of what “meaning”, well, means, such sentences are meaningful; under others they are not.
The problem with that is that if the word “meaning” has several meanings you will have a situation like this:
Suppose the word “meaning” has two meanings, A & B. But then we can ask what the word “meanings” means in the previous sentence: does it mean A, or B? If you answer that it means A, then the word “meaning” might have two meanings in the A sense, but five meanings in the B sense. But then we can ask what the word “meanings” means in the previous statement. And it might turn out that if the word “meanings” is taken in the B sense, the statement (about 2 and 5) is only true if we take the fourth meaning of the B sense, while in the 3rd sense, it has 7 meanings in the A sense, and 2 meanings in the B sense. And so on, ad infinitum.
All of that means that we have to accept a basic sense of meaning which comes before all the others if we want to talk about meaning at all. And in that basic sense, statements like that obviously have a meaning, whereas ones like “shirwho h wehjoeihqw dhfufh sjs” do not.
Your comment boils down to “It’s complicated, but I’m obviously right”. It’s not a very convincing argument.
Meaning is complicated. It is a function of at least four variables: the speaker, the listener, the message, and the context. It’s also well-trodden ground over which herds of philosophers regularly stampede and everything with the tag of “obviously” has been smashed into tiny little pieces by now.
You’re right about the “I’m obviously right” part, but not the rest. It boils down to “you have to start somewhere.” You can’t start out with many meanings of “meaning”, otherwise you don’t know what you mean by “meanings” in the sentence “I am starting out with many meanings of meaning.” You have to start with one meaning, and in that case you can know what you mean when you say “I am starting with one meaning of meaning.”
“eventually I found them unnecessary and unattractive”
It is typically considered unnecessary and unattractive to assert that the Emperor is naked.
There’s that word again.
Do you prefer “naive”? Not exactly the same thing, but similar.
The chair you are sitting on is a realisation; Van Gogh’s painting of his chair at Arles is a representation. You can’t sit on it.
That’s very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn’t have phenomenal properties, how do you decide which set of brain states get labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined that everything must be treated as physics from the outset, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable things. Everything in computer science is MR—all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that “exist apart” from their realisations. So I don’t know where you are getting that from.
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about MR.
Colour and taste are different categories, therefore category error.
No, I’m treating the identity of pain with the memories, thoughts and behaviors that express pain, as unfalsifiable. In other words, I loosely define pain as “the thing that makes you say ouch”. That’s how definitions work—the theory that the thing I’m sitting on is a chair is also unfalsifiable. At that point the identity of pain with brain states is in principle falsifiable: you just induce the same state in two brains and observe only one saying ouch. Obviously, there are various difficulties with that exact scheme; it’s just a general sketch of how causality can be falsified.
I don’t recall suggesting that something isn’t MR. I don’t know why you think that MR is a problem for me. Like I said, there are multiple realizations of pain the same way that there are multiple realizations of chair.
Is that supposed to be a novel theory, or a dictionary definition?
You’re suggesting pain can’t be instantiated in robots.
Definition, as I state right in the next sentence, and then confirm in the one after that. Is my text that unreadable?
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question. In the same way that wondering if some table can also be a chair is a stupid question. If you feel that you need an answer, feel free to choose arbitrarily. But then, if you think that having an answer helps you somehow, you’re probably doing something very wrong.
In the case of a simulated human brain, it might seem more natural to call those states “pain”, but then if you don’t, nobody will be able to prove you wrong.
The question asked for a dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification using falsifiable empiricism. Uncontroversially, you also can achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard—stipulated, gerrymandered, tendentious—definitions is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, and are not in fact talking about free will.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way... you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain” , it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Definitions aren’t handed from god in stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
Who are you communicating to when you use your own definitions?
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Says you. Why should I believe that?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”, that would be ridiculous; the problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition, we have the usual classification problem, and with your definition the phrase is meaningless.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definition to the best of my ability, and point out the problems in those. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Solve it , then.
Prove that.
But using them proves nothing?
I am wondering who you communicate with when you use a private language.
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked, suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
I feel like we’ve talked about this. In fact, here: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvhm
Remember when you offered a stupid proof that “purple is bitter” is category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
Yes, definitions do not generally prove statements.
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be convincing. If you change the definition of pain to remove the subjective, felt aspect, then the resulting problem is easy to solve... but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other members of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements , you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me, how it makes sense to you.
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases, we already have X well defined as Z and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
Yes, that’s because your language is broken.
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
That can’t possibly work, as entirelyuseless has explained.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
The ordinary definition for pain clearly does exist, if that is what you mean.
Prove it.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Is that a fact or an opinion?
“highly unpleasant physical sensation caused by illness or injury.”
Have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here”—has two meanings:
One is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too. Another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
Of course, now I’ll say that I need “sensation” defined.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
You used the word, surely you meant something by it.
Proper as in proper Scotsman?
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
What?
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
In this case, “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
You keep saying it’s a broken concept.
That anything should feel like anything.
Circular as in
“Everything is made of matter. matter is what everything is made of.” ?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Yes, if I had actually said that. By the way, matter exists in your universe too.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If you want to know what “pain” means, sit on a thumbtack.
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
As i have previously pointed out, you cannot assume meaninglessness as a default.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm how exactly that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
Are there classes of conscious entity?
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, a tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
Where exactly?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The general idea, that in models of physics it can be hard to tell which features are detectable and which are just mathematical machinery, is a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
That’s cute.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?
“Please give an example of a subjective experience, other than consciousness, that has no physical evidence.”
All subjective experiences, including consciousness, are correlated with objective descriptions. E.g. a person who is awake can be described in ways objectively distinct from a person who is asleep. So there is always evidence for subjective experience. But that does not reduce the meaning of having a subjective experience to some objective description.
So for example “I am conscious” does not signify any objective description, but is correlated with various objective descriptions. Likewise, “I currently seem to see a blue object,” does not signify any objective description, but it is correlated with various objective descriptions.
Evidence is only evidence, in the full sense of the term, if you can interpret it.
The things are correlated. For example, every time I am awake and conscious, I have a relatively undamaged brain. So if someone else has an undamaged brain and does not appear to be sleeping, that is evidence that they are conscious.