Is that supposed to be a novel theory, or a dictionary definition?
Definition, as I state right in the next sentence
The question asked for a dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification, using falsifiable empiricism. Uncontroversially, you can also achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard definitions—stipulated, gerrymandered, tendentious—is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, and are not in fact talking about free will.
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with behaviors similar to human expressions of pain is a stupid question.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low-grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way...you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you has something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
the redefinition manoeuvre
Definitions aren’t handed from god in stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
And how are you justifying that suggestion?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
Presumably, the fact that “robot in pain” seems weird to you has something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
No, my definition of pain (the “thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
Definitions aren’t handed from god in stone tablets
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Who are you communicating with when you use your own definitions?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question?
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain.
Says you. Why should I believe that?
No, my definition of pain (the “thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course).
Are you abandoning the position that “robot in pain” is meaningless in all cases?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”, that would be ridiculous; the problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition we have the usual classification problem, and with your definition the phrase is meaningless.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
Who are you communicating with when you use your own definitions?
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definitions to the best of my ability, and point out the problems in them. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked, suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
Remember when you offered a stupid proof that “purple is bitter” is a category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
But using them proves nothing?
Yes, definitions do not generally prove statements.
I am wondering who you communicate with when you use a private language
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be convincing. If you change the definition of pain to remove the subjective, felt aspect, then the resulting problem is easy to solve...but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other members of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements, you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Meaninglessness is not the default.
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me how it makes sense to you.
Other members of your language community are willing to discuss things like robot pain. Does that bother you?
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
If definitions do not prove statements, you have no proof that robot pain is easy.
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases we already have X well defined as Z, and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
If you redefine pain, you are not making statements about pain in my language.
I am, at times, talking about alternative definitions
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
Meaninglessness is not the default.
Well, it should be
That can’t possibly work, as entirelyuseless has explained.
Sure, in a similar way that people discussing god or homeopathy bothers me.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
in your case the definition Z does not exist, so making up a new one is the next best thing.
The ordinary definition for pain clearly does exist, if that is what you mean.
Robot pain is of ethical concern because pain hurts.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
The ordinary definition for pain clearly does exist, if that is what you mean.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here”—has two meanings: one is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too.
another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
“highly unpleasant physical sensation caused by illness or injury.”
Of course, now I’ll say that I need “sensation” defined.
have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions for how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Obviously, anything can be of ethical concern, if you really want it to be
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
is “the concept of preference is simpler than the concept of consciousness”
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
“consciousness is generally not necessary to explain morality”, which is more of an opinion.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, now I’ll say that I need “sensation” defined.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
That’s because I have never considered “Is X a concept” to be an interesting question.
You used the word; surely you meant something by it.
At that point proper definitions become necessary.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
and also don’t want to talk about consciousness.
What?
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
I’ll need “defined” defined
In this case, “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your apriori ontology.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
You used the word; surely you meant something by it.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as in proper Scotsman?
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
and also don’t want to talk about consciousness.
What?
You keep saying it is a broken concept.
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain?
That anything should feel like anything.
Proper as in proper Scotsman?
Proper as not circular.
Circular as in
“Everything is made of matter. Matter is what everything is made of.”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
That anything should feel like anything.
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Circular as in
“Everything is made of matter. Matter is what everything is made of.”?
Yes, if I had actually said that. By the way, matter exists in your universe too.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If it refers to something else, then I’ll need you to paraphrase.
If you want to know what “pain” means, sit on a thumbtack.
You can say “torture is wrong”, but that has no implications about the physical world
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
They only need to know about robot pain if “robot pain” is a phrase that describes something.
As I have previously pointed out, you cannot assume meaninglessness as a default.
morality, which has many of the same problems as consciousness, and is even less defensible.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
That is a start, but we can’t gather data from entities that cannot speak
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t give the exact algorithm for how that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
Are there classes of conscious entity?
Morality or objective morality? They are different.
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t give the exact algorithm for how that would work.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
but I don’t necessarily understand what it would mean for a different kind of mind.
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
It seems you are no longer ruling out a science of other minds
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
I’ve already told you what it would mean
Where exactly?
Is the first half of the conversation meaningful and the second half meaningless?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The idea that, with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery is in general a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic).
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?
Solve it, then.
Prove that.
But using them proves nothing?
I am wondering who you communicate with when you use a private language.
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked, suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
I feel like we’ve talked about this. In fact, here: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvhm
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases, we already have X well defined as Z and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
Yes, that’s because your language is broken.
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
That can’t possibly work, as entirelyuseless has explained.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
The ordinary definition for pain clearly does exist, if that is what you mean.
Prove it.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Is that a fact or an opinion?
“highly unpleasant physical sensation caused by illness or injury.”
Have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also, the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here” has two meanings:
One is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too. Another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
Of course, now I’ll say that I need “sensation” defined.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
You used the word; surely you meant something by it.
Proper as in proper scotsman?
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
What?
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
In this case, “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
You keep saying it’s a broken concept.
That anything should feel like anything.
Circular as in “Everything is made of matter. Matter is what everything is made of”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Yes, if I had actually said that. By the way, matter exists in your universe too.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If you want to know what “pain” means, sit on a thumbtack.
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
As I have previously pointed out, you cannot assume meaninglessness as a default.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain exactly how that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
Are there classes of conscious entity?
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
Where exactly?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The idea that, in models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery is in general a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
That’s cute.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?