That the brain is not quiescent when experiencing pain is an objective fact. But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Please check out multiple realisability.
Because of that, no one can tell whether an advanced robot has genuine qualia. That includes you, although you are inclined to think that your subjective intuitions are objective knowledge.
But the question “do robots feel pain” is as interesting and meaningful as “are tables also chairs”.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Sure, but what does that have to do with anything? Does “objective” mean “well understood” to you?
multiple realisability
There are multiple representations of pain the same way that there are multiple representations of chair.
It is ridiculous how much of this debate is about the basic problem of classification, rather than anything to do with brains. Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counter-argument. Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
your subjective intuitions
Calling my reasoning, even if not fully formal, “subjective intuitions” seems rude. I’m not sure if there is some point you’re trying to express with that.
You accept multiple realisability for intelligence, but not for consciousness. That is arbitrary.
Not sure where you see me talking about intelligence. But intelligence is far better defined and measurable than consciousness. Multiple realizability has nothing to do with that.
But no one knows precise descriptions of the full gamut of atomic configurations which implement pain.
Sure, but what does that have to do with anything?
We do, on the other hand, know subjectively what pain feels like.
Does “objective” mean “well understood” to you?
That’s not the point. The point is that if we have words referring to subjective sensations, like “purple” and “bitter”, we can distinguish them subjectively. But if we discard our subjective insight into them, as you are proposing, and replace them with vague objective descriptions—vague, because no one knows precise descriptions of the full gamut of atomic configurations which implement pain—then you take a step backwards. You can’t distinguish a brain scan of someone seeing purple from a brain scan of someone tasting bitter. Basing semantics on objective facts, or “reality” as you call it, only works if you know which fact is which. You are promoting something which sounds good, but doesn’t work—as a research program. Of course it works just fine at getting applause from an audience of dualism-haters.
multiple realisability
There are multiple representations
Are you talking about realisations or representations?
Flawed reasoning starts with a postulate that “Pain” exists and then asks what physical states correspond to it. And when told that “pain is the activity in region X”, it somehow feels that “activity in Y could also be described as pain” is a counter-argument.
No one has made that argument. The point is not that it is not ultimately true that subjective states are brain states, it is that rejecting the subjective entirely, at this stage, is not useful. Quite the reverse. Consciousness is the only thing we know from the inside—why throw that away?
Good reasoning starts with noticing that people say “ouch” when they stub their toes, or that stubbing a toe has a very distinct feeling, and then asks what causes/predicts these actions/differences, and then wonders how we could best classify these.
If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.
But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.
We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.
We do, on the other hand, know subjectively what pain feels like.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
You can’t distinguish a brain scan of someone seeing purple from a brain scan of someone tasting bitter.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problem with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand it, is that you believe these two methods to be radically different, when they are not. It’s as if you assume something is real, just because it comes out of people’s mouths.
realisations or representations
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
No one has made that argument.
Parts of my text are referring to the arguments I saw on Wikipedia under “multiple realizability”. But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.
and your reasons for saying that are entirely non-empirical
I’m still waiting for your empirical reasons why “purple is not bitter”, or better yet, “purple is not a chair”, if you feel the concept of bitterness is too subjective.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
But not much of an argument for using semantics grounded in (physical) reality. Doing so does not buy you maximum precision in absolute terms, and, what is worse, the alternative, of grounding terms for types of experience in 1st-person experience, can give you more precision.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires.
You may believe that, but do you know it?
But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problem with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand it, is that you believe these two methods to be radically different, when they are not.
The difference is that I accept the possibility that first-person evidence could falsify third-person theory.
It’s as if you assume something is real, just because it comes out of people’s mouths.
I’m not taking 1st person to mean 3rd person reports of (someone else’s) 1st person experience.
Doing so does not buy you maximum precision in absolute terms
What sort of precision are you talking about? More generally, you’ve repeatedly said that the concept of consciousness is very useful. I don’t think I’ve seen that usefulness. I suspect that elaborating here is your best bet to convince me of anything. Although even if you did convince me of the usefulness of the term, that wouldn’t help the “robot pain” problem much.
You may believe that, but do you know it?
That’s a slightly weird question. Is it somehow different from “why do you believe that”? I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary. It’s very likely that “zapping” isn’t quite sufficient, depending on how generously you interpret that word. But the idea that something cannot be learned through physical experiment demands a lot of serious evidence, to say the least.
I’m not taking 1st person to mean 3rd person reports of 1st person experience.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain, than if written down on paper. Obviously, paper is slower and less accurate. But you seem to be implying a more fundamental difference between those two methods of data storage. Why is that?
A realisation of type X has type X, a representation of type X has type “representation”.
I like type theory. Let X be what I’m sitting on. The type of X is “chair”, the type of “chair” is “category”; a painting of X is a representation of X, it is not a representation of “chair”. Representations of “chair”, in the same sense that the painting represents X, might not exist. Somehow I’m quite comfortable saying that an object of type Y is what represents Y. “Instantiates” might be the best word (curiously though, Google uses “represent” to define it). Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
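For concreteness, here is roughly what that distinction looks like with the types written out (a toy sketch in Haskell; every name in it is invented for illustration, not something either of us has committed to):

```haskell
-- A realisation of Chair has type Chair; a representation of a Chair
-- has type Representation Chair, and you cannot sit on it.
data Chair = Chair { legs :: Int }

-- The type parameter records what is represented; the value is only a caption.
newtype Representation a = Representation { caption :: String }

sitOn :: Chair -> String
sitOn _ = "sitting"

chairAtArles :: Representation Chair
chairAtArles = Representation "Van Gogh's chair at Arles"

main :: IO ()
main = do
  putStrLn (sitOn (Chair 4))        -- fine: a realisation has the type itself
  -- putStrLn (sitOn chairAtArles)  -- type error: a representation does not
  putStrLn (caption chairAtArles)
```

Written this way, the claim in the previous line is just that sitOn accepts a Chair but never a Representation Chair; the types differ even if everyday use of the two R words is loose.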
More generally, you’ve repeatedly said that the concept of consciousness is very useful
I have said that actual experience is useful to pin down the meanings of words referring to experience.
You may believe that, but do you know it?
That’s a slightly weird question
Not at all. That there is a difference between belief and knowledge is very standard.
I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary.
There’s an extensive literature of arguments to the contrary.
But the idea that something cannot be learned through physical experiment demands a lot of serious evidence, to say the least.
It is the idea that you can learn about the inward or 1st person by purely outward or 3rd person means that is contentious.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain, than if written down on paper.
No, I am saying that my first person is me, and your first person is you. So my first person information is my experience, not someone else’s report of their experience.
Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
Well, you said that the two R words mean the same thing, when by established usage they don’t. That looks like a source of confusion to me.
Not at all. That there is a difference between belief and knowledge is very standard.
I assure you that none of the beliefs I state here were generated by flipping a coin. They are all to some extent justified. That’s why the question is weird—did you expect me to answer “no”?
There’s an extensive literature of arguments to the contrary
There is extensive literature of arguments in favor of god or homeopathy. Doesn’t make those things real. Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments (likewise for god and homeopathy). However you seem to have read quite a bit more, and you haven’t raised my confidence in the value of that literature so far.
my first person information is my experience, not someone else’s report of their experience.
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences. I’m not saying that this hypothesis is true, I’m only saying that you don’t know it to be false. And if it did happen to be true, then your internal reasoning about your experiences would not be much different from my reasoning about your experiences written on paper (aside from the low precision of our language). Curiously, a physical experiment is more likely to test this hypothesis, than your internal reasoning.
That looks like a source of confusion to me.
It is a potential source of confusion, but that doesn’t mean it’s causing any right now. Maybe if we talked about representations such as paintings, it would cause some. Regardless, I’ll try to use the words you prefer. Debating their differences and similarities is very orthogonal to our main topic.
There’s an extensive literature of arguments to the contrary
There is extensive literature of arguments in favor of god or homeopathy.
You said there was a “lack” of arguments to the contrary, and I pointed out that there wasn’t.
Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments
Then why didn’t you say lack of good arguments? And why didn’t you say what is wrong with them?
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences.
“Magic” is not a helpful phrase.
You need to distinguish ontology and epistemology. Experiences and memories and so on have their physical correlates—ontology—but that does not mean you can comprehend them—epistemology. We might be able to find ways of translating between correlates and experience, but only if we don’t ignore experience as an epistemology. But, again, taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology because epistemology and ontology are different.
internal reasoning about your experiences
Experience is experience, not reasoning about experience.
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with? Please explain your thought process, I really want to know. You see, productive debate requires some amount of generosity. I may not be polite, but I don’t think you’re illiterate or insane, and I don’t think I nitpick about things this obvious.
Maybe this is a symptom that you’re tired of the whole thread? You know you can stop whenever you want, right?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves. That’s where my “baseless hypothesis” comes from.
To clarify, the hypothesis isn’t a direct response to something you said, it’s a new angle I want to look at, to help me understand what you’re talking about.
“Magic” is not a helpful phrase.
Oh, I just use “magic” to refer to immeasurable, irreproducible things that can’t be understood through experimentation. I realize that this might not be obvious. Though I feel that this is a natural use of the word.
taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology because epistemology and ontology are different.
That’s fine. There are some things that I’d want to pick on, although I’m not sure which of them are significant. But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with?
Did you mean “consciousness”? To put it bluntly, if you haven’t heard of MR, there is probably a lot you don’t know about the subject.
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
Oh, I just use “magic” to refer to immeasurable, irreproducible things that can’t be understood through experimentation.
You yourself are blocking off the possibility of understanding consciousness, subjectivity and experience by refusing to allow them as prima-facie, pre-theoretic phenomena.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
I get that you hate, hate, hate dualism or anything else that threatens physical monism, but you can’t prove physical monism by begging the question against it. You are doing it no favours.
But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Nobody else has a problem with robot pain as a meaningful possibility. You do because you have removed the first person from your definitions.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
If having experiences is an important part of consciousness, then I’d expect you to reason about them, what induces them, their components, their similarities and differences. This “consciousness in general” phrasing is extremely weird.
Starting there means discarding any other kind of prima-facie evidence.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
The question “where did you start” has some bad assumptions. Of course at first we all have to start from the same naive point. If we did arbitrarily start from different unrelated assumptions, expecting to agree on anything would be weird.
So, what happened is that I started from naive assumptions, and arrived at physicalism. Then when I ask myself a new question, I start from where I last stopped—discarding all of my progress would be weird.
You may think that dropping an initial assumption is inherently wrong, but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary. You might be able to convince me that I do need to keep some similar assumption for technical reasons, but that wouldn’t solve the “robot pain” problem.
The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name. But when I drop consciousness, my life gets easier. How does that work?
I start from where I last stopped—discarding all of my progress would be weird.
There is a difference between a working hypothesis and an unfalsifiable dogma. It seems to you that there is nothing to explain about consciousness because you only accept 3rd-person empirical data, because of your ontology.
You may think that dropping an initial assumption is inherently wrong,
Could you explain what assumption you are dropping, and why, without using the word magical?
but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary.
I’d prefer if you settled on one claim.
the “robot pain” problem.
That would be the problem for which there is no evidence except your say-so.
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name.
You can function practically without a concept of gravity, as people before Newton did. What you can get away with theoretically depends on what you are trying to explain. Perhaps there is a gravity sceptic out there somewhere insisting that “falling object” is a meaningless term, and that gravity is magic.
There is a difference between a working hypothesis and an unfalsifiable dogma.
Is my position less falsifiable than yours? No, most statements about consciousness are unfalsifiable. I think that’s a strong hint that it’s a flawed concept.
Could you explain what assumption you are dropping, and why, without using the word magical?
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head. I dropped it because I found that physicalism explains everything better. “Better” doesn’t mean that I have all the answers about anything, it just means that the answers consciousness gives are even worse.
I don’t understand what your problem with “magical” is.
I’d prefer if you settled on one claim.
Well, I suppose an assumption could be unnecessary without being meaningless, so the words aren’t identical, but I do refer to the same thing when I use them in this context. I also recall explaining how a “meaningless” statement can be considered “false”. The question is, why are you so uncomfortable with paraphrasing? Do you feel that there are some substantial differences? Honestly, I mostly do this to clarify what I mean, not to obscure it.
That would be the problem for which there is no evidence except your say-so.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do. That’s a pretty big problem, regardless of what I say. Now, when I ask if this or that idea solves the “robot pain” problem, I’m not asking if it produces an actual test, I just ask for the smallest hint that maybe the test could exist.
You can function practically without a concept of gravity, as people before Newton did.
That’s ridiculous. The mathematical law of gravity was written down by Newton, but the concept of gravity, in the sense that “things fall down”, is something most animals have. Do you literally think that nobody noticed gravity before Newton?
most statements about consciousness are unfalsifiable
That’s not the problem.
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head.
The assumption is more that consciousness is something that needs explaining.
I also recall explaining how a “meaningless” statement can be considered “false”.
That’s wrong. If you can put a truth-value on a sentence, it is meaningful.
The question is, why are you so uncomfortable with paraphrasing?
I think it is better to express yourself using words that mean what you are trying to express.
Do you feel that there are some substantial differences?
Yes. “Meaningless”, “immeasurable”, “unnecessary” and “non-existent” all mean different things.
Honestly, I mostly do this to clarify what I mean, not to obscure it.
I think it is likely that your entire argument is based on vagueness and semantic confusion.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do.
There is a real problem of not being able to test for a pain sensation directly.
Why did it take you so long to express it that way? Perhaps the problem is this:
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”. Perhaps you have to use vagueness and confusion to make the invalid inference seem valid.
Wow, so you agree with me here? Is it not a problem to you at all, or just not “the” problem?
Yes. “Meaningless”, “immeasurable”, “unnecessary” and “non-existent” all mean different things.
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement “invisible unicorns are purple” is meaningless. The words aren’t all exactly the same, but that doesn’t mean they aren’t all appropriate.
Why did it take you so long to express it that way?
A long long time ago you wrote: You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word “pain”. So I assumed you understood that immeasurability is relevant here. Did you then forget?
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”.
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
So I assumed you understood that immeasurability is relevant here
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”.
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly?
No, still not from that.
Why not? Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist? Does “‘robot pain’ is meaningless” follow from the same premise better?
but you have brought in a bunch of different issues without explaining how they interrelate
Which issues exactly
Meaningfulness, existence, etc.
Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist?
Huh? It’s perfectly good as a standalone statement, it’s just that it doesn’t have much to do with meaning or measurability.
Does “‘robot pain’ is meaningless” follow from the [we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific] better?
Not really, because you haven’t explained why meaning should depend on measurability.
It is evident that this is a major source of our disagreement. Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
It’s perfectly good as a standalone statement
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods.
Can you define “meaningless” for me, as you understand it?
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless).
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods
Where is this going? You can’t stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
A bit too vague. Can I clarify that as “Useless for communication, because it transfers no information”? Even though that’s a bit too strict.
Meaningless statements cannot have truth values assigned to them.
What is stopping me from assigning them truth values? I’m sure you meant, “meaningless statements cannot be proven or disproven”. But “proof” is a problematic concept. You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument. Anyway, isn’t (1.) enough?
Where is this going?
It’s still entirely about meaning, measurability and existence. I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not and what it could follow from. Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”. Or maybe “invisible unicorns cannot be detected” does not follow from “we have no arguments suggesting that maybe ‘invisible unicorns’ could be something detectable”?
What is stopping me from assigning them truth values?
The fact that you can’t understand them.
You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument.
If you can understand a statement as asserting the existence of something, it isn’t meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don’t.
I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not
Because it needs premises along the lines of “what is not measurable is meaningless” and “what is meaningless is false”, but you have not been able to argue for either (except by gerrymandered definitions).
Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”
There’s an important difference between stipulating something to be undetectable … in any way, forever … and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is “true” in some way that has nothing to do with reality.
What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don’t.
I’m trying to understand your definitions and how they’re different from mine.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
What happens if a robot pain detector is invented tomorrow?
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
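To be concrete, the whole detector is something like this (a sketch; the Robot type is a stand-in for any robot whatsoever):

```haskell
-- The entire "robot pain detector": it ignores its input and always says no.
data Robot = Robot

robotPainDetector :: Robot -> Bool
robotPainDetector _ = False

main :: IO ()
main = print (robotPainDetector Robot)  -- prints False, for every possible robot
```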
In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Well, you used it.
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad.
It’s bad because there’s nothing inside the box. It’s just an a priori argument.
I can also use “ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
What happens if a robot pain detector is invented tomorrow?
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
If you have no arguments, then don’t respond.
The implicit argument is that meaning/communication is not restricted to literal truth.
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow?
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam’s razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Now you imply that they possibly could be detected, in which case I withdraw my original claim
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue for your “robot pain”, but I think it’s alright.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
I asked you before to propose a meaningless statement of your own.
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
Very low: finite rather than infinitesimal or zero.
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a restriction. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
a contradiction, like “colourless green”
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
starts with your not knowing something, how to detect robot pain
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I’d have to make up an argument why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws—it is devoid of any traces of meaningfulness.
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.
Can you define “meaningless” for me, as you understand it?
It means “does not have a meaning.”
In particular, how it applies to grammatically correct statements.
In general, it doesn’t apply to grammatically correct sentences, and definitely not to statements. It’s possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted.
How do you know?
If you can ask the question, “How do you know?”, then the thing has a meaning. I will show you an example of something meaningless:
faheuh fr dhwuidfh d dhwudhdww
Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don’t know it, then the thing has a meaning.
It only explains the “-less” suffix. It’s fine as a dictionary definition, but that’s obviously not what I asked for. I need you to explain “meaning” as well.
The problem with that is that if the word “meaning” has several meanings you will have a situation like this:
Suppose the word “meaning” has two meanings, A & B. But then we can ask what the word “meanings” means in the previous sentence: does it mean A, or B? If you answer that it means A, then the word “meaning” might have two meanings in the A sense, but five meanings in the B sense. But then we can ask what the word “meanings” means in the previous statement. And it might turn out that if the word “meanings” is taken in the B sense, the statement (about 2 and 5) is only true if we take the fourth meaning of the B sense, while in the 3rd sense, it has 7 meanings in the A sense, and 2 meanings in the B sense. And so on, ad infinitum.
All of that means that we have to accept a basic sense of meaning which comes before all the others if we want to talk about meaning at all. And in that basic sense, statements like that obviously have a meaning, whereas ones like “shirwho h wehjoeihqw dhfufh sjs” do not.
we have to accept a basic sense … And in that basic sense, statements like that obviously have a meaning
Your comment boils down to “It’s complicated, but I’m obviously right”. It’s not a very convincing argument.
Meaning is complicated. It is a function of at least four variables: the speaker, the listener, the message, and the context. It’s also well-trodden ground over which herds of philosophers regularly stampede and everything with the tag of “obviously” has been smashed into tiny little pieces by now.
Your comment boils down to “It’s complicated, but I’m obviously right”.
You’re right about the “I’m obviously right” part, but not the rest. It boils down to “you have to start somewhere.” You can’t start out with many meanings of “meaning”, otherwise you don’t know what you mean by “meanings” in the sentence “I am starting out with many meanings of meaning.” You have to start with one meaning, and in that case you can know what you mean when you say “I am starting with one meaning of meaning.”
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
The chair you are sitting on is a realisation; Van Gogh’s painting of his chair at Arles is a representation. You can’t sit on it.
But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts.
That’s very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn’t have phenomenal properties, how do you decide which set of brain states get labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined that everything must be treated as physics from the outset, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable things. Everything in computer science is MR—all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that “exist apart” from their realisations. So I don’t know where you are getting that from.
In particular, you have to believe this to even ask whether robots feel pain.
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about MR.
“purple is not bitter”
Colour and taste are different categories, therefore category error.
You are not treating the identity of pain with brain states as a falsifiable hypothesis.
No, I’m treating the identity of pain with the memories, thoughts and behaviors that express pain as unfalsifiable. In other words, I loosely define pain as “the thing that makes you say ouch”. That’s how definitions work—the theory that the thing I’m sitting on is a chair is also unfalsifiable. At that point the identity of pain with brain states is in principle falsifiable: you just induce the same state in two brains and observe only one saying ouch. Obviously, there are various difficulties with that exact scheme, it’s just a general sketch of how causality can be falsified.
There are uncontentious examples of multiply realisable things.
I don’t recall suggesting that something isn’t MR. I don’t know why you think that MR is a problem for me. Like I said, there are multiple realizations of pain the same way that there are multiple realizations of chair.
Is that supposed to be a novel theory, or a dictionary definition?
Definition, as I state right in the next sentence, and then confirm in the one after that. Is my text that unreadable?
You’re suggesting pain can’t be instantiated in robots.
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question. In the same way that wondering if some table can also be a chair is a stupid question. If you feel that you need an answer, feel free to choose arbitrarily. But then, if you think that having an answer helps you somehow, you’re probably doing something very wrong.
In the case of a simulated human brain, it might seem more natural to call those states “pain”, but then if you don’t, nobody will be able to prove you wrong.
Is that supposed to be a novel theory, or a dictionary definition?
Definition, as I state right in the next sentence
The question asked for a dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification using falsifiable empiricism. Uncontroversially, you also can achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard definitions—stipulated, gerrymandered, tendentious—is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, are not in fact talking about free will.
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way... you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
the redefinition manoeuvre
Definitions aren’t handed from god in stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
And how are you justifying that suggestion?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Using the standard definition of “pain”, it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
Definitions aren’t handed from god in stone tablets
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Who are you communicating to when you use your own definitions?
Do you agree that “can some tables be chairs” is, in any sense, a stupid question?
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain.
Says you. Why should I believe that?
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course).
Are you abandoning the position that “robot in pain” is meaningless in all cases?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”, that would be ridiculous; the problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition, we have the usual classification problem, and with your definition the phrase is meaningless.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
Who are you communicating to when you use your own definitions?
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definition to the best of my ability, and point out the problems in those. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
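For what it’s worth, under that purely behavioural definition the whole test fits in a few lines (a sketch; the damaged/saysOuch wiring is invented for illustration):

```haskell
-- Pain defined behaviourally as "the state that follows damage and
-- precedes the 'ouch'": the test is to damage the robot and listen.
data Robot = Robot { damaged :: Bool, saysOuch :: Bool }

feelsPain :: Robot -> Bool
feelsPain r = damaged r && saysOuch r

main :: IO ()
main = print (feelsPain (Robot { damaged = True, saysOuch = True }))  -- True
```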
Remember when you offered a stupid proof that “purple is bitter” is category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
But using them proves nothing?
Yes, definitions do not generally prove statements.
I am wondering who you communicate with when you use a private language.
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be convincing. If you change the definition of pain to remove the subjective, felt aspect, then the resulting problem is easy to solve...but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other members of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements, you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Meaninglessness is not the default.
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me how it makes sense to you.
Other members of your language community are willing to discuss things like robot pain. Does that bother you?
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
If definitions do not prove statements, you have no proof that robot pain is easy.
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases, we already have X well defined as Z and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
If you redefine pain, you are not making statements about pain in my language.
I am, at times, talking about alternative definitions
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
Meaninglessness is not the default.
Well, it should be
That can’t possibly work, as entirelyuseless has explained.
Sure, in a similar way that people discussing god or homeopathy bothers me.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
in your case the definition Z does not exist, so making up a new one is the next best thing.
The ordinary definition for pain clearly does exist, if that is what you mean.
Robot pain is of ethical concern because pain hurts.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it because you want to.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
The ordinary definition for pain clearly does exist, if that is what you mean.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Is that a fact or an opinion?
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here”—has two meanings: one is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too.
another is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
“highly unpleasant physical sensation caused by illness or injury.”
Of course, now I’ll say that I need “sensation” defined.
Have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Obviously, anything can be of ethical concern, if you really want it to be
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
“the concept of preference is simpler than the concept of consciousness”
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
“consciousness is generally not necessary to explain morality”, which is more of an opinion.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, now I’ll say that I need “sensation” defined.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
That’s because I have never considered “Is X a concept” to be an interesting question.
You used the word, surely you meant something by it.
At that point proper definitions become necessary.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
and also don’t want to talk about consciousness.
What?
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
I’ll need “defined” defined
In this case, “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
You used the word, surely you meant something by it.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as in proper Scotsman?
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
and also don’t want to talk about consciousness.
What?
You keep saying it is a broken concept.
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain?
That anything should feel like anything.
Proper as in proper Scotsman?
Proper as not circular.
Circular as in
“Everything is made of matter.
Matter is what everything is made of.”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
That anything should feel like anything.
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Circular as in
“Everything is made of matter. Matter is what everything is made of.”?
Yes, if I had actually said that. By the way, matter exists in your universe too.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If it refers to something else, then I’ll need you to paraphrase.
If you want to know what “pain” means, sit on a thumbtack.
You can say “torture is wrong”, but that has no implications about the physical world
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
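To sketch what I mean (a toy model in Python; rate_similarity is a hypothetical stand-in for asking a subject how similar two recorded states feel):

    def group_states(states, rate_similarity, threshold=0.8):
        # Union-find over recorded brain states: states whose reported
        # similarity reaches the threshold end up in the same group.
        parent = {s: s for s in states}

        def find(s):
            while parent[s] != s:
                s = parent[s]
            return s

        for i, a in enumerate(states):
            for b in states[i + 1:]:
                if rate_similarity(a, b) >= threshold:
                    parent[find(a)] = find(b)

        groups = {}
        for s in states:
            groups.setdefault(find(s), []).append(s)
        return list(groups.values())

The resulting groups are a first-pass mapping from brain states to “which experience is which”, built entirely out of the brain’s own similarity reports.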
Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
They only need to know about robot pain if “robot pain” is a phrase that describes something.
As I have previously pointed out, you cannot assume meaninglessness as a default.
morality, which has many of the same problems as consciousness, and is even less defensible.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
That is a start, but we can’t gather data from entities that cannot speak
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain exactly how the algorithm would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
Are there classes of conscious entity?
Morality or objective morality? They are different.
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain exactly how the algorithm would work.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
but I don’t necessarily understand what it would mean for a different kind of mind.
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
It seems you are no longer ruling out a science of other minds
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
I’ve already told you what it would mean
Where exactly?
Is the first half of the conversation meaningful and the second half meaningless?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though. “Robot pain” has to fail very hard so as not to pass it.
The idea that with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery, is in general a good one. Problem is that it requires good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic).
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?
If we know what causes or predicts something, then we can sometimes reproduce it in radically different ways. That is the basis of artificial intelligence: realising intelligence in a medium other than a brain.
But you are saying that pain cannot be realised by a robot, and your reasons for saying that are entirely non-empirical.
We might be able to refine the concept of consciousness as part of a research programme, but research programmes have to start with folk concepts.
I know that the experience of stubbing my toe is called pain, and I know that what I’m sitting on is called a chair. But I don’t know the “precise descriptions of the full gamut of atomic configurations which implement” them in either case. This is very normal.
You seem to be under the impression that I advocate certain methods of examining brains over others. I don’t know where you got that. I do believe that everything that could be learned about a brain could be learned by zapping one set of neurons and seeing if another set fires. But if it’s more practical for you to show the brain something purple, and ask it to rate how bitter that felt, from 1 to 5, I have no problems with it. This method, while less direct, can be more useful (especially depending on the exact questions you want to answer). The problem, as I understand, is that you believe these two methods to be radically different, when they are not. It’s as if you assume something is real, just because it comes out of people’s mouths.
I’m not assigning any different meanings to those words, at least not in this context. Are you? Interestingly, both words are pretty awkward to use here. And maybe I can agree that “realization” is a little better.
Parts of my text are referring to the arguments I saw in Wikipedia under “multiple realizability”. But the idea that Pain definitely is a thing that exists, rather than just a label for a set of things that your brain sometimes does, that are in some ways similar, is something I do find in your posts. In particular, you have to believe this to even ask whether robots feel pain.
I’m still waiting for your empirical reasons why “purple is not bitter”, or better yet, “purple is not a chair”, if you feel the concept of bitterness is too subjective.
But not much of an argument for using semantics grounded in (physical) reality. Doing so does not buy you maximum precision in absolute terms, and, what is worse, the alternative, of grounding terms for types of experience in 1st person experience, can give you more precision.
You may believe that, but do you know it?
The difference is that I accept the possibility that first person evidence could falsify 3rd person theory.
I’m not taking 1st person to mean 3rd person reports of (someone else’s) 1st person experience.
What sort of precision are you talking about? More generally, you’ve repeatedly said that the concept of consciousness is very useful. I don’t think I’ve seen that usefulness. I suspect that elaborating here is your best bet to convince me of anything. Although even if you did convince me of the usefulness of the term, that wouldn’t help the “robot pain” problem much.
That’s a slightly weird question. Is it somehow different from “why do you believe that”? I believe it thanks to some rudimentary understanding of how brains and sensory organs work, and a lack of arguments to the contrary. It’s very likely that “zapping” isn’t quite sufficient, depending on how generously you interpret that word. But the idea that something cannot be learned through physical experiment demands a lot of serious evidence, to say the least.
If I can parse you correctly, you seem to be saying that a thought or memory is more true, in some sense, while stored in the brain, than if written down on paper. Obviously, paper is slower and less accurate. But you seem to be implying a more fundamental difference between those two methods of data storage. Why is that?
I like type theory. Let X be what I’m sitting on. Type of X is “chair”, type of “chair” is “category”, a painting of X is a representation of X, it is not a representation of “chair”. Representations of “chair”, in the same sense that painting represents X might not exist. Somehow I’m quite comfortable saying that an object of type Y is what represents Y. “Instantiates” might be the best word (curiously though, google uses “represent” to define it). Of course, the choice is quite arbitrary here. I don’t see any confusion coming from it.
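In code the same distinction is mundane (a toy sketch, with made-up class names):

    class Chair:                   # the type/category
        pass

    class Painting:                # a representation of a particular thing
        def __init__(self, subject):
            self.subject = subject

    x = Chair()                    # X instantiates the type "chair"
    portrait = Painting(x)         # represents X; you cannot sit on it

    assert isinstance(x, Chair)
    assert not isinstance(portrait, Chair)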
I have said that actual experience is useful to pin down the meanings of words referring to experience.
Not at all. That there is a difference between belief and knowledge is very standard.
There’s an extensive literature of arguments to the contrary.
It is the idea that you can learn about the inward or 1st person by purely outward or 3rd person means that is contentious.
No, I am saying that my first person is me, and your first person is you. So my first person information is my experience, not someone else’s report of their experience.
Well, you said that the two R words mean the same thing, when by established usage they don’t. That looks like a source of confusion to me.
I assure you that none of the beliefs I state here were generated by flipping a coin. They are all to some extent justified. That’s why the question is weird—did you expect me to answer “no”?
There is extensive literature of arguments in favor of god or homeopathy. Doesn’t make those things real. Obviously, I was referring to a lack of arguments I’d find compelling. It should also be obvious that I have read most of those arguments (likewise for god and homeopathy). However you seem to have read quite a bit more, and you haven’t raised my confidence in the value of that literature so far.
What about your own report of your experience? Is that qualitatively different from your memory of the experience?
Here’s a baseless hypothesis—for you to reason about your experiences, the experiences first have to be stored in your (short term) memory. This assumes that memory is something relatively simple and objective, compared to the magic of experiences. I’m not saying that this hypothesis is true, I’m only saying that you don’t know it to be false. And if it did happen to be true, then your internal reasoning about your experiences would not be much different from my reasoning about your experiences written on paper (aside from the low precision of our language). Curiously, a physical experiment is more likely to test this hypothesis, than your internal reasoning.
It is a potential source of confusion, but that doesn’t mean it’s causing any right now. Maybe if we talked about representations such as paintings, it would cause some. Regardless, I’ll try to use the words you prefer. Debating their differences and similarities is very orthogonal to our main topic.
You said there was a “lack” of arguments to the contrary, and I pointed out that there wasn’t.
Then why didn’t you say lack of good arguments? And why didn’t you say what is wrong with them?
You seem to have trouble grasping that experience is experience, as well as that the first person is the first person.
“Magic” is not a helpful phrase.
You need to distinguish ontology and epistemology. Experiences and memories and so on have their physical correlates—ontology—but that does not mean you can comprehend them—epistemology. We might be able to find ways of translating between correlates and experience, but only if we don’t ignore experience as an epistemology. But, again, taking 1st person experience seriously as a means to understand consciousness does not presuppose a dualistic ontology, because epistemology and ontology are different.
Experience is experience, not reasoning about experience.
Because I had faith you’d figure it out (and I like brevity). I don’t know how you failed that. Do you think I don’t know that “conscience” is a topic philosophy already deals with? Please explain your thought process, I really want to know. You see, productive debate requires some amount of generosity. I may not be polite, but I don’t think you’re illiterate or insane, and I don’t think I nitpick about things this obvious.
Maybe this is a symptom that you’re tired of the whole thread? You know you can stop whenever you want, right?
That doesn’t answer the question. If we talk about conscience and experiences, that is presumably because we want to understand something about them through reason. But if we’re reasoning about experiences, we’re probably dealing with memories of experiences, rather than the experiences themselves. That’s where my “baseless hypothesis” comes from.
To clarify, the hypothesis isn’t a direct response to something you said, it’s a new angle I want to look at, to help me understand what you’re talking about.
Oh, I just use “magic” to refer to immeasurable, irreproducible things, that can’t be understood through experimentation. I realize that this might not be obvious. Though I feel that this is a natural use of the word.
That’s fine. There are some things that I’d want to pick on, although I’m not sure which of them are significant. But before that, can you elaborate on what, if anything, that has to do with our “robot pain” problem?
Did you mean “consciousness”? To put it bluntly, if you haven’t heard of MR, there is probably a lot you don’t know about the subject.
We are reasoning about consciousness in general, and experience in general. That’s not at all the same as comparing the taste of today’s coffee to yesterday’s.
You yourself are blocking off the possibility of understanding consciousness, subjectivity and experience by refusing to allow them as prima-facie, pre-theoretic phenomena.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist.
I get that you hate, hate, hate dualism or anything else that threatens physical monism, but you can’t prove physical monism by begging the question against it. You are doing it no favours.
Nobody else has a problem with robot pain as a meaningful possibility. You do because you have removed the first person from your definitions.
Heh. That’s fair.
If having experiences is an important part of consciousness, then I’d expect you to reason about them, what induces them, their components, their similarities and differences. This “consciousness in general” phrasing is extremely weird.
I didn’t start here, I wasn’t born yesterday and fully formed, I started from the same magical assumptions that everyone starts with, and eventually I found them unnecessary and unattractive. Unless you meant something else by “start”.
Of course, I mean your methodology starts.
I’m not sure that changes anything.
Can you argue your point? I can argue mine.
You say that we must start with reality, but we cannot: (an accurate map of) reality is the end point of a process of explanation. We start with prima-facie evidence, we build theories, we test them, and eventually we end up with a map of reality. What you call “reality” is a subset of empirical evidence that has certain qualities: of being public, objective, measurable and so on. Starting there means discarding any other kind of prima-facie evidence. The problem being that discarding subjective, private experience at the outset is equivalent to stating that consciousness does not exist
The question “where did you start” has some bad assumptions. Of course at first we all have to start from the same naive point. If we did arbitrarily start from different unrelated assumptions, expecting to agree on anything would be weird.
So, what happened is that I started from naive assumptions, and arrived at physicalism. Then when I ask myself a new question, I start from where I last stopped—discarding all of my progress would be weird.
You may think that dropping an initial assumption is inherently wrong, but it’s quite normal, not only in physical sciences, but also in math. Note that I’m not contradicting consciousness, I just find it meaningless or, if you prefer, unnecessary. You might be able to convince me that I do need to keep some similar assumption for technical reasons, but that wouldn’t solve the “robot pain” problem.
You see, if I started from the assumption that gravity does not exist, my life would be very hard and I’d eventually have to introduce some concept that’s like gravity but by another name. But when I drop consciousness, my life gets easier. How does that work?
There is a difference between a working hypothesis and an unfalsifiable dogma. It seems to you that there is nothing to explain about consciousness because you only accept 3rd-person empirical data, because of your ontology.
Could you explain what assumption you are dropping, and why, without using the word magical?
I’d prefer if you settled on one claim.
That would be the problem for which there is no evidence except your say-so.
You can function practically without a concept of gravity, as people before Newton did. What you can get away with theoretically depends on what you are trying to explain. Perhaps there is a gravity sceptic out there somewhere insisting that “falling object” is a meaningless term, and that gravity is magic.
Is my position less falsifiable than yours? No, most statements about consciousness are unfalsifiable. I think that’s a strong hint that it’s a flawed concept.
The assumption that “consciousness” is a meaningful (but supposedly poorly understood) concept that explains something happening either in the outside world or in my own head. I dropped it because I found that physicalism explains everything better. “Better” doesn’t mean that I have all the answers about anything, it just means that the answers consciousness gives are even worse.
I don’t understand what your problem with “magical” is.
Well, I suppose an assumption could be unnecessary without being meaningless, so the words aren’t identical, but I do refer to the same thing, when I use them in this context. I also recall explaining how a “meaningless” statement can be considered “false”. The question is, why are you so uncomfortable with paraphrasing? Do you feel that there are some substantial differences? Honestly, I mostly do this to clarify what I mean, not to obscure it.
The “robot pain” problem is the problem where you think that maybe robots could feel pain, but you have not even a shred of an idea how to test if they do. That’s a pretty big problem, regardless of what I say. Now, when I ask if this or that idea solves “robot pain” problem, I’m not asking if it produces an actual test, I just ask for a smallest hint that maybe the test could exist.
That’s ridiculous. The mathematical law of gravity was written down by Newton, but the concept of gravity, in the sense that “things fall down”, is something most animals have. Do you literally think that nobody noticed gravity before Newton?
That’s not the problem.
The assumption is more that consciousness is something that needs explaining.
That’s wrong. If you can put a truth-value on a sentence , it is meaningful.
I think it is better to express yourself using words that mean what you are trying to express.
Yes. “Meaningless” , “immeasurable”, “unnecessary” and “non existent” all mean different things.
I think it is likely that your entire argument is based on vagueness and semantic confusion.
There is a real problem of not being able to test for a pain sensation directly.
Why did it take you so long to express it that way? Perhaps the problem is this:
Expressed in plain terms “robots do not feel pain” does not follow from “we do not know how to measure robot pain”. Perhaps you have to use vagueness and confusion to make the invalid inference seem valid.
Wow, so you agree with me here? Is it not a problem to you at all, or just not “the” problem?
Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement “invisible unicorns are purple” is meaningless. The words aren’t all exactly the same, but that doesn’t mean they aren’t all appropriate.
A long long time ago you wrote: You seem to have taken the (real enough) issue of not knowing how to tell if a robot feels pain, and turned it into a problem with the word “pain”. So I assumed you understood that immeasurability is relevant here. Did you then forget?
No, but it follows from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable, unless we redefine pain to mean something a lot more specific”.
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.
No, still not from that.
You can make any sentence come out true or false by juggling definitions...which is why people distrust argument by definition.
Which issues exactly?
Why not? Is this still about how you’re uncomfortable saying that invisible unicorns don’t exist? Does “‘robot pain’ is meaningless” follow from the same better?
Meaningfulness, existence, etc.
Huh? It’s perfectly good as a standalone statement, it’s just that it doesn’t have much to do with meaning or measurability.
Not really, because you haven’t explained why meaning should depend on measurability.
It is evident that this is a major source of our disagreement. Can you define “meaningless” for me, as you understand it? In particular, how it applies to grammatically correct statements.
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I’m talking about are not just undetectable by light, they’re also undetectable by all other methods.
1. Useless for communication.
2. Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)
Where is this going? You can’t stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
A bit too vague. Can I clarify that as “Useless for communication, because it transfers no information”? Even though that’s a bit too strict.
What is stopping me from assigning them truth values? I’m sure you meant, “meaningless statements cannot be proven or disproven”. But “proof” is a problematic concept. You may prefer “for meaningless statements there are no arguments in favor or against them”, but for statements “X exists”, Occam’s razor is often a good counter-argument. Anyway, isn’t (1.) enough?
It’s still entirely about meaning, measurability and existence. I want you to decide whether “there is an invisible/undetectable unicorn in your room” is meaningless or false.
This started when you said that “robots don’t feel pain” does not follow from “we have no arguments suggesting that maybe ‘robot pain’ could be something measurable”. I’m trying to understand why not and what it could follow from. Does “invisible unicorns do not exist” not follow from “invisible unicorns cannot be detected in any way?”. Or maybe “invisible unicorns cannot be detected” does not follow from “we have no arguments suggesting that maybe ‘invisible unicorns’ could be something detectable”?
The fact that you can’t understand them.
If you can understand a statement as asserting the existence of something, it isn’t meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions... don’t.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
Because it needs premises along the lines of “what is not measurable is meaningless” and “what is meaningless is false”, but you have not been able to argue for either (except by gerrymandered definitions).
There’s an important difference between stipulating something to be undetectable … in any way, forever … and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow? Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is “true” in some way that has nothing to do with reality.
I’m trying to understand your definitions and how they’re different from mine.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
I can give you a robot pain detector today. It only works on robots though. The detector always says “no”. The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.
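Spelled out, so that you can see there is nothing up my sleeve:

    def robot_pain_detector(robot):
        # Works on any robot; always says "no".
        return "no"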
Well, you used it.
It’s bad because there’s nothing inside the box. It’s just an a priori argument.
I can also use “ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
The implicit argument is that meaning/communication is not restricted to literal truth.
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam’s razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue for your “robot pain”, but I think it’s alright.
So, what you’re saying, is that you don’t know if “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like “colourless green”, or a category error, like “sleeping idea”.
Very low finite rather than infinitesimal or zero.
I don’t see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don’t feel pain. I don’t see how that can be valid.
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a restriction. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I’d have to make up an argument why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws—it is devoid of any traces of meaningfulness.
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.
I doubt that’s a good thing. It hasn’t been very productive so far.
“Seriously, if you have no arguments, then don’t respond.”
People who live in glass houses shouldn’t throw stones.
It means “does not have a meaning.”
In general, it doesn’t apply to grammatically correct sentences, and definitely not to statements. It’s possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted.
If you can ask the question, “How do you know?”, then the thing has a meaning. I will show you an example of something meaningless:
faheuh fr dhwuidfh d dhwudhdww
Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don’t know it, then the thing has a meaning.
I’m sure you can see how unhelpful this is.
No.
It only explains the “-less” suffix. It’s fine as a dictionary definition, but that’s obviously not what I asked for. I need you to explain “meaning” as well.
You need no such thing, and as I said, we won’t be continuing the discussion of language until you show it has something to do with consciousness.
Noam Chomsky wrote “Colorless green ideas sleep furiously” in 1955.
Ideas don’t sleep, so they don’t sleep furiously. The sentence is false, not meaningless.
This topic has been discussed, fairly extensively.
Yes. No one has shown that it is meaningless and it pretty obviously is not.
That’s a definitions argument, isn’t it? Under some ideas of what “meaning”, well, means, such sentences are meaningful; under others they are not.
The problem with that is that if the word “meaning” has several meanings you will have a situation like this:
Suppose the word “meaning” has two meanings, A & B. But then we can ask what the word “meanings” means in the previous sentence: does it mean A, or B? If you answer that it means A, then the word “meaning” might have two meanings in the A sense, but five meanings in the B sense. But then we can ask what the word “meanings” means in the previous statement. And it might turn out that if the word “meanings” is taken in the B sense, the statement (about 2 and 5) is only true if we take the fourth meaning of the B sense, while in the 3rd sense, it has 7 meanings in the A sense, and 2 meanings in the B sense. And so on, ad infinitum.
All of that means that we have to accept a basic sense of meaning which comes before all the others if we want to talk about meaning at all. And in that basic sense, statements like that obviously have a meaning, whereas ones like “shirwho h wehjoeihqw dhfufh sjs” do not.
Your comment boils down to “It’s complicated, but I’m obviously right”. It’s not a very convincing argument.
Meaning is complicated. It is a function of at least four variables: the speaker, the listener, the message, and the context. It’s also well-trodden ground over which herds of philosophers regularly stampede and everything with the tag of “obviously” has been smashed into tiny little pieces by now.
You’re right about the “I’m obviously right” part, but not the rest. It boils down to “you have to start somewhere.” You can’t start out with many meanings of “meaning”, otherwise you don’t know what you mean by “meanings” in the sentence “I am starting out with many meanings of meaning.” You have to start with one meaning, and in that case you can know what you mean when you say “I am starting with one meaning of meaning.”
“eventually I found them unnecessary and unattractive”
It is typically considered unnecessary and unattractive to assert that the Emperor is naked.
There’s that word again.
Do you prefer “naive”? Not exactly the same thing, but similar.
The chair you are sitting on is a realisation; Van Gogh’s painting of his chair at Arles is a representation. You can’t sit on it.
That’s very vaguely phrased. There are questions of whether pain has phenomenal qualities, whether it is totally reducible to physical behaviour, and whether it is multiply realisable. If pain doesn’t have phenomenal properties, how do you decide which set of brain states gets labelled as pain states?
But the concern is that you have no way of coming to know the answers to those questions. You have predetermined that everything must be treated as physics from the outset, so you will inevitably get out the answer you put in. You are not treating the identity of pain with brain states as a falsifiable hypothesis.
There are uncontentious examples of multiply realisable things. Everything in computer science is MR—all algorithms, data structures, whatever. For the purposes of AI research, intelligence is assumed to be MR. There is no implication that MR things are things that “exist apart” from their realisations. So I don’t know where you are getting that from.
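For example (a textbook illustration, nothing novel): one abstract stack, two physically different realisations, identical behaviour:

    class ListStack:
        def __init__(self):
            self._items = []
        def push(self, x):
            self._items.append(x)
        def pop(self):
            return self._items.pop()

    class LinkedStack:
        def __init__(self):
            self._head = None
        def push(self, x):
            self._head = (x, self._head)
        def pop(self):
            x, self._head = self._head
            return x

    for stack in (ListStack(), LinkedStack()):
        stack.push(1)
        stack.push(2)
        assert stack.pop() == 2    # same stack behaviour, different realisation

Being a stack is a matter of behaviour, not of what the thing is made from; neither realisation is more “really” a stack than the other.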
I would have to believe pain is MR to believe that; but the objection cannot be that nothing is MR. You are apparently being inconsistent about MR.
Colour and taste are different categories, therefore category error.
No, I’m treating the identity of pain with the memories, thoughts and behaviors that express pain as unfalsifiable. In other words, I loosely define pain as “the thing that makes you say ouch”. That’s how definitions work—the theory that the thing I’m sitting on is a chair is also unfalsifiable. At that point the identity of pain with brain states is in principle falsifiable: you just induce the same state in two brains and observe only one saying ouch. Obviously, there are various difficulties with that exact scheme; it’s just a general sketch of how causality can be falsified.
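As a toy harness (every name below is hypothetical; induce and says_ouch stand in for the experimental procedures I can’t spell out):

    def identity_falsified(brain_a, brain_b, state, induce, says_ouch):
        # Induce the same state in two brains; if only one of them says
        # "ouch", the identity of that state with pain is falsified.
        induce(brain_a, state)
        induce(brain_b, state)
        return says_ouch(brain_a) != says_ouch(brain_b)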
I don’t recall suggesting that something isn’t MR. I don’t know why you think that MR is a problem for me. Like I said, there are multiple realizations of pain the same way that there are multiple realizations of chair.
Is that supposed to be a novel theory, or a dictionary definition?
You’re suggesting pain can’t be instantiated in robots.
Definition, as I state right in the next sentence, and then confirm in the one after that. Is my text that unreadable?
No, I’m suggesting that whether we use “pain” to describe the robot’s states associated with the robot’s behaviors similar to human expressions of pain is a stupid question. In the same way that wondering if some table can also be a chair is a stupid question. If you feel that you need an answer, feel free to choose arbitrarily. But then, if you think that having an answer helps you somehow, you’re probably doing something very wrong.
In the case of a simulated human brain, it might seem more natural to call those states “pain”, but then if you don’t, nobody will be able to prove you wrong.
The question asked for the dictionary definition.
Uncontroversially, you can prove something, or at least obtain a high standard of justification using falsifiable empiricism. Uncontroversially, you also can achieve a good level of justification using armchair reasoning based on valid deductions from standard definitions.
The use of nonstandard definitions—stipulated, gerrymandered, tendentious—is much dicier. You yourself made the comparison to compatibilism. In shades-of-gray terms, the redefinition manoeuvre isn’t completely beyond the pale, but it is nowhere near the gold standard of epistemology either—compatibilism, the “wretched subterfuge”, remains somewhat contentious. The objection is that compatibilists have changed the subject, and are not in fact talking about free will.
And how are you justifying that suggestion? By appeal to personal intuition, which is also low grade epistemology.
It’s actually possible to answer that kind of question in a reasonably rigorous and formal way...you can show that a certain concept leads to contradiction. But then such arguments are only convincing if they start from definitions that bear some relation to what a word usually means.
Using the standard definition of “pain” , it is easy to see what the sentence “the robot is in pain” means. It means “the robot is experiencing a sensation similar to the sensation I feel when I stub my toe”.
Presumably, the fact that “robot in pain” seems weird to you is something to do with your weird definition of pain. But insisting on speaking a language that no one else speaks is not proving anything.
For everyone else, pain is a feeling, a sensation, a phenomenal mode, a quale. You have left all that out of your definition, which is like defining a chair as something you cannot possibly sit on.
Oh. No then. I think this whole debate is about what the dictionary definition should be.
Definitions aren’t handed from god in stone tablets. I feel comfortable offering my own definitions, especially in a case such as “pain”, where definition through behaviors matches common usage quite well.
Oddly, I don’t feel like I’m doing the same thing compatibilists do. At least in my own head I explicitly have multiple versions of definitions (i.e. “if we define pain as <...> then <...>”). But I do worry if that’s always reflected in my text.
Do you agree that “can some tables be chairs” is, in any sense, a stupid question? I feel like I’ve asked you, though I’m not sure. This is an important point though. If we can’t agree even on that much, then we have some serious problems.
Yes, but the “robot is experiencing” part is exactly as problematic as the whole “robot pain” you’re trying to explain. The word “similar”, of course, causes its own problems (how similar does it need to be?) but that’s nothing in comparison.
No, my definition of pain (“the thing that makes you say ouch” one) is very simple and makes the “robot pain” problem very easy (the actual answer depends on the robot, of course). It’s your definition that’s weird.
Dictionary definitions generally reflect popular usage. They are sometimes revised in terms of scientific discoveries—water is no longer defined as a basic element—but that requires more epistemic weight than someone’s intuitive hunch.
They aren’t, but that is not sufficient to show that you can prove things by redefining words.
Who are you communicating to when you use your own definitions?
It’s not relevant to anything. I think there can be meaningless statements, and I continue to think you have no evidence that “robot pain” is one of them.
Says you. Why should I believe that?
Are you abandoning the position that “robot in pain” is meaningless in all cases?
I never said “all cases”; that would be ridiculous. The problems with “robot pain” depend on how the words are defined. With a strict physical definition the problem is easy, with a weaker physical definition we have the usual classification problem, and with your definition the phrase is meaningless.
I don’t think I’ve ever tried to prove anything by redefining any words. There is some sort of miscommunication going on here. What I may do is try to convince you that my definitions are better, while matching common usage.
You’re asking this as though I maliciously misinterpreted what you mean by consciousness. Is that how you see this? What I tried to do is understand your definition to the best of my ability, and point out the problems in those. When talking about other definitions, I explicitly said things like “In this view pain is …” or “If you defined consciousness as …”. Was it actually unclear which definition I was talking about where, for all this time?
Solve it, then.
Prove that.
But using them proves nothing?
I am wondering who you communicate with when you use a private language.
Well, if you define pain exactly as “the state that follows damage and precedes the ‘ouch’” then you would damage the robot, observe it say ouch, and then proclaim that it experiences pain. It’s that simple. The fact that you asked suggests that there’s something you’re seriously misunderstanding. But I can’t explain it if I don’t know what it is.
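In code, with a hypothetical robot interface, the entire test is:

    def feels_pain(robot):
        # Pain defined as "the state that follows damage and precedes
        # the 'ouch'": damage the robot and listen for "ouch".
        robot.damage()                         # hypothetical interface
        return robot.last_output() == "ouch"   # hypothetical interface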
I feel like we’ve talked about this. In fact, here: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvhm
Remember when you offered a stupid proof that “purple is bitter” is a category error, and then never replied to my response to it? Gosh, that was a while ago, and apparently we didn’t move an inch.
To summarize, I believe that the phrase is meaningless, because instead of showing to me how meaningful it is, you repeatedly ask me stupid questions. At least, that’s one additional data point.
Yes, definitions do not generally prove statements.
Considering that I provide you with the alternate definitions and explicitly state which definition I’m using where, I’m communicating with you.
Your solution is unconvincing because it can be fulfilled by code that is too simple to be to be convincing. If you change the definition of pain to remove the the subjective, felt aspect, then the resulting problem is easy to solve...but it’s not the original problem. It’s not that I can’t understand you, it’s that it’s hard to believe anyone could pull such a fraudulent manoeuvre.
Meaninglessness is not the default. Other member’s of your language community are willing to discuss things like robot pain. Does that bother you?
If definitions do not prove statements , you have no proof that robot pain is easy.
If you redefine pain, you are not making statements about pain in my language. Your schmain might be a trivially easy thing to understand, but it’s not what I asked about.
What the hell? I’m not just annoyed because of how accusatory this sounds, I’m annoyed because it apparently took you a week of talking about alternative definitions to realize that I am, at times, talking about alternative definitions. Are you not paying attention at all?
Well, it should be. I will consider all statements meaningless unless I can argue otherwise (or I don’t really care about the topic). Obviously, you can do whatever you want, but I need you to explain to me how it makes sense to you.
Sure, in a similar way that people discussing god or homeopathy bothers me. It’s not exactly bad to discuss anything, but not all questions are worth the time spent on them either.
I did say “generally”. Definitions do prove statements about those definitions. That is, “define X as Y” proves that “X is Y”. Of course, there are meaningful statements presented in the form “X is Y”, but in those cases we already have X well defined as Z, and the statement is really a shorthand for “Z is Y”. I guess I’m trying to convince you that in your case the definition Z does not exist, so making up a new one is the next best thing.
Yes, that’s because your language is broken.
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
That can’t possibly work, as entirelyuseless has explained.
God and homeopathy are meaningful, which is why people are able to mount arguments against them.
The ordinary definition for pain clearly does exist, if that is what you mean.
Prove it.
No, pain is of ethical concern because you don’t like it. You don’t have to involve consciousness here. You involve it, because you want to.
Homeopathy is meaningful. God is meaningful only some of the time. But I didn’t mean to imply that they are analogues. They’re just other bad ideas that get way too much attention.
What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
Is that a fact or an opinion?
“highly unpleasant physical sensation caused by illness or injury.”
Have you got an exact definition of “concept”?
Requiring extreme precision in all things tends to bite you.
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also, the opinion/fact separation is somewhat silly. Having said that:
“pain is of ethical concern because you don’t like it” is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
“You don’t have to involve consciousness here” has two meanings: one is “the concept of preference is simpler than the concept of consciousness”, which I would like to call a fact, although there are some problems with preference too; the other is “consciousness is generally not necessary to explain morality”, which is more of an opinion.
Of course, now I’ll say that I need “sensation” defined.
I’d say it’s one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions for how to tell them apart, but I don’t want to bother. That’s because I have never considered “Is X a concept” to be an interesting question. And, frankly, I use the word “concept” arbitrarily.
It’s you who thinks that “Can X feel pain” is an interesting question. At that point proper definitions become necessary. I don’t think I’m being extreme at all.
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality and also don’t want to talk about consciousness.
Of course, I’ll need “defined” defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology. It’s a form of question-begging.
You used the word; surely you meant something by it.
Proper as in proper Scotsman?
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
What?
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
In this case, a “definition” of a category is text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how silly this is.
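In code terms, a “definition” in this sense is just a membership test. A toy sketch (the predicate and the attributes it checks are invented for illustration, not a serious definition of anything):

```python
# Toy sketch: a "definition" of a category as a membership test.
# The attributes checked here are invented for illustration.

def is_chair(obj: dict) -> bool:
    """Text (here, code) that tells which objects belong to the
    category "chair" and which don't."""
    return obj.get("has_seat", False) and obj.get("seats_one_person", False)

assert is_chair({"has_seat": True, "seats_one_person": True})
assert not is_chair({"has_seat": False, "seats_one_person": False})
```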
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as in not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
You keep saying it’s a broken concept.
That anything should feel like anything.
Circular as in “Everything is made of matter. Matter is what everything is made of.”?
Yes. I consider that “talking about consciousness”. What else is there to say about it?
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Yes, if I had actually said that. By the way, matter exists in your universe too.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If you want to know what “pain” means, sit on a thumbtack.
That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
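As a sketch of what that could look like (the data and all the names here are hypothetical; this only shows the shape of the procedure, not a real method):

```python
# Sketch: derive a similarity structure over brain states from the
# subjects' own reports. All names and data are hypothetical.
from itertools import combinations

# Recorded brain states, keyed by the stimulus that produced them.
brain_states = {"purple": ..., "bitter": ..., "thumbtack": ...}

# Subjects' answers to "does A feel similar to B?" (1 = yes, 0 = no).
reported_similarity = {
    ("bitter", "purple"): 0,
    ("bitter", "thumbtack"): 0,
    ("purple", "thumbtack"): 0,
}

def similarity_model(states, reports):
    """Pair each recorded pair of brain states with the subject's own
    similarity judgement: the raw material for a model mapping
    neurological features to reported experience."""
    return {(a, b): reports[(a, b)]
            for a, b in combinations(sorted(states), 2)}

model = similarity_model(brain_states, reported_similarity)
```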
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
That is a start, but we can’t gather data from entities that cannot speak, and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
As I have previously pointed out, you cannot assume meaninglessness as a default.
Morality or objective morality? They are different.
Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge, although I’m confident much can be said, even if I can’t explain exactly how that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
Are there classes of conscious entity?
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don’t feel pain?
I’ve already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.
Consider a scenario where two people are discussing something of dubious detectability.
Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc.
Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist’s lab?
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
Where exactly?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though; “robot pain” has to fail very hard not to pass it.
The idea that, with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery is, in general, a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
Then you should consider all statements meaningless, without exception, since all of your arguments are made out of statements, and there cannot be an infinite regress of arguments.
That’s cute.
Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.
In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.
I don’t really know why you derive from this that all statements are meaningless. Maybe we disagree about what “meaningless” means? Wikipedia nicely explains that “A meaningless statement posits nothing of substance with which one could agree or disagree”. It’s easy for me to see that “undetectable purple unicorns exist” is a meaningless statement, and yet I have no problems with “it’s raining outside”.
How do you argue why “undetectable purple unicorns exist” is a meaningless statement? Maybe you think that it isn’t, and that we should debate whether they really exist?