There is nothing wrong with wanting to be something you are not. But you should also want to have accurate beliefs about yourself. And being the sort of person who prefers beer over charity doesn’t make you a bad person. And I have no idea how you can change your true preferences, even if you want to.
I think the problem isn’t that your actions are inconsistent with your beliefs, it’s that you have some false beliefs about yourself. You may believe that “death is bad”, “charity is good”, and even “I want to be a person who would give to charity instead of buying a beer”. But it does not follow that you believe “giving to charity is more important to me than buying a beer”.
This explanation is more desirable, because if actions don’t follow from beliefs, then you have to explain what they follow from instead.
It seems you are no longer ruling out a science of other minds
No, by “mind” I just mean any sort of information processing machine. I would have said “brain”, but you used a more general “entity”, so I went with “mind”. The question of what is and isn’t a mind is not very interesting to me.
I’ve already told you what it would mean
Where exactly?
Is the first half of the conversation meaningful and the second half meaningless?
First of all, the meaningfulness of words depends on the observer. “Robot pain” is perfectly meaningful to people with precise definitions of “pain”. So, in the worst case, the “thing” remains meaningless to the people discussing it, while remaining meaningful to the scientist (because you can’t make a detector if you don’t already know what exactly you’re trying to detect). We could then simply say that the people and the scientist are using the same word for different things.
It’s also possible that the “thing” was meaningful to everyone to begin with. I don’t know what “dubious detectability” is. My bar for meaningfulness isn’t as high as you may think, though; “robot pain” has to fail very hard not to clear it.
The idea that, in models of physics, it can be hard to tell which features are detectable and which are just mathematical machinery is in general a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don’t expect this sort of poking to cause problems that I couldn’t patch, even in the worst case.
category error, like “sleeping idea”
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that “bitter purple” (or something) was a category error, and your answer was very underwhelming.
I say that “sleeping idea” is meaningless, because I don’t have a procedure for deciding whether an idea is sleeping or not. However, we could easily agree on such procedures. For example, we could say that only animals can sleep, so that for every idea, “is this idea sleeping” is answered with “no”. It’s just that I honestly don’t have such a procedure in mind. I use the exact same explanation for the meaninglessness of both “fgdghffgfc” and “robot pain”.
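To make that concrete, here is the agreed-on procedure from the paragraph above written out as code. This is only a toy sketch of a convention we might adopt; the “only animals can sleep” rule is the hypothetical agreement, not a claim about anything standard:

```python
def is_sleeping(thing, kind):
    """The agreed-on procedure: only animals can sleep."""
    if kind != "animal":
        return False  # for every idea (or rock, or number), the answer is "no"
    raise NotImplementedError("for animals, defer to some behavioural test")

print(is_sleeping("the idea of lunch", kind="idea"))  # always False
```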
a contradiction, like “colourless green”
The question “is green colorless” has a perfectly good answer (“no, green is green”), unless you don’t think that colors can have colors (in that case it’s a category error too). But I’m nitpicking.
starts with your not knowing something, how to detect robot pain
Here you treat detectability as just some random property of a thing. I’m saying that if you don’t know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can’t possibly say that it exists.
My “unicorn ghost” example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you had detected a ghost unicorn, and then I’d have to make up an argument for why this isn’t the unicorn I was talking about. “Robot pain” has no such flaws: it is devoid of any trace of meaningfulness.
That is a start, but we can’t gather data from entities that cannot speak
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I’m confident much can be said, even if I can’t explain the algorithm how exactly that would work.
On the other hand, if the mind is so primitive that it cannot form the thought “X feels like Y”, then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note that my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don’t necessarily understand what it would mean for a different kind of mind.
and we don’t know how to arrive at general rules that apply across different classes of conscious entity.
Are there classes of conscious entity?
Morality or objective morality? They are different.
You cut off the word “objective” from my sentence yourself. Yes, I mean “objective morality”. If “morality” means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you’re not talking about “objective morality”, you can no longer be confident that those rules make any sense. You can’t say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.
We can’t compare experiences qua experiences using a physicalist model, because we don’t have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
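As a sketch of what that derivation could look like (a toy illustration only: the “brain states” below are made-up feature vectors, and the similarity reports are hypothetical stand-ins for what subjects might say):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recorded brain states (rows), one per experience.
brain_states = rng.normal(size=(6, 10))

# Hypothetical self-reports: pairs of experiences the subject calls "similar".
reported_similar = {(0, 1), (2, 3)}

def predicted_similar(states, threshold=4.0):
    """Predict similar pairs as those whose brain states are close in feature space."""
    pairs = set()
    n = len(states)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(states[i] - states[j]) < threshold:
                pairs.add((i, j))
    return pairs

# The "model" is validated by how well proximity of brain states
# reproduces the subject's own similarity reports.
agreement = len(predicted_similar(brain_states) & reported_similar)
print(f"pairs matching self-reports: {agreement}/{len(reported_similar)}")
```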
Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.
They only need to know about robot pain if “robot pain” is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn’t make it a real thing or an interesting philosophical question.
It’s interesting that you didn’t reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Now you imply that they possibly could be detected, in which case I withdraw my original claim
Yes, the unicorns don’t have to be undetectable by definition. They’re just undetectable by all methods that I’m aware of. If “invisible unicorns” have too much undetectability in the title, we can call them “ghost unicorns”. But, of course, if you do detect some unicorns, I’ll say that they aren’t the unicorns I’m talking about and that you’re just redefining this profound problem to suit you. Obviously this isn’t a perfect analogue of your “robot pain”, but I think it’s alright.
So what you’re saying is that you don’t know whether “ghost unicorns” exist? Why would Occam’s razor not apply here? How would you evaluate the likelihood that they exist?
I doubt that’s a good thing. It hasn’t been very productive so far.
Well, you used it.
I can also use “ftoy ljhbxd drgfjh”. Is that not meaningless either? Seriously, if you have no arguments, then don’t respond.
What happens if a robot pain detector is invented tomorrow?
Let me answer that differently. You said invisible unicorns don’t exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You’re supposed to be able to check that somehow.
You keep saying it’s a broken concept.
Yes. I consider that “talking about consciousness”. What else is there to say about it?
That anything should feel like anything,
If “like” refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I’ll need you to paraphrase.
Circular as in
“Everything is made of matter. Matter is what everything is made of.”?
Yes, if I had actually said that. By the way, matter exists in your universe too.
Yes: it’s relevant because “torturing robots is wrong” is a test case of whether your definitions are solving the problem or changing the subject.
Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say “torture is wrong”, but that has no implications about the physical world. What happens if I torture someone?
Sure, and if X really is the best approximation of Y that Bob can understand, then again Alice is not dishonest. Although I’m not sure what “approximation” means exactly.
But there is also a case where Alice tells Bob that “X is true”, not because X is somehow close to Y, but because, supposedly, X and Y both imply some Z. This is again a very different case. I think this is just pure and simple lying. That is, the vast majority of lies ever told fall into this category (for example, Z could be “you shouldn’t jail me”, X could be “I didn’t kill anyone” and Y could be “sure, I killed someone, but I promise I won’t do it again”).
In general, the problem is that you didn’t give specific examples, so I don’t really know what case you’re referring to.
Case 1: Alice tells Bob that “X is true”, Bob then interprets this as “Y is true”
Case 2: Alice tells Bob that “X is true”, because Bob would be too stupid to understand it if she said “Y is true”. Now Bob believes that “X is true”.
These two cases are very different. You spend the first half of your post in case 1, and then suddenly jump to case 2 for the other half.
<...> then perhaps telling a lie in a way that you know will communicate a true concept is not a lie.
This is fair.
There are certain truths which literally cannot be spoken to some people.
But this is a completely different case. Lies told to stupid people are still lies, the stupid people don’t understand the truth behind them, and you have communicated nothing. You could argue that those lies are somehow justified, but there is no parallel between lying to stupid people and things like “You’re the best”.
Well, I can imagine a post on SSC with 5 statements about the next week, where other users would reply with probabilities of each becoming true, and arguments for that. Then, after the week, you could count the scores and name the winners in the OP. It would probably get a positive reaction. Why not give it a try?
I’m not sure what the 5 statements should be though. I think it must be “next week” not “next year”, because you can’t enjoy a game if you’ve forgotten you’re playing it. Also, for it to be a game, it has to be repeatable, but if you start predicting the most important events of the year, you’ll run out very fast. On the other hand, weekly events tend to be unimportant random fluctuations. I think that’s a big problem with the whole idea.
One possible solution could be to do experiments rather than predict natural events, e.g. “On day X I will try to do Y. Will it work?”.
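As a sketch of how the score-counting could work, assuming something like the standard Brier score (all forecasters and numbers below are made up):

```python
def brier_score(probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical probabilities posted for the week's 5 statements.
forecasters = {
    "alice": [0.9, 0.2, 0.5, 0.7, 0.1],
    "bob":   [0.6, 0.6, 0.6, 0.6, 0.6],
}
outcomes = [1, 0, 0, 1, 0]  # what actually happened after the week

for name, probs in forecasters.items():
    print(name, round(brier_score(probs, outcomes), 3))
```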
There are way too many “shoulds” in this post. If anyone can have fun predicting important events at all, then it would probably be people in this forum. Can we make something like this happen? Would we actually want to participate? I’m not sure that I do.
That is not a fact, and you have done nothing to argue it, saying instead that you don’t want to talk about morality
Yes, I said it’s not a fact, and I don’t want to talk about morality because it’s a huge tangent. Do you feel that morality is relevant to our general discussion?
and also don’t want to talk about consciousness.
What?
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I failing to explain? That “pain hurts”? Give concrete examples.
I’ll need “defined” defined
In this case, a “definition” of a category is a text that can be used to tell which objects belong to that category and which don’t. No, I don’t see how this is silly.
You are happy to use 99% of the words in English, and you only complain about the ones that don’t fit your a priori ontology.
I only complain about the words when your definition is obviously different from mine. It’s actually perfectly fine not to have a word well defined. It’s only a problem if you then assume that the word identifies some natural category.
You used the word, surely you meant something by it.
Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That’s how language works. You first asked about definitions after I used the phrase “other poorly defined concepts”. Here “concept” could mean “category”.
Proper as in proper Scotsman?
Proper as not circular. I assume that, if you actually offered definitions, you’d define consciousness in terms of having experiences, and then define experiences in terms of being conscious.
It’s obvious: we need BuzzFeed to create a “which celebrities will get divorced this year” quiz (with prizes?). There is no way people will be interested in predicting next year’s GDP.
There is a common mistake in modeling humans: thinking that they are simple. Assuming that “human chose a goal X” implies “human will take actions that optimally reach X” would be silly. Likewise, assuming that humans can accurately observe their own internal state is silly. Humans have a series of flaws and limitations that obscure the simple abstractions of goal and belief. However, saying that goals and beliefs do not exist is a bit much. They are still useful in many cases and for many people.
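A toy illustration of that point, assuming a softmax (“noisily rational”) choice rule as one simple way to model an agent that has goal X but doesn’t pursue it optimally; nothing here is a claim about real human data:

```python
import math
import random

def optimal_choice(utilities):
    """A perfect optimizer: always picks the best action."""
    return max(range(len(utilities)), key=lambda i: utilities[i])

def noisy_choice(utilities, temperature=1.0):
    """An imperfect optimizer: better actions are only more likely, not guaranteed."""
    weights = [math.exp(u / temperature) for u in utilities]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(utilities) - 1

utilities = [1.0, 0.5, 2.0]  # hypothetical values of three actions
print(optimal_choice(utilities))                     # always action 2
print([noisy_choice(utilities) for _ in range(5)])   # mostly 2, but not always
```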
By the way, it sounds a little like you’re referring to some particular set of beliefs. I think naming them explicitly would add clarity.
What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions... don’t.
I’m trying to understand your definitions and how they’re different from mine.
I think it is false by Occam’s razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam’s razor or anything else to it.
I see that for you “meaningless” is a very narrow concept. But does that agree with your stated definition? In what way is “there is an invisible/undetectable unicorn in your room” not “useless for communication”?
Also, can you offer a concrete meaningless statement yourself? Preferably one in the form “X exists”.
What happens if a robot pain detector is invented tomorrow?
I can give you a robot pain detector today. It only works on robots, though. The detector always says “no”. The point is that you have no arguments for why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things; none of them work like this.
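For concreteness, here is that detector in full; the fact that it is a one-liner is exactly the point, since nothing about “robot pain” constrains it:

```python
def robot_pain_detector(robot):
    """Report whether the given robot is in pain. It always answers 'no',
    and no observation about the robot could show that answer is wrong."""
    return False

print(robot_pain_detector("any robot at all"))  # False, every time
```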
You’re right. Instead it means that he doesn’t have the willpower required to become a doctor. Presumably, this is something he didn’t know before he started school.