I had actually been wondering about this recently. People define a psychopath as someone with no empathy, and then jump to “therefore, they have no morals.” But it doesn’t seem impossible to value something or someone as a terminal value without empathizing with them. I don’t see why you couldn’t even be a psychopath and an extreme rational altruist, though you might not enjoy it. Is the word “psychopath” being used two different ways (meaning a non-empathic person and meaning a complete monster), or am I missing a connection that makes these the same thing?
Well, it doesn’t establish that induction is always valid, so I guess we might not really be disagreeing. But, pragmatically, everyone basically has to assume that it usually works, or is likely to work in whatever the particular case is. I think it’s a good enough heuristic to be called a rational principle that people already have down.
I’m sure there are philosophers who say they don’t, but I guarantee you they act as if they do. Even if they don’t know anything about electronics, they’d still expect the light to come on when they flip the switch.
Standard young-Earther responses, taken from when I was a young-Earth creationist.
Round Earth: Yes. You sort of have to stretch to interpret the Bible as saying the Earth is round or flat, so it’s not exactly a contradiction. Things like “the four corners of the Earth” are obvious metaphor.
Animals on the boat: The “kinds” of animals (Hebrew “baramin”) don’t correspond exactly to what we call species. There are fewer animals in the ark than 2*(number of modern species); this is considered to be a sufficient answer even though it probably isn’t. I don’t know exactly what level of generality the baramin are supposed to be; I guess it depends on how much evolution the particular creationist is willing to accept. They’ll typically use the example of dogs and wolves being the same “kind,” but if that’s the level of similarity we’re talking about then there’ll still be an awful lot of kinds.
Amount of water: The Earth used to be a lot smoother. Shallower oceans, lower mountains, etc. So it could be covered with a more reasonable amount of water. We know this because in the genealogies some guy named his son after the fact that “in his day the Earth was divided.” (The word for divided, Peleg, means earthquake or cataclysm or something. This verse also doubles as tectonic plates being moved around.)
I don’t agree with these, but thought that to avoid strawmanning I should post the responses that I would have used. Not that they’re much better than the straw version, but this is the kind of thing that would have been said by at least one YEC.
Ideally, how people feel about things would be based in real-world consequences, and a chance of someone being not dead is usually strictly better than the alternative. But I can see how, for a small enough chance of resurrection, that benefit could possibly be outweighed by the cost to other people of holding on to the hope. I still hope that isn’t what’s going on in this case, though. That would require people to be feeling “I’d rather have this person permanently dead, because at least then I know where I stand.”
That’s...that’s terrible. That it would feel worse to have a chance of resurrection than to have closure. It sounds depressingly plausible that that’s people’s true rejection, but I hope it’s not.
Religion doesn’t have the same problem, and in my experience it’s because of the certainty. People believe themselves to be absolutely certain in their belief in the afterlife. So there’s no closure problem, because they simply know that they’ll see the person again. If you could convince people that cryonics would definitely result in them being resurrected together with their loved ones, then I’d expect this particular problem to go away.
And I’m not sure it’s a mistake. If you’re getting your information in a context where you know it’s meant completely literally and nothing else (e.g., Omega, lawyers, Spock), then yes, it would be wrong. In normal conversation, people may (sometimes but not always; it’s infuriating) use “if” to mean “if and only if.” As for this particular case, somervta is probably completely right. But I don’t think it’s conducive to communication to accuse people of bias for following Grice’s maxims.
Algebra.
Of each other, I think it means.
Of the set of all possible actions that you haven’t denied doing, you’ve done only a minuscule percentage.
Of the times that you deny having done something, you lie some non-trivial percent of the time.
Therefore, your denial is evidence of guilt.
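Spelled out with made-up numbers (every value here is mine, purely for illustration, not part of the original argument), the update looks something like this:

```python
# Toy version of the argument above; all numbers are placeholders.

# Step 1: of all possible actions you haven't denied, you've done almost none.
p_guilty_given_no_denial = 1e-6

# Step 2: of the things you have denied, you were lying some non-trivial
# fraction of the time.
p_guilty_given_denial = 0.05

# Step 3: since nearly every possible action sits in the "not denied" bucket,
# the first number is essentially your base rate of guilt. Hearing the denial
# moves you from the first probability to the second, so the denial raises the
# probability of guilt by a large factor.
print(p_guilty_given_denial / p_guilty_given_no_denial)  # 50000.0
```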
This post almost convinced me. I was thinking about it in terms of a similar algorithm, “one-box unless the number is obviously composite.” Your argument convinced me that you should probably one-box even if Omega’s number is, say, six. (Even leaving aside the fact that I’d probably mess up more than one in a thousand questions that easy.) For the reasons you said, I tentatively think that this algorithm is not actually one-boxing and is suboptimal.
But the algorithm “one-box unless the numbers are the same” is different. If you were playing the regular Newcomb game, and someone credibly offered you $2M if you two-box, you’d take it. More to the point, you presumably agree that you should take it. If so, you are now operating on an algorithm of “one-box unless someone offers you more money.”
In this case, it’s just like they are offering you more money: if you two-box, it’s composite 99.9% of the time, and you get $2M.
The one thing we know about Omega is that it picks composites iff it predicts you will two-box. In the Meanmega example, it picks the numbers so that you two-box whenever it can, which just means whenever the lottery number is composite. So in all those cases, you get $2M. That you would have gotten anyway. Huh. And $1M from one-boxing if the lottery number is prime. Whereas, if you one-box, you get $1M 99.9% of the time, plus a lot of money from the lottery anyway. OK, so you’re completely right. I might have to think about this more.
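Making my own arithmetic explicit, here is a toy expected-value comparison. The payoffs and probabilities (a $2M lottery that pays on a composite number, $1M in Omega’s box, 99.9% prediction accuracy, an arbitrary chance of the lottery number being composite) are placeholders I’m filling in, not anything from the original problem:

```python
# Toy comparison of the two strategies under Meanmega, with placeholder numbers.
p_composite = 0.5       # placeholder: chance the lottery number is composite
p_correct = 0.999       # Omega's prediction accuracy
LOTTERY, BOX = 2_000_000, 1_000_000

# Strategy A: "one-box unless the numbers are the same."
# Meanmega makes the numbers match exactly when the lottery number is composite,
# so then you two-box (the lottery pays either way, Omega's box is empty) and
# you one-box the rest of the time.
ev_match_twobox = p_composite * LOTTERY + (1 - p_composite) * p_correct * BOX

# Strategy B: always one-box. You still win the lottery when it's composite,
# and you get Omega's $1M (almost) every time.
ev_always_onebox = p_composite * LOTTERY + p_correct * BOX

print(ev_match_twobox, ev_always_onebox)  # B beats A by p_composite * ~$1M
```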
Assuming Manfred is completely right, how many non-identical numbers should it take before you decide you’re not dealing with Meanmega and can start two-boxing when they’re the same?
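For what it’s worth, here is a rough Bayesian sketch of that question; every number in it (the prior on Meanmega, the per-round chance of a match under each hypothesis, the threshold) is a placeholder of mine:

```python
# Rough sketch: how many non-matching rounds before Meanmega looks unlikely
# enough that two-boxing on a match seems safe? All numbers are placeholders.
prior_meanmega = 0.5
p_match_mean = 0.5     # under Meanmega, the numbers match whenever the lottery number is composite
p_match_plain = 1e-6   # under an ordinary Omega, a match is a rare coincidence
threshold = 0.001      # how improbable Meanmega must be before I'd risk it

odds = prior_meanmega / (1 - prior_meanmega)
rounds = 0
while odds / (1 + odds) > threshold:
    # Each observed non-match multiplies the odds by its likelihood ratio,
    # which favors "ordinary Omega" since Meanmega produces matches more often.
    odds *= (1 - p_match_mean) / (1 - p_match_plain)
    rounds += 1

print(rounds)  # about 10 with these placeholder numbers
```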
As cool as that term sounds, I’m not sure I like it. I think it’s too strongly reinforcing of ideas like superiority of rationalists over non-rationalists. Even in cases where rationalists are just better at things, it seems like it’s encouraging thinking of Us and Them to an unnecessary degree.
Also, assuming there is a good enough reason to convince me that the term should be used, why is transhumanism-and-polyamory the set of powers defining the non-muggles? LessWrong isn’t that overwhelmingly poly, is it?
Not very tempted, actually. In this hypothetical, since I’m not feeling empathy the murder wouldn’t make me feel bad and I get money. But who says I have to decide based on how stuff makes me feel?
I might feel absolutely nothing for this stranger and still think “Having the money would be nice, but I guess that would lower net utility. I’ll forgo the money because utilitarianism says so.” That’s pretty much exactly what I think when donating to the AMF, and I don’t see why a psychopath couldn’t have that same thought.
I guess the question I’m getting at is, can you care about someone else and their utility function without feeling empathy for them? I think you can, and saying you can’t just boils down to saying that ethics are determined by emotions.