Just from knowing that A and B are some randomly chosen fruits, I can’t make a value judgment and declare that I prefer A to B, because my states of knowledge about them are identical. But if it’s known that A=apple and B=kiwi, then I may well prefer A to B. Likewise, it’s not possible to have a preference between two people based on identical states of knowledge about them, but it is possible if we know more. People generally prefer themselves to relatives and friends, relatives and friends to similar strangers, and similar strangers to distant strangers.
The trolley problem isn’t about what people generally prefer, but about exploring moral principles and intuitions with a thought experiment. Moral systems, with the exceptions I noted, generally do not prefer the agent. Beyond the agent, they usually do prefer close people to distant ones (Christianity being an exception here), but in the trolley problem, all of the people are distant to the agent.
What’s that about, if not what people prefer?
I already pointed out that most moral principles do not specially favour the agent, while most people’s preferences do. Nobody wants to be the one who dies that others may live, yet some people have made that decision. Whatever moral principles and intuitions are, therefore, they are something different from “what people prefer”.
But I am fairly sure you know all this already, and I am at a loss to see where you are going with this.
Was that a good decision (not a rhetorical question), to be the one who dies so that others may live? Who judges? I understand that the aggregated preference of humanity has a neutral point of view, and so in any given situation prefers the lives of 5 given normal people to the life of 1 given normal person. But is there any good reason to be interested in this valuation when making your own decisions?
Note that having a preference for your own life over the lives of others could still lead to decisions similar to those you’d expect from a neutral-point-of-view preference. Through logical correlation of the decisions made by different people, your decision to follow a given principle makes other people follow it in similar situations, which might benefit you enough for the causal effect of (say) losing your own life to be outweighed by the acausal effect of having your life saved counterfactually. This would be exactly the case where one personally prefers to die so that others may live (so that others could’ve died so that you could’ve lived). It’s not all about preference: even perfectly selfish agents would choose to self-sacrifice, given some assumptions.
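To put rough numbers on that trade-off, here is a toy sketch (the setup and figures are my own illustration, not anything asserted above): suppose a side-track case with five people in danger and a sixth who can die in their place, suppose you are equally likely to occupy any of the six roles, and suppose, very strongly, that every agent running your decision procedure chooses the same policy you do. A purely selfish agent is then just comparing survival probabilities under two universal policies:

```python
# Toy model of the acausal self-sacrifice argument (illustrative assumptions only):
# five people die unless someone intervenes; a sixth person can die in their place.
# Assume you are equally likely to be any of the six, and that every agent running
# your decision procedure adopts the same policy you do.

ON_TRACK = 5      # people who die unless someone self-sacrifices
SACRIFICER = 1    # the person who can die instead, saving the five
TOTAL = ON_TRACK + SACRIFICER

def survival_probability(universal_self_sacrifice: bool) -> float:
    """Chance that a randomly placed, purely selfish agent survives under each policy."""
    if universal_self_sacrifice:
        # You die only in the 1-in-6 case where you are the potential sacrificer.
        return ON_TRACK / TOTAL
    # Otherwise you die whenever you are one of the five in danger.
    return SACRIFICER / TOTAL

print(survival_probability(True))   # ~0.83: everyone correlated with you sacrifices
print(survival_probability(False))  # ~0.17: nobody does
```

Under those (strong) assumptions the selfish calculation favours the universal self-sacrifice policy, 5/6 versus 1/6; weaken the correlation assumption and the advantage shrinks with it.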
Acausal relationships between human agents are astronomically overestimated on LW.
That was a normative, not a descriptive note. If all people acted according to a better decision theory, their actions would (presumably—I still don’t have a good understanding of this) look like having a neutral point of view, despite their preferences remaining self-centered. Of course, if most people act as they actually do, then any given person won’t have enough acausal control over others.
Fair enough. The only small note I’d like to add is that the phrase “if all people acted according to a [sufficiently] better decision theory” does not seem to quite convey how distant from reality—or just realism—such a proposition is. It’s less in the ballpark of “if everyone had IQ 230” than in that of “if everyone uploaded and then took the time to thoroughly grok and rewrite their own code”.
I don’t think that’s true, as people can be as simple (in given situations) as they wish to be, thus allowing others to model them, if that’s desirable. If you are precommitted to choosing option A no matter what, it doesn’t matter that you have a brain with a hundred billion neurons; you can be modeled as easily as a constant answer.
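A minimal sketch of the “constant answer” point, assuming (hypothetically) that a predictor only needs the agent’s input-output behaviour rather than its internals:

```python
# Minimal sketch of the "constant answer" point (hypothetical, illustrative only):
# a predictor needs the agent's input-output behaviour, not its internal complexity.

from typing import Callable

Agent = Callable[[str], str]  # maps a situation description to a chosen option

def precommitted_agent(situation: str) -> str:
    """An agent precommitted to option A: a constant function, whatever its internals."""
    return "A"

def predict(agent: Agent, situation: str) -> str:
    """Modelling such an agent amounts to evaluating its (constant) policy."""
    return agent(situation)

assert predict(precommitted_agent, "trolley case, five on the main track") == "A"
assert predict(precommitted_agent, "any other situation") == "A"
```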
You cannot precommit “no matter what” in real life. If you are an agent at all—if your variable appears in the problem—that means you can renege on your precommitment, even if it means a terrible punishment. (But usually the punishment stays on the same order of magnitude as the importance of the choice, allowing the choice to be non-obvious—possibly the rulemaker’s tribute to human scope insensitivity. Not that this condition is even that necessary since people also fail to realise the most predictable and immediate consequences of their actions on a regular basis. “X sounded like a good idea at the time”, even if X is carjacking a bulldozer.)
This is not a problem of IQ.