Another way of saying this (I think—Vladimir_M can correct me):
You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.
I don’t mean to imply that the kind of person who would kill the fat man would also kill for profit. The only observation that’s necessary for my argument is that killing the fat man—by which I mean actually doing so, not merely saying you’d do so—indicates that the decision algorithms in your brain are sufficiently remote from the human standard that you can no longer be trusted to behave in normal, cooperative, and non-dangerous ways. (Which others then correctly perceive when they find you scary.)
Now, to be more precise, there are actually two different issues there. The first is whether pushing the fat man is compatible with otherwise cooperative and benevolent behavior within the human mind-space. (I’d say even if it is, the latter is highly improbable given the former.) The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds. That’s an extremely deep and complicated problem of game and decision theory, which is absolutely crucial for the future problems of artificial minds and human self-modification, but has little bearing on the contemporary problems of ideology, ethics, etc.
It seems like you can make similar arguments for virtue ethics and acausal trade.
If another agent is able to simulate you well, it helps them coordinate with you: they know what you will do without any communication. When you can’t make good predictions of what other people will do, it takes waaay more computation to figure out how to get what you want, and whether it’s compatible with them getting what they want.
By making yourself easily simulated, you open yourself up to ambient control, and by not being easily simulated you’re difficult to trust. Lawful Stupid seems to happen when you have too many rules enforced too inflexibly, and often (in literature) other characters can take advantage of that really easily.
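A minimal toy sketch of that point (my own illustration, not anything from the thread): assume a one-shot Prisoner’s Dilemma payoff matrix and an agent that is “transparent” in the sense that its counterpart can run its decision procedure. The payoffs, agent names, and the depth-limited mutual simulation below are all illustrative assumptions, not a claim about how real minds work.

```python
# Toy illustration: mutual predictability is what makes cheap coordination possible.
# Payoffs, names, and the depth-limited simulation are illustrative assumptions.

PAYOFFS = {  # (my move, their move) -> my payoff, one-shot Prisoner's Dilemma
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def transparent_agent(opponent, depth=2):
    """Cooperate iff a depth-limited simulation of the opponent predicts cooperation."""
    if depth == 0:
        return "C"  # bottom out optimistically so the mutual simulation terminates
    predicted = opponent(transparent_agent, depth - 1)  # run the other agent's decision procedure
    return "C" if predicted == "C" else "D"

def opaque_agent(opponent=None, depth=0):
    """An agent whose reasoning can't be inspected; here it happens to defect."""
    return "D"

def play(a, b):
    move_a, move_b = a(b), b(a)
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

print(play(transparent_agent, transparent_agent))  # (3, 3): simulation finds cooperation
print(play(transparent_agent, opaque_agent))       # (1, 1): no predictability, no trust
```

Two transparent agents reach the cooperative outcome simply by simulating each other, with no communication; against the agent whose reasoning can’t be checked, the transparent agent has nothing to trust and both end up at the defect-defect payoff.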
The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds.
But we normally seem to see “one death as a tragedy, a million as a statistic” due to scope insensitivity, availability bias, etc.
Why not trust that people who deal only with the numbers remain normal when they implement cold-blooded utilitarianism? Why not have many important decisions made abstractly by such people? Is wanting to make decisions this way, remote from the consequences and up a few meta-levels, a barbaric thing to advocate?
During the 20th century some societies have attempted to implement more-or-less that policy. The results certainly justify the adjective barbaric.
But most of the people remained relatively normal throughout. So virtue ethics needs a huge patch to approximate consequentialism.
You are providing a consequentialist argument for a base of virtue ethics plus making sure no one makes abstract decisions, but I don’t see how preventing people from making abstract decisions emerges naturally from virtue ethics at all.
I agree with your comment in one sense and was trying to imply it: the bad results are not prevented by virtue ethics alone. On the other hand, you have provided a consequentialist argument that I think is valid and that I was hinting at.
Spreading this meme, even by a believing virtue ethicist, would seem to reduce the lifespan of fat men with bounties on their heads much faster than it would spare the crowds tied to the train tracks.
U: “Ooo look, a way to rationalize killing for profit!”
VE: “No no no, the message is that you shouldn’t kill the fat man in either ca-”
U: “Shush you!”
Of course, one may want to simply be the sort who tells the truth, consequences to fat men be damned.
You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.
This seems obviously false.