Contextual Evil
Evil is not just a goodness minimization problem.
For example, considering the trolley problem, what is more evil—doing nothing, and thereby letting five people die, or acting to change the course, thereby killing one person? Notice the use of the passive voice there, which may help to illustrate why it isn’t immediately obvious that letting something bad happen is more evil than causing something bad to happen.
I judge a person’s morality by whether their existence makes the world better or worse, not by whether their existence makes the world better or worse compared to an ideal alternative. If you are a switch operator at the trolley fork, and your job is literally to make these kinds of decisions, then you are replacing “the everyman”—your morality is compared to that, and if the everyman would act to save five lives by sacrificing one, then you do achieve some evil by not doing so. (The everyman, for these purposes, is the statistical smear of alternatives supposing you did not exist, or whoever would have your job if you didn’t.) You aren’t just there; you are substituting for the everyman, who would otherwise have your job. Your existence stands in for the statistical default.
However, if you’re just a bystander who randomly happens to be near the switch, then there is zero moral judgment for doing nothing, regardless of what anyone else would do, because the statistical default is an empty space where you stand. But if you kill multiple people to save one, then you’ve probably succeeded at making the world a more evil place, assuming you didn’t kill some great villain.
Granted, the situation is more complicated than this; I’m supposing all potential victims of the trolley are equally responsible for the situation. If the one guy is there to repair the tracks and is supposed to be there, while the multiple people are there deliberately, trying to get you to switch the trolley because they want the one guy dead, then the moral calculus gets a lot more complicated.
Note that I am leaving the correct moral calculus up to the person making the decision. The important point is that morality should properly be viewed as contextual (alternatively, as a kind of opportunity-cost externality of existence rather than a simple value), not that the correct morality is some particular kind of morality. That is, the rules in this case are in fact deontology-compatible. If it doesn’t seem so at first, remember that law is deontological in nature, and consider that many aspects of law are based on a comparison to what a reasonable person would say or do in that situation (consider medical malpractice, for example: a doctor doesn’t get in trouble for doing what any other doctor would do in the same situation).
For evil maximization purposes, the same kind of evaluation applies. If you are the switch operator, regardless of which choice you think is the evil one, you don’t get “evil points” for doing what the default alternative, or a reasonable statistical approximation of it, would do in that situation; you didn’t actually make the situation worse. A villain is not merely the absence of a hero; evil is not merely the absence of good. You don’t get to be a supervillain merely by ignoring starving people overseas; you don’t get credit for the evil things you didn’t do.
No, if you want to truly be evil, you have to make the situation worse: You have to switch the trolley to kill more people. A true supervillain doesn’t just take credit for the way the world already is.