>And now the philosopher comes and presents their “thought experiment”—setting up a scenario in which, by
>stipulation, the only possible way to save five innocent lives is to murder one innocent person, and this murder is
>certain to save the five lives. “There’s a train heading to run over five innocent people, who you can’t possibly
>warn to jump out of the way, but you can push one innocent person into the path of the train, which will stop the
>train. These are your only options; what do you do?”
If you are looking out for yourself, it’s an easy decision, at least in the United States. There is no legal requirement to save lives, but dealing with the legal consequences of putting the innocent guy in front of the train is likely to be a real pain in the ass. Therefore, do nothing.
I agree that this isn’t the thought experiment that was originally proposed. If we take inventory of the possible cases, we have:
* If I’m a real person with real human desires, sit there and let the five guys get run over, as I suggest above.
* If I’m an AI that is uniformly compassionate and immune from the social consequences of my actions, and there’s no compelling reason to value the one above the five, then I’d probably kill one to save five.
* If I’m a person with human desires who is pretending to be perfectly compassionate, then there’s a problem to solve. In this case I prefer to unask the question by stopping the pretense.