I would reject the offer based upon the assumption that EY should be able to find or purchase more suitable assassins, and thus I was being tested or manipulated in some ridiculous fashion.
However, it would significantly raise my estimation that those people may need to die (+>10%).
Let’s say that you’ve got military training but are currently deeply in debt and unemployed, that you know EY knows about those factors, and that inside the door of the van you spot three other people who you recognize as having similar skills and similar predicaments.
...that is so absurd that I would accept it as strong evidence that this reality is a computer simulation being tweaked for interestingness. I’d get in the car, lest I disappear for being too boring.
Something like this, then.
Ping me after the Singularity, we’ll produce the SIAI Hit Squad video game.
More likely the situation would turn out much more mundane. And with more rooftop chases.
A crack commando unit sent to prison by a military court for a crime they didn’t commit?
“What would you do if you were a completely different person?”
The me-that-is-not-me would accept the offer, based upon the evidence that three others from a similar cluster in person-space also agreed and are recognized by the me-that-is-not-me, making it likely that they have worked together previously on such extra-judicial excursions, and that the me-that-is-not-me apparently has very poor decision-making capabilities, at least to the point of being unable to find decent employment, avoid debt, or avoid the military.
I do not consider such hypotheticals useful.
The point is: what if you were asked to do something obviously immoral, but that could conceivably be justified, and that nobody else could do for you? Maybe some atrocity related to your job.
Me neither, honestly, but it’s popular enough around here I thought I’d give it a shot.
“Obviously immoral” and “conceivably justifiable” are mutually exclusive by my definitions. I would plug the act into my standard moral function, which apparently answers the question “is there a single point of moral failure” with “no,” at least for me.
What I mean is, something which would under normal circumstances be bad, but which under very specific conditions would be the best way to prevent something even worse, and where, further, demonstrating those conditions would be difficult.
Yes, that’s what I understood it to mean, and I view it as a trolley problem with error bars and “leadership influence” in the form of the request coming from EY.