In the Least Convenient Possible World the stranger says “Hell no!”
Now what?
If you force the outcome to rest solely on your own decision, and if that decision is clear, free, and consistent with a specific philosophy, then you must be judged according to that philosophy.
Which philosophy is valid in a Least Convenient Possible World?
If everything I do to “humanely” help the patients without committing murder against the stranger is futile,
AND if none of the patients would be willing to sacrifice themselves to save the others,
AND if the decision in this situation rested solely and entirely with me,
then I (or at least an idealized version of me) would teach the would-be donor all the necessary skills to kill and harvest me instead, to save the others.
If not even that is allowed, then yes: a utilitarian murder of the stranger would be legitimate, because you have truly exhausted every option for trying to save the patients freely and through self-sacrifice, without success.
Only when you have eliminated all humane options can you turn to the “inhumane” (I use that term loosely; in this case it would, in the end, be a humane solution), and only if it yields more utility: less global suffering, more global pleasure and freedom.
But again, this is not a realistic scenario. Realistically, it is almost certain that a humane approach would become viable before that point.