Here’s another word problem for you.
There is a disease—painful, but not usually life threatening—that is rapidly becoming a pandemic. Medical science is not going to be able to cure the disease for the next several decades, which means that many millions of people will have to endure it, and a few dozen will probably die. You can find a cure for the disease, but to do so you’ll have to perform agonizing, ultimately lethal, experiments on a young and healthy human subject.
Do you do it?
I note the answer to this seems particularly straightforward if the few dozen who would probably die would also have been young and healthy at the time. It becomes even more convenient if the subject is a volunteer, and/or if the experimenter (possibly with a staff of non-sentient robot record-keepers and essay-compilers, rather than humans) performed the experiments on themselves.
(I personally have an extremely strong desire to survive eternally, but I understand there are, and have historically been, people who would willingly risk death, or even die for certain, in order to save others. Perhaps if sacrificing myself were the only way to save my sister, say, though that is a somewhat unfair situation to suggest as relevant. Again, it is tempting to just use a less egocentric volunteer instead, if one is available.) (This is results-based reasoning, rather than idealistic or cautious action-based reasoning. Particularly given the likely public backlash, I can understand why a governmental body would choose to keep its hands as clean as possible and allow a massive tragedy rather than stain them with a sin. Hmm.)
Assume the least-convenient possible world. It’s not like this one is fair either...
Indeed. *nods*
If sacrificing myself were necessary to save (or at least to hope to save) the person mentioned, I hope that I would act consistently with my current perception of my likely actions and go through with it, though I do not claim complete certainty about what I would actually do.
If those who would die from the hypothetical disease were the soon-to-die-anyway (the very elderly or infirm), I would likely choose to spend my time on more significant areas of research (life extension, or more fatal and more painful diseases).
If all other significant areas had been dealt with, or were being adequately dealt with, perhaps rendering the disease the only remaining ailment that humanity suffered from, I might carry out the research for the sake of completeness. I might also simply wait a few decades, depending on whether the disease would be cured in that time even without my intervention.
A problem here is that the more inconvenient I make one decision, the more convenient I make the other. If I jump ahead to a hypothetical case where the choices were completely balanced either way, I might just flip a coin, since I presumably wouldn't care which one I took.
Then again, the stacking could be chosen such that no matter which option I took it would be emotionally devastating… though that case, conveniently (hah), comprises such a slim fraction of all possibilities that I gain by assuming there is always some aspect that would make a difference, or that could be exploited in some way: if there is, I can find it and make a sound decision, and if there isn't, my position does not in fact change (by the nature of the balanced setup).
Stepping back and considering the least-convenient-possible-world argument itself, I notice that its primary function may be to get people to accept two options as both conceivable in different circumstances, rather than rejecting one on technicalities. If I already acknowledge that I would be inclined to make a different choice depending on the circumstances, am I freed from that application, I wonder?