Here is a moral dilemma.
Alice has quite a nice life, and believes in heaven. Alice thinks that when she dies, she will go to heaven (which is really nice), and so she wants to kill herself. You know that heaven doesn’t exist. You have a choice:
1) Let Alice choose life or death, based on her own preferences and beliefs. (death)
2) Choose what Alice would choose if she had the same preferences but your more accurate beliefs. (life)
Bob has a nasty life (and it’s going to stay that way). Bob would choose oblivion if he thought it was an option, but Bob believes that when he dies, he goes to hell. You can:
1) Let Bob choose based on his own preferences and beliefs. (life)
2) Choose for Bob based on your beliefs and his preferences. (death)
These situations feel like they should be analogous, but my moral intuitions say 2 for Alice, and 1 for Bob.
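One way to see why the cases come apart is to write each choice as an expected-utility comparison under a given belief set. A minimal sketch in Python, with all utilities and probabilities invented purely for illustration:

```python
# Expected utility of dying, given beliefs about what follows death.
# All numbers here are illustrative assumptions, not part of the scenario.

def eu_die(p_heaven, p_hell, u_heaven=100, u_hell=-100, u_oblivion=0):
    p_oblivion = 1 - p_heaven - p_hell
    return p_heaven * u_heaven + p_hell * u_hell + p_oblivion * u_oblivion

# Alice: nice life (+5 per unit time, say), certain of heaven.
alice_life = 5
print(eu_die(p_heaven=1, p_hell=0) > alice_life)  # True:  on her beliefs, she dies
print(eu_die(p_heaven=0, p_hell=0) > alice_life)  # False: on your beliefs, she lives

# Bob: nasty life (-5 per unit time, say), certain of hell.
bob_life = -5
print(eu_die(p_heaven=0, p_hell=1) > bob_life)    # False: on his beliefs, he lives
print(eu_die(p_heaven=0, p_hell=0) > bob_life)    # True:  on your beliefs, he dies
```

Swapping in your beliefs flips the verdict in both cases, which is exactly the gap between options 1 and 2.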
Some suggestions:
- Suggest that if there are things they want to do before they die, they should probably do them. (Perhaps give more specific suggestions based on their interests, or things that lots of people like but don’t try.)
- Introduce Alice and Bob. (Perhaps one has a more effective approach to life, or there are things they could both learn from each other.)
- Investigate/help investigate to see if the premise is incorrect. Perhaps Alice’s life isn’t so nice. Perhaps there are ways Bob’s life could be improved (perhaps risky ways*).
*In the Sequences, lotteries were described as ‘taxes on hope’. Perhaps they can be improved upon by:
- decreasing the payout and increasing the probability (see the sketch after this list)
- using temporary (and thus exploratory) rather than permanent payouts (see below)
- seeing if there’s low-hanging fruit in domains other than money. (Winning a lot of money might be cool. So might winning a really nice car, or digital/non-rivalrous goods.)
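On the first tweak: holding the ticket price and the house’s cut fixed pins down the expected value, so payout size and win probability trade off directly. A toy sketch (all figures invented):

```python
# Toy lottery variants at a fixed ticket price and fixed expected value.
# TICKET and HOUSE_EDGE are invented figures for illustration.
TICKET = 1.0
HOUSE_EDGE = 0.5                 # player keeps half the ticket price in expectation
EV = TICKET * (1 - HOUSE_EDGE)   # expected payout per ticket

for payout in (1_000_000, 10_000, 100):
    p_win = EV / payout          # win probability that keeps EV constant
    print(f"payout ${payout:>9,}  ->  P(win) = {p_win:.0e}")
```

The expected loss per ticket is unchanged; what varies is how often the hope actually pays off.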
This seems like responding to a trolley problem with a discussion of how to activate the emergency brakes. In the real world it would be good advice, but it totally misses the point. The point is to investigate morality on toy problems before bringing in real-world complications.
Just a thought; maybe it’s a useful perspective. It seems kind of like a game: you choose whether or not to insert your beliefs, and they choose their preferences. In this case it just turns out that you prefer life in both cases. What would you do if you didn’t know whether you had an Alice or a Bob and had to choose your move ahead of time?
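To make ‘choose your move ahead of time’ concrete: reusing the invented numbers from the sketch above, you can compare the two pure policies (always defer to their beliefs vs. always substitute your own) against a prior probability that the person in front of you is an Alice rather than a Bob, with all outcomes valued under your beliefs:

```python
# Ex-ante value of two policies, under your beliefs and invented utilities:
#   defer    -> they act on their own beliefs: Alice dies (0), Bob lives (-5)
#   override -> you act on your beliefs:       Alice lives (+5), Bob dies (0)

def policy_value(p_alice, alice_outcome, bob_outcome):
    return p_alice * alice_outcome + (1 - p_alice) * bob_outcome

for p_alice in (0.0, 0.25, 0.5, 0.75, 1.0):
    defer = policy_value(p_alice, 0, -5)
    override = policy_value(p_alice, 5, 0)
    print(f"P(Alice)={p_alice:.2f}  defer={defer:+.2f}  override={override:+.2f}")
```

Under these made-up numbers, overriding strictly dominates at every prior, so whatever drives the asymmetric intuition (2 for Alice, 1 for Bob) isn’t captured by this toy accounting.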