May I ask what reasoning/evidence led you to that conclusion? I’m sort of viewing it as a trolley problem: I can either kill my immortal self, or I can let 28 other lives end that much sooner than they otherwise would.
(I’m also realizing my conclusion is probably “I don’t do THAT much charity to begin with, so let’s just go ahead and sign up, and we can re-route the insurance payout if we suddenly become more philanthropic in the future.”)
Look at it in terms of years gained instead of lives lost.
Saving 28 lives gives them each 50 years at best until they die, assuming none of them gains immortality. That’s 1400 man-years gained. Granting immortality to one person yields infinite years (in theory); if you live longer than 1400 years, then you’ve done the morally right thing by betting on yourself.
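A minimal sketch of that break-even arithmetic, assuming only the figures above (28 lives, 50 years each); the function name is just illustrative:

```python
# Break-even calculation for the argument above.
# Assumptions: 28 lives saved, ~50 remaining years each, and that the only
# question is how many extra years you personally get from cryonics.

LIVES_SAVED = 28
YEARS_PER_LIFE = 50

years_forgone = LIVES_SAVED * YEARS_PER_LIFE  # 1400 man-years

def betting_on_yourself_pays_off(your_extra_years: float) -> bool:
    """True once your own gained years exceed the man-years forgone."""
    return your_extra_years > years_forgone

print(years_forgone)                         # 1400
print(betting_on_yourself_pays_off(1000))    # False
print(betting_on_yourself_pays_off(10_000))  # True
```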
Additionally, money spent on cryonics isn’t thrown into a hole. A significant portion goes toward making cryonics more effective and cheaper for others to buy. Rich Americans need to buy it as much as possible while it’s still expensive, so that those 28 unfortunates ever have a chance at immortality.
The game theory makes it non-obvious. Consider the benefits of living in a society where people are discouraged from doing this kind of abstract consequentialist reasoning.
May I ask what reasoning/evidence led you to that conclusion?
Evidence is the wrong question, and reasoning not much better. Unless, of course, you mean “evidence and reasoning about my own arbitrary preferences”. In which case my personal testimony is strong evidence, and even stronger for me, given that I know I am not lying.
I prefer immortality over saving 28 lives immediately. I also like the colour “blue”.
What epistemic algorithms would you run to discover more about your arbitrary preferences and to make sure you were interpreting them correctly? (Assuming you don’t have access to an FAI.) For example, what kinds of reflection/introspection or empiricism would you do, given your current level of wisdom/intelligence and a lot of time?
It’s a good question, and ruling out the FAI takes away my favourite strategy!
One thing I consider is how my verbal expressions of preference will tend to be biased. For example, if I went around saying “I’d willingly give up immortality to prevent 28 strangers from starving”, then I would triple-check that belief to see whether it was an actual preference and not a pure PR soundbite. More generally, I try to bring the question down to the crude level of “what do I want?”, eliminating distracting thoughts about how things ‘should’ be. I visualize possible futures and simply pick the one I like more.
Another question I like to ask myself (and frequently find myself asked by other people while immersed in SIAI-affiliated culture) is “what if an FAI or Omega told you that your actual extrapolated preference was X?”. If I find myself seriously doubting the FAI, that is rather significant evidence. (And not an unreasonable position, either: the doubt is correctly directed at the method of extrapolating preferences instilled by the programmers or the Omega postulator.)