Question: “Would you pay $1 to stop the torture of 1,000 African children whom you will never meet and who will never impact your life other than through this dollar?”
If the answer is yes then you care about people with whom you have no causal connection. Why does it matter if this lack of connection is due to time rather than space?
The example I read somewhere is: You have a terminal disease and you know you’re going to die in two weeks. Would you press a button that gives you $10 now but will kill one billion people in one month?
If the being making the offer has done sufficient legwork to convince me of the causal connection, then I get a better warm-fuzzy-per-dollar return than anything else going.
> Question: “Would you pay $1 to stop the torture of 1,000 African children whom you will never meet and who will never impact your life other than through this dollar?”
> If the answer is yes then you care about people with whom you have no causal connection. Why does it matter if this lack of connection is due to time rather than space?
What if they answer “no”? How would you convince them that “yes” is the better answer?
Edit: see also here
If he answered “no”, I would stop interacting with him. See here.
> The example I read somewhere is: You have a terminal disease and you know you’re going to die in two weeks. Would you press a button that gives you $10 now but will kill one billion people in one month?
> If the being making the offer has done sufficient legwork to convince me of the causal connection, then I get a better warm-fuzzy-per-dollar return than anything else going.
Mutatis mutandis, the survival of sentient life, then.
I am confused.
> Can you not get warm fuzzies from assurances as to what occurs after your death?
Ah, I see. Yes. Since other people care about what happens after I die, such assurances are useful for signalling that I am a useful ally.