Assume that humanity manages to create a friendly AI (FAI). Given the enormous amount of resources that each human is poised to consume until the dark era of the universe, wouldn’t the same arguments that now suggest contributing money to existential risk charities then suggest donating our resources to the friendly AI? Our resources could enable it to find a way to travel back in time, leave the universe, or hack the matrix. Anything that could avert the end of the universe and allow the FAI to support many more agents has effectively infinite expected utility.
Not really. According to my utility function, the difference in utility between FAI and extinction is enormous, and the marginal effect of my resources on the outcome is large. I think we’re in the range of ordinary probabilities, not tiny ones, and will be until we get much farther into diminishing returns. (The “it’s still important even if the probabilities are tiny” argument is a secondary one that isn’t necessary for the conclusion.) On the other hand, the difference in utility between an FAI that stays in the universe and an FAI that spreads into other universes is, from my perspective, smaller, and the marginal effect of my resources on whether that happens would be extremely tiny.
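To make the comparison concrete, here is a minimal sketch of the expected-value reasoning above, with purely hypothetical placeholder numbers (none of these figures come from the original comment): the expected value of a marginal donation is roughly the utility gap between outcomes times the probability shift the donation buys.

```python
# Minimal sketch; all numbers below are hypothetical placeholders.

def marginal_expected_value(utility_gap, probability_shift):
    """Expected value of a marginal donation:
    (utility difference between outcomes) * (probability shift the donation buys)."""
    return utility_gap * probability_shift

# Case 1: FAI vs. extinction.
# Enormous utility gap, and an "ordinary" (non-tiny) marginal probability shift.
fai_vs_extinction = marginal_expected_value(utility_gap=1e20, probability_shift=1e-6)

# Case 2: FAI that escapes the universe vs. FAI that stays.
# Smaller utility gap (per the comment), and an extremely tiny probability shift.
escape_vs_stay = marginal_expected_value(utility_gap=1e18, probability_shift=1e-30)

print(fai_vs_extinction > escape_vs_stay)  # True under these placeholder numbers
```

Under any numbers with this qualitative shape, the first donation dominates the second, which is the point of the reply.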