These must come the other way around—we must first think about what we anticipate, and our level of optimism must flow from that.
Not always: minds with the right preference produce surprising outcomes that couldn't themselves be anticipated, though their good quality more or less can be. (Expected Creative Surprises)
But that property is not limited to outcomes of good quality, correct?
Agreed—but that caveat doesn’t apply in this instance, does it?
It does apply; the argument you attacked is wrong for a different reason. Amusingly, I see your original comment, and the follow-up arguments for why the previous arguments are incorrect, as all wrong (though under assumptions that are not widely accepted). Let's break it down:
(1) “If I am revived, I expect to live for billions of years”
(2) “That seems wildly optimistic”
(3) “We must first think about what we anticipate, and our level of optimism must flow from that”
(3) is wrong because the general pattern of reasoning from how good the postulated outcome is to how plausible it is can be valid: a future optimized according to human preference makes good outcomes more probable. (2) is wrong because the prediction is not in fact too optimistic; quite the opposite. And (1) is wrong because it is not optimistic enough: if your concepts haven't broken down when the world is optimized for a magical concept of preference, it isn't optimized strongly enough. "Revival" and "quality of life" are status-quo natural categories, unlikely to survive strong optimization according to the whole of human preference in any recognizable form.
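To make that pattern concrete (an illustrative sketch of my own, not part of the original exchange): suppose the post-singularity world is selected by an optimizer with preference $U$ and optimization power $\beta > 0$, for instance

$$\Pr(\omega \mid \text{optimizer}) \;\propto\; \exp\big(\beta\, U(\omega)\big).$$

Conditional on such an optimizer existing, more-preferred outcomes are strictly more probable, so "this outcome would be very good" really is evidence for "this is what will happen," which is exactly the inference that (3) forbids.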
Do you think that if someone frozen in the near future is revived, that’s likely to happen after a friendly-AI singularity has occurred? If so, what’s your reasoning for that assumption?