I can’t deny that cryonics would strike me as an excellent deal if I believed that, but that seems wildly optimistic.
This seems an odd response. I’d understand a response that said “why on Earth do you anticipate that?” or one that said “I think I know why you anticipate that, here are some arguments against...”. But “wildly optimistic” seems to me to make the mistake of offering “a literary criticism, not a scientific one”—as if we knew more about how optimistic a future to expect than what sort of future to expect. These must come the other way around—we must first think about what we anticipate, and our level of optimism must flow from that.
Not always: minds with the right preference produce surprising outcomes that couldn’t be anticipated in detail, though their good quality more or less can be. (Expected Creative Surprises)
But that property is not limited to outcomes of good quality, correct?
Agreed—but that caveat doesn’t apply in this instance, does it?
It does apply; the argument you attacked is wrong for a different reason. Amusingly, I see your original comment and each of the follow-up arguments for the incorrectness of the previous arguments as all wrong (though under assumptions that are not widely accepted). Let’s break it up:
(1) “If I am revived, I expect to live for billions of years”
(2) “That seems wildly optimistic”
(3) “We must first think about what we anticipate, and our level of optimism must flow from that”
(3) is wrong because reasoning from how good a postulated outcome is to its plausibility is, in general, a valid pattern. (2) is wrong because the anticipation is not in fact too optimistic; quite the opposite. And (1) is wrong because it is not optimistic enough: if your concepts haven’t broken down when the world is optimized for a magical concept of preference, it isn’t optimized strongly enough. “Revival” and “quality of life” are status-quo natural categories that are unlikely to survive strong optimization according to the whole of human preference in any recognizable form.
Do you think that if someone frozen in the near future is revived, that’s likely to happen after a friendly-AI singularity has occurred? If so, what’s your reasoning for that assumption?
Sure, I’m talking about heuristics. I don’t think that’s a mistake, though, in a situation with so many unknowns. I agree that my comment above is not a counter-argument per se; it’s just explaining why your statement goes over my head.
Since you prefer specificity: Why on Earth do you anticipate that?