A big chunk of my uncertainty about whether at least 95% of the future’s potential value is realized comes from uncertainty about “the order of magnitude at which utility is bounded”. That is, if unbounded total utilitarianism is roughly true, I think there is a <1% chance in any of these scenarios that >95% of the future’s potential value would be realized. If decreasing marginal returns in the [amount of hedonium → utility] conversion kick in fast enough that 10^20 slightly conscious humans on heroin for a million years yield 95% of max utility, then I’d probably give >10% probability to strong utopia even conditional on building the default superintelligent AI. Both possibilities seem significantly likely to me, which causes my odds to vary much less between the scenarios.
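To make the contrast concrete, here is a minimal sketch; the saturating functional form and the scale parameter $x_0$ are purely illustrative choices of mine, not something the framing above commits to. Suppose utility as a function of total hedonium $x$ saturates as

$$U(x) = U_{\max}\left(1 - e^{-x/x_0}\right).$$

If $x_0$ is small relative to the heroin scenario’s total (say $x_0 \approx 10^{20}$ conscious-life-years against $x \approx 10^{26}$), then $U(x)/U_{\max} \approx 1$ and the 95% bar is easily cleared. Under unbounded total utilitarianism, by contrast, $U$ is linear in $x$, so realizing >95% of the attainable maximum requires converting roughly 95% of the attainable resources near-optimally, which is a much stronger condition.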
This assumes that “the future’s potential value” refers to something like the (expected) utility that would be attained by the action sequence recommended by an oracle giving humanity optimal advice according to our CEV. If that’s a misinterpretation, or a bad framing more generally, I’d be happy to think again about what the better question is. I would guess that my disagreement with the probabilities is greatly reduced at the level of the underlying empirical outcome distribution.