This does seem to be a useful insight, though I don’t think it’s anywhere near so precise as that.
Personally, the Knightian uncertainty completely dominates my timeline estimates. If someone asks for the year in which the cumulative probability reaches some threshold, then firstly that sounds like a confusion of terms, and secondly I have, or can generate (as described), a whole bunch of candidate probability distributions with nothing usable as a weighting attached to each. Any answer I give is going to be pointless and subject to the whims of whatever arbitrary weightings I assign in the moment, and those weightings are likely to be influenced by the precise wording of the question and probably by what I ate for breakfast.
It’s not going to be the worst case (that’s something like “I am already a simulation within a superintelligent AGI, and any fact of the matter about when it happened is meaningless because it never occurred in my subjective universe at all”). It’s not going to be the best case either (that’s something like “AGI is not something that humans can create, for reasons we don’t yet know”). Note that both of these are cases of genuine uncertainty: hypotheses that cannot be assigned any useful probability, since there is no precedent and no current evidence for or against them.
It’s going to be something in the interior, but where exactly in the interior will be arbitrary, and asking the question a different way will likely shift where it lands.
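To make the arbitrariness concrete, here is a minimal toy sketch. Everything in it is an invented placeholder of mine (the candidate distributions, their parameters, the weightings), not anyone’s actual forecast; it only illustrates how much the threshold-crossing year depends on weights that nothing pins down.

```python
# Toy sketch: several made-up candidate distributions over "year of AGI",
# and the year at which the weighted mixture's cumulative probability first
# reaches 50% under a few different, equally arbitrary weightings.
import numpy as np
from scipy import stats

years = np.arange(2025, 2201)

# Invented candidate distributions (placeholders, not real forecasts).
candidates = [
    stats.norm(loc=2040, scale=8),       # aggressive-extrapolation story
    stats.norm(loc=2075, scale=25),      # slow-progress story
    stats.uniform(loc=2025, scale=175),  # "no idea, spread it out" story
]

def threshold_year(weights, threshold=0.5):
    """First year at which the weighted mixture CDF reaches `threshold`."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    cdf = sum(w * d.cdf(years) for w, d in zip(weights, candidates))
    idx = np.searchsorted(cdf, threshold)
    return years[idx] if idx < len(years) else None

# Same question, answered under different arbitrary weightings.
for w in [(1, 1, 1), (3, 1, 1), (1, 3, 1), (1, 1, 3)]:
    print(w, threshold_year(w))
```

With these particular made-up inputs the 50% year moves by decades as the weights change; different toy inputs would move it differently, which is rather the point.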