I feel 4. can be explained by humans not having probability distributions on future events but something more like infradistributions/imprecise distributions. This is a symptom of a larger problem of Bayesian dogmatism that has taken hold of some parts of LW/rationalists.
Let me explain how this works:
To recall: an imprecise distribution I is the (closed) convex hull of a collection of probability distributions {p_i}. In other words, it combines ‘Knightian’ uncertainty with probabilistic uncertainty.
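For concreteness, the standard lower/upper-probability notation (my addition, not the commenter's) makes the two kinds of uncertainty explicit: for an event A,

$$\underline{P}(A) = \min_{p_i \in I} p_i(A), \qquad \overline{P}(A) = \max_{p_i \in I} p_i(A),$$

where the probabilistic uncertainty lives inside each individual p_i and the gap between the two envelopes is the Knightian part.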
If you ask people for the year with a 10%, 50%, 90% chance of AGI happening, you are implicitly asking for the worst case: i.e. the year by which there is at least one probability distribution p_i ∈ I such that p_i(AGI by then) = 10%, 50%, 90%.
On the other hand, when you ask whether the event will happen for certain within 10, 20, 50 years, you are asking the dual ‘best case’ question: for ALL probability distributions p_i ∈ I, what is p_i(AGI in 10y), p_i(AGI in 20y), p_i(AGI in 50y), and then taking the minimum.
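To make the contrast concrete, here is a small sketch (my own illustration, not something from the comment above) with a toy credal set of AGI-arrival CDFs. The curves and numbers are entirely made up; the point is only that the two question framings read off different envelopes of the same set:

```python
# Sketch: a toy credal set of AGI-arrival CDFs and the two "envelope" readings
# described above. All curves are arbitrary placeholders, not anyone's forecast.
import numpy as np

years = np.arange(2025, 2101)

# Three hypothetical cumulative distributions over AGI arrival year,
# standing in for the set {p_i}.
cdfs = np.array([
    1 - np.exp(-(years - 2024) / 8.0),           # fast scenario
    1 - np.exp(-(years - 2024) / 30.0),          # slow scenario
    0.6 * (1 - np.exp(-(years - 2024) / 15.0)),  # scenario where AGI may never arrive
])

upper = cdfs.max(axis=0)  # "at least one p_i assigns this much probability by year y"
lower = cdfs.min(axis=0)  # "every p_i assigns at least this much probability by year y"

def first_year(envelope, threshold):
    """Earliest year at which the envelope crosses the given cumulative probability."""
    idx = np.argmax(envelope >= threshold)
    return int(years[idx]) if envelope[idx] >= threshold else None

# "When is there a 10%/50%/90% chance?" reads off the upper envelope (worst case):
print([first_year(upper, t) for t in (0.1, 0.5, 0.9)])

# "By when will it happen for certain (say, 90% in every distribution)?"
# reads off the lower envelope (best case):
print(first_year(lower, 0.9))
```

In this toy set the lower envelope never reaches 90% at all (one member distribution caps out below it), which is exactly the ‘for certain’ question coming back with no finite answer.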
This does seem to be a useful insight, though I don’t think it’s anywhere near so precise as that.
Personally, the Knightian uncertainty completely dominates my timeline estimates. If someone asks for the year in which the cumulative probability reaches some threshold, then firstly that sounds like a confusion of terms, and secondly I have, or can generate (as described), a whole bunch of probability distributions without anything usable as weightings attached to them. Any answer I give is going to be pointless and subject to the whims of whatever arbitrary weightings I assign in the moment, which is likely to be influenced by the precise wording of the question, and probably by what I ate for breakfast.
It’s not going to be the worst case—that’s something like “I am already a simulation within a superintelligent AGI and any fact of the matter about when it happened is completely meaningless due to not occurring in my subjective universe at all”. It’s not going to be the best case either—that’s something like “AGI is not something that humans can create, for reasons we don’t yet know”. Note that both of these rest on Knightian uncertainties: hypotheses that cannot be assigned any useful probability, since there is neither precedent nor any current evidence for or against them.
It’s going to be something in the interior, but where exactly in the interior will be arbitrary, and asking the question a different way will likely shift where.