Correlations don’t uniformly raise or lower the joint probability of several events; whether a correlation helps or hurts depends on which events correlate and how. Suppose there are two events:
1. We build AGI
2. We align AGI
and both are monotone functions of another variable, our competence. Then, if we’re not competent enough to align AGI, maybe we’re also not competent enough to build AGI in the first place, so there is no problem. Here the events correlate in a helpful way. This example illustrates what I think Paul means by “weakness in one area compensates for strength in the other”.
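To make the helpful case concrete, here’s a minimal simulation sketch. The model and all the numbers in it (competence uniform on [0, 1], build iff competence > 0.5, align iff competence > 0.8) are made up purely for illustration; the point is just that a shared competence variable lowers the probability of “built but not aligned” relative to treating the events as independent.

```python
import random

# Toy model (thresholds are invented, purely illustrative):
# competence C ~ Uniform(0, 1); we build AGI iff C > 0.5,
# and we align it iff C > 0.8. "Doom" = built but not aligned,
# i.e. 0.5 < C <= 0.8.
N = 1_000_000
doom = sum(0.5 < random.random() <= 0.8 for _ in range(N)) / N

# Same marginal probabilities, but treating the events as independent:
p_build, p_align = 0.5, 0.2
doom_if_independent = p_build * (1 - p_align)

print(f"correlated:  {doom:.3f}")                 # ~0.30
print(f"independent: {doom_if_independent:.2f}")  # 0.40
```

With the shared competence variable, doom comes out around 0.30 instead of 0.40: the correlation reduces the probability of the bad conjunction.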
Your model could also be that there are two events:
1. We align AGI
2. We coordinate not to build AGI if we can’t solve 1.
Here, if we’re not competent enough to solve 1, that’s some evidence that we also won’t be competent enough to solve 2. So the events correlate in an unhelpful way.
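The same kind of toy calculation shows the unhelpful case. Again the thresholds are my own invented numbers: if alignment and coordination are both upward-monotone in competence, failing at one concentrates probability on failing at the other, and the joint failure probability comes out higher than independence would suggest.

```python
import random

# Toy model, same made-up competence variable C ~ Uniform(0, 1):
# we align AGI iff C > 0.8, and we coordinate not to build iff C > 0.6.
# "Doom" = we fail at both, which here collapses to C <= 0.6.
N = 1_000_000
doom = 0
for _ in range(N):
    c = random.random()
    if c <= 0.8 and c <= 0.6:  # fail to align AND fail to coordinate
        doom += 1
doom /= N

# Same marginals, but independent failures:
p_fail_align, p_fail_coord = 0.8, 0.6
doom_if_independent = p_fail_align * p_fail_coord

print(f"correlated:  {doom:.3f}")                 # ~0.60
print(f"independent: {doom_if_independent:.2f}")  # 0.48
```

Here the correlated doom probability is about 0.60 versus 0.48 under independence: the correlation raises the probability of the bad conjunction.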
I think the need for a nuanced view of the correlations between the events in your model of the future is what Paul means when he says “a little bit here and a little bit here and a little bit here”.