I don’t think I understand your question about Y-problems, since it seems to depend entirely on how specific something can be and still count as a “problem”. Obviously there is already experimental evidence that informs predictions about existential risk from AI in general, but we will get no experimental evidence of any exact situation that occurs beforehand. My claim was more of a vague impression about how OpenAI leadership and John tend to respond to different kinds of evidence in general, and I do not hold it strongly.
To rephrase, it seems to me that in some sense all evidence is experimental. What changes is the degree of generalisation/abstraction required to apply it to a particular problem.
Once we draw a distinction between experimental and non-experimental evidence, we allow for problems on which we only get the “non-experimental” kind, i.e. the kind requiring enough generalisation/abstraction that we’d no longer tend to think of it as experimental.
So the question on Y-problems becomes something like:
Given some characterisation of [experimental evidence] (e.g. whatever you meant that OpenAI leadership would tend to put more weight on than John)...
...do you believe there are high-stakes problems for which we’ll get no decision-relevant [experimental evidence] before it’s too late?