This is actually a testable claim: that forecasts end up trailing things that Eliezer said 10 years later.
Not really, unless you accept forecasts in corruptible formats with lots of wiggle room. We can’t get a clear view of how well he forecasts if he skips making proper forecasts in the first place.
I think you’re right that it’s impressive he alerted people to potential AI risks early. But if you count that as an informative forecasting track record, I don’t think that heuristic is remotely workable for evaluating forecasters.
Making a precise prediction when you don’t have the information

I feel like there’s been a lot of misunderstanding about why Eliezer doesn’t want to give timeline predictions, when he has said it repeatedly: he thinks there just aren’t enough bits of evidence to make a precise prediction. There is enough evidence to be pessimistic and to realize we’re running out of time, but I think he would see giving a precise year as a strong epistemic sin. Recognize when you have very little evidence, instead of inventing some to make your forecast more concrete.
To clarify, I’m not saying he should give a specific year he thinks it happens, like a 50% confidence interval 12 months wide. That would be nuts. Per Tetlock, it just isn’t true that you can’t (or shouldn’t) give specific numbers when you are uncertain. You just give a wider distribution. And not giving that unambiguous distribution when you’re very uncertain just obfuscates, and is the real epistemic sin.
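To make “a wider distribution” concrete, here is a minimal sketch in Python with entirely made-up numbers; nothing here is a claim about actual timelines, it just shows that being very uncertain still lets you state scoreable quantiles:

```python
import numpy as np

# Entirely made-up numbers, purely to illustrate the format of a wide forecast:
# a broad log-normal distribution over "years from now" for some event.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=np.log(20), sigma=0.9, size=100_000)

# Being very uncertain doesn't stop you from stating numbers; it just means
# the reported interval is wide (here roughly a 10x span).
p10, p50, p90 = np.percentile(samples, [10, 50, 90])
print(f"10th percentile: {p10:.0f} years")
print(f"median:          {p50:.0f} years")
print(f"90th percentile: {p90:.0f} years")
```

A 10x-wide interval like that is still an unambiguous, scoreable statement, which is all I’m asking for.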
As for the financial pundit example, there’s a massive disanalogy: it’s easy to predict that there will be a crash. Everybody does it, we have past examples to generalize from, and widely accepted models and theories for why crashes might be inevitable. On the other hand, when Eliezer started talking about AI risk and investing himself fully in it, nobody gave a shit or took it seriously. This was not an obvious prediction that everyone was making, and he gave far more details than just saying “AI Risks, man”.
I don’t understand what you mean by the bolded part. What do you mean, everybody does it? No, they don’t. Some people pretend to, though. The analogy is relevant in the sense that Eliezer should show that he is calibrated at predicting AI risks, rather than only arguing that he is. The details you mention don’t work as a proper forecasting track record.
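For what I mean by “calibrated”: the standard check is to collect resolvable predictions with explicit probabilities and compare the stated confidence with how often the events actually happened (and/or score them, e.g. with a Brier score). A toy sketch with invented data, just to show the shape of such a track record:

```python
from collections import defaultdict

# Toy, invented track record: each entry is (stated probability, did it happen).
forecasts = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True),
    (0.2, False), (0.2, False), (0.2, True),
]

# Brier score: mean squared error between stated probability and outcome.
# Lower is better; always saying 50% scores 0.25.
brier = sum((p - float(hit)) ** 2 for p, hit in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: within each probability bucket, did events happen about as
# often as claimed?
buckets = defaultdict(list)
for p, hit in forecasts:
    buckets[p].append(hit)
for p, hits in sorted(buckets.items()):
    print(f"said {p:.0%}: happened {sum(hits) / len(hits):.0%} of the time")
```

That kind of record is what lets outside observers distinguish foresight from arguments that merely sound right in hindsight.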
The subtlety I really want to point out here is that the choice is not necessarily between “make a precise forecast” and “not make any forecast at all”. Notably, the precise forecasts that you can generally write down or put on a website are limited to distributions that you can compute decently well and that have well-defined properties. If you arrive at a distribution that is particularly hard to compute, it can still tell you qualitative things (the kind of predictions Eliezer actually makes) without you being able to honestly extract a precise prediction from it.
In such a situation, making a precise prediction is the same as taking one element of a set of solutions for an equation and labelling it “the” solution.
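To make that concrete with a toy sketch (every number below is invented; this is about the structure of the argument, not actual timelines): under several quite different but individually defensible-looking priors, the median lands in very different places, yet a qualitative conclusion holds across all of them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three very different toy priors over "years until the event".
# All parameters are invented; this is only about the structure of the argument.
priors = {
    "wide log-normal": rng.lognormal(np.log(15), 1.2, 100_000),
    "heavy-tailed":    rng.pareto(1.5, 100_000) * 10,
    "broad gamma":     rng.gamma(2.0, 20.0, 100_000),
}

for name, samples in priors.items():
    median = np.median(samples)
    mass_before_15 = np.mean(samples < 15)
    print(f"{name:16s} median ≈ {median:5.1f} yrs, P(< 15 yrs) ≈ {mass_before_15:.0%}")
```

The medians disagree by a factor of about six, so quoting any one of them as “the” forecast would be arbitrary; yet every one of these priors puts non-trivial probability mass in the nearer term, which is exactly the kind of qualitative conclusion that survives.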
(If you want to read more about Eliezer’s model, I recommend this paper)