Thanks for the post and for expressing your opinion!

That being said, I feel like there is a misunderstanding here. Daniel mentioned this in another comment thread, but I don't think Eliezer claims what you're attributing to him, nor that your analogy with financial pundits works in this context.
My model of Eliezer, based on reading a lot of his posts (old and new) and one conversation, is that he’s dunking on Metaculus and forecasters for a combination of two epistemic sins:
1. **Taking a long time to update on available information.** Basically, you shouldn't take so long to update on the risks from AI, the accelerating pace of progress, and the power of scaling. I don't think Eliezer is perfect on this, but he can definitely claim that he thought about and invested himself in AI risks literally decades before any Metaculus forecaster even considered the topic. This is actually a testable claim: that forecasts end up trailing, by about ten years, things Eliezer had already said.
2. **Making a precise prediction when you don't have the information.** I feel like there's been a lot of misunderstanding about why Eliezer doesn't want to give timeline predictions, even though he has said it repeatedly: he thinks there just aren't enough bits of evidence to make a precise prediction. There is enough evidence to be pessimistic and to realize we're running out of time, but I think he would see giving a precise year as a strong epistemic sin. Recognize when you have very little evidence, instead of inventing some to make your forecast more concrete.[1]
As for the financial pundit example, there's a massive disanalogy: **it's easy to predict that there will be a crash**. Everybody does it, we have past examples to generalize from, and models and theories accepted by a lot of people for why crashes might be inevitable. On the other hand, when Eliezer started talking about AI risks and investing himself fully in them, nobody gave a shit or took it seriously. This was not an obvious prediction that everyone was making, and he gave far more details than just saying "AI Risks, man".
Note that I'm not saying Eliezer has a perfect track record or that you shouldn't criticize him. On the first point, I feel like he completely missed GPT-like models, which don't fit the models of intelligence and agency that Eliezer used in the Sequences and at MIRI; that's a strong failed prediction in my book, a qualitative unknown unknown that was missed. And on the second point, I'm definitely in favor of more productive debate around alignment and Eliezer's position.
I just wanted to point out the ways in which your post seemed to be arguing against a strawman, which I don't think was your intention.
[1] I think this comes largely from the Bayesian ontology: you eliminate hypotheses rather than confirming them, so you often end up with a whole space of remaining candidates with similar probabilities. I explore Eliezer's writing on this topic here.
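As a toy illustration of that footnote (my own made-up numbers, not Eliezer's or anyone's actual model), here is a minimal Bayesian update in which the evidence only eliminates hypotheses; the survivors keep roughly equal probability, so no single candidate gets singled out:

```python
import numpy as np

# Hypothetical timeline "hypotheses": the decade in which some event happens.
hypotheses = ["2030s", "2040s", "2050s", "2060s", "2070s"]
prior = np.full(len(hypotheses), 1 / len(hypotheses))  # uniform prior

# Toy likelihoods for one piece of evidence: it effectively rules out the
# late decades but barely discriminates among the early ones.
likelihood = np.array([1.0, 0.9, 0.8, 0.01, 0.01])

posterior = prior * likelihood
posterior /= posterior.sum()  # normalize

for h, p in zip(hypotheses, posterior):
    print(f"{h}: {p:.2f}")
# The evidence mostly *removes* the last two hypotheses; the first three
# end up with similar probability (~0.37, 0.33, 0.29), so there is no
# honest way to pick a single "the" answer out of this posterior.
```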
> This is actually a testable claim: that forecasts end up trailing, by about ten years, things Eliezer had already said.
Not really, unless you accept corruptible forecast formats with lots of wiggle room. We can't get a clear view of how well he forecasts if he skips proper forecasting.
I think you're right that it's impressive he alerted people to potential AI risks. But if you count that as an informative forecasting track record, I don't think that heuristic is remotely workable for measuring forecasters.
> **Making a precise prediction when you don't have the information.** I feel like there's been a lot of misunderstanding about why Eliezer doesn't want to give timeline predictions, even though he has said it repeatedly: he thinks there just aren't enough bits of evidence to make a precise prediction. There is enough evidence to be pessimistic and to realize we're running out of time, but I think he would see giving a precise year as a strong epistemic sin. Recognize when you have very little evidence, instead of inventing some to make your forecast more concrete.
To clarify, I'm not saying he should give a specific year he thinks it happens, i.e. something like a 50% confidence interval only 12 months wide. That would be nuts. Per Tetlock, it just isn't true that you can't (or shouldn't) give specific numbers when you're uncertain; you just give a wider distribution. And not giving that unambiguous distribution when you're very uncertain just obfuscates, which is the real epistemic sin.
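To make "you just give a wider distribution" concrete, here is a minimal sketch (illustrative numbers of my own, not anyone's actual forecast) comparing a narrow and a wide lognormal over "years from now" and the 50% credible interval each one implies:

```python
import numpy as np

rng = np.random.default_rng(0)

def fifty_percent_interval(samples):
    """Return the central 50% credible interval (25th-75th percentile)."""
    return np.percentile(samples, [25, 75])

# "Years from now" modeled as a lognormal. Same median (~15 years) in both
# cases; only the spread differs.
confident = rng.lognormal(mean=np.log(15), sigma=0.1, size=100_000)  # overconfident
uncertain = rng.lognormal(mean=np.log(15), sigma=0.8, size=100_000)  # genuinely unsure

print("narrow 50% interval:", fifty_percent_interval(confident))  # ~[14, 16] years
print("wide   50% interval:", fifty_percent_interval(uncertain))  # ~[9, 26] years

# Both are "specific numbers"; the second simply admits much more uncertainty.
# Refusing to state either interval hides the forecaster's actual beliefs.
```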
> As for the financial pundit example, there's a massive disanalogy: **it's easy to predict that there will be a crash**. Everybody does it, we have past examples to generalize from, and models and theories accepted by a lot of people for why crashes might be inevitable. On the other hand, when Eliezer started talking about AI risks and investing himself fully in them, nobody gave a shit or took it seriously. This was not an obvious prediction that everyone was making, and he gave far more details than just saying "AI Risks, man".
I don't understand what you mean by the bolded part. What do you mean, everybody does it? No, they don't. Some people pretend to, though. The analogy is relevant in the sense that Eliezer should show that he is calibrated at predicting AI risks, rather than only arguing that he is. The details you mention don't work as a proper forecasting track record.
The subtlety I really want to point out here is that the choice is not necessarily "make a precise forecast" or "make no forecast at all". Notably, the precise forecasts you can generally write down or put on a website are limited to distributions that you can compute decently well and that have well-defined properties. If you arrive at a distribution that is particularly hard to compute, it can still tell you qualitative things (the kind of predictions Eliezer actually makes) without you being able to honestly extract a precise prediction from it; see the sketch after the next paragraph.
In such a situation, making a precise prediction is the same as taking one element of a set of solutions to an equation and labelling it "the" solution.
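Here is a minimal sketch of that point (a made-up toy model, not anything Eliezer has endorsed): even when the resulting distribution is too wide and lumpy for a point estimate to mean much, you can still read qualitative conclusions off it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy model: "years until event" depends on which of two regimes holds,
# and we are genuinely unsure which one it is (the made-up numbers below
# exist only to produce a wide, two-humped distribution).
fast_regime = rng.random(n) < 0.5
years = np.where(
    fast_regime,
    rng.lognormal(np.log(8), 0.4, n),    # fast regime: ~8 years typical
    rng.lognormal(np.log(40), 0.5, n),   # slow regime: ~40 years typical
)

# A single point estimate is close to arbitrary here...
print("mean:", years.mean(), "median:", np.median(years))  # two very different "answers"

# ...but qualitative statements are still well supported:
print("P(within 15 years):", (years < 15).mean())    # close to a coin flip, ~0.48
print("P(more than 60 years):", (years > 60).mean())  # far from negligible, ~0.10
# The distribution supports "we may well be running out of time" without
# licensing any particular year as "the" prediction.
```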
(If you want to read more about Eliezer’s model, I recommend this paper)