if you think timelines are short for reasons unrelated to biological anchors, I don’t think Bio Anchors provides an affirmative argument that you should change your mind.
Eliezer: I wish I could say that it probably beats showing a single estimate, in terms of its impact on the reader. But in fact, writing a huge careful Very Serious Report like that and snowing the reader under with Alternative Calculations is probably going to cause them to give more authority to the whole thing. It’s all very well to note the Ways I Could Be Wrong and to confess one’s Uncertainty, but you did not actually reach the conclusion, “And that’s enough uncertainty and potential error that we should throw out this whole deal and start over,” and that’s the conclusion you needed to reach.
I would be curious to know what the intended consequences of the forecasting piece were.
A lot of Eliezer’s argument seems to me to be pushing at something like ‘there is a threshold for how much evidence you need before you start putting down numbers, and you haven’t reached it’, and I take what I’ve quoted from your piece to be supporting something like ‘there is a threshold for how much evidence you might have, and if you’re above it (and believe this forecast to be an overestimate), then you may be free to ignore the numbers here’, contra the Humbali position. I’m not particularly confident in that reading, though.
Where this leaves me is feeling like you two have different beliefs about who will (or should) update on reading this kind of thing, and to what end, which is probably tangled up in beliefs about how good people are at holding uncertainty in their mind. But I’m not really sure what these beliefs are.
The Bio Anchors report is intended as a tool for making debates about AI timelines more concrete, for those who find some bio-anchor-related bound helpful (e.g., some think we should lower bound P(AGI) at some reasonably high number for any year in which we expect to hit a particular kind of “biological anchor”). Ajeya’s work lengthened my own timelines, because it helped me understand that some bio-anchor-inspired arguments for shorter timelines didn’t have as much going for them as I’d thought; but I think it may have shortened some other folks’.
(The presentation of the report in the Most Important Century series had a different aim. That series is aimed at making the case that we could be in the most important century, to a skeptic.)
I don’t personally believe I have a high-enough-quality estimate using another framework that I’d be justified in ignoring bio-anchors-based reasoning, but I don’t think it’s wild to think someone else might have such an estimate.