The Bio Anchors report is intended as a tool for making debates about AI timelines more concrete, for those who find some bio-anchor-related bound helpful (e.g., some think we should lower-bound P(AGI) at some reasonably high number for any year in which we expect to hit a particular kind of “biological anchor”). Ajeya’s work lengthened my own timelines, because it helped me understand that some bio-anchor-inspired arguments for shorter timelines didn’t have as much going for them as I’d thought; but I think it may have shortened some other folks’ timelines.
(The presentation of the report in the Most Important Century series had a different aim. That series is aimed at making the case that we could be in the most important century, to a skeptic.)
I don’t personally believe I have a high-enough-quality estimate using another framework that I’d be justified in ignoring bio-anchors-based reasoning, but I don’t think it’s wild to think someone else might have such an estimate.