Thank you for raising this explicitly. I suspect lots of people's timelines are based partly on vibes about which positions sound humble or cautious, and this isn't totally unreasonable, so it deserves serious, explicit consideration.
I think it'll be pretty obvious whether my models were wrong or whether the government cracked down. E.g. how much compute is spent on the largest training run in 2030? If it's only the same order of magnitude (OOM) as today, then it must have been a government crackdown. If instead it's several OOMs more, and moreover the training runs are still of the same type of AI system as today (big multimodal LLMs), or something even more powerful, then I'll very happily say I was wrong.
Re humility and caution: Humility and caution should push in both directions, not just one. If your best guess is that AGI is X years away, adding an extra dose of uncertainty should make you fatten both tails of your distribution—maybe it’s 2X years away, but maybe instead it’s X/2 years away.
(The exception is planning-fallacy stuff: there we have good reason to think people are systematically biased toward shorter timelines. So if your AGI timelines are primarily based on planning out a series of steps, adding more uncertainty should systematically push them further out.)
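To make the both-tails point concrete, here is a minimal sketch (purely illustrative, not from the original comment): assume a lognormal distribution over years-until-AGI with a fixed median of X years, and model "adding uncertainty" as increasing the log-scale spread. The specific numbers (X = 10 years, the two sigma values) are arbitrary assumptions chosen for the example.

```python
# Illustrative sketch: widening a lognormal timeline distribution while
# holding the median fixed puts more probability on BOTH tails.
from scipy.stats import lognorm

X = 10.0  # hypothetical median timeline in years (assumption for illustration)

for sigma in (0.5, 1.0):  # smaller vs. larger uncertainty, same median
    dist = lognorm(s=sigma, scale=X)  # scale=X fixes the median at X years
    p_short = dist.cdf(X / 2)  # probability AGI arrives in under X/2 years
    p_long = dist.sf(2 * X)    # probability AGI takes more than 2X years
    print(f"sigma={sigma}: P(T < X/2) = {p_short:.2f}, P(T > 2X) = {p_long:.2f}")
```

With the larger sigma, both the under-X/2 and the over-2X probabilities go up, which is the sense in which extra uncertainty should fatten both tails rather than only pushing timelines out.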
Another thing to mention re humility and caution is that it's very easy for framing effects to bias your judgments of who is being humble and who isn't. For one thing, it's easy to appear more humble than you are simply by claiming to be so. I could have preceded many of my sentences above with "I think we should be more cautious than that...", for example. For another thing, when three people debate, the middle person has an aura of humility and caution simply because they are the middle person. Relatedly, when someone's position disagrees with the common wisdom, that position gets unfairly labelled unhumble or incautious even when it's the common wisdom that is crazy.