A question for all: If you are wrong and in 4/13/40 years most of this fails to come true, will you blame it on your own models being wrong, or shift goalposts towards the success of the AI safety movement / government crackdowns on AI development? If the latter, how will you be able to prove that AGI definitely would have come had the government and industry not slowed down development?
To add more substance to this comment: I felt Ege came out looking the most salient here. In general, making predictions about the future should be backed by heavy uncertainty. He didn’t even disagree very strongly with most of the central premises of the other participants, he just placed his estimates much more humbly and cautiously. He also brought up the mundanity of progress and boring engineering problems, something I see as the main bottleneck in the way of a singularity. I wouldn’t be surprised if the singularity turns out to be a physically impossible phenomenon because of hard limits in parallelisation of compute or queueing theory or supply chains or materials processing or something.
Thank you for raising this explicitly. I think probably lots of people’s timelines are based partially on vibes-to-do-with-what-positions-sound-humble/cautious, and this isn’t totally unreasonable so deserves serious explicit consideration.
I think it’ll be pretty obvious whether my models were wrong or whether the government cracked down. E.g. how much compute is spent on the largest training run in 2030? If it’s only on the same OOM as it is today, then it must have been a government crackdown. If instead it’s several OOMs more, and moreover the training runs are still for the same type of AI system as today (big multimodal LLMs), or something even more powerful, then I’ll very happily say I was wrong.
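To make the OOM criterion concrete, here’s a minimal sketch of the arithmetic; the FLOP figures are rough assumptions of mine for illustration, not numbers from the dialogue.

```python
import math

# Assumed figures (illustrative, not from the dialogue): largest training run
# today on the order of 1e25-1e26 FLOP, compared against two hypothetical 2030
# outcomes.
today_flop = 3e25
scenarios = {
    "same OOM as today (suggests a crackdown/slowdown)": 6e25,
    "several OOMs more (scaling continued)": 1e28,
}

for label, flop_2030 in scenarios.items():
    ooms = math.log10(flop_2030 / today_flop)
    print(f"{label}: +{ooms:.1f} OOMs over today's largest run")
```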
Re humility and caution: Humility and caution should push in both directions, not just one. If your best guess is that AGI is X years away, adding an extra dose of uncertainty should make you fatten both tails of your distribution—maybe it’s 2X years away, but maybe instead it’s X/2 years away.
(Exception is for planning fallacy stuff—there we have good reason to think people are systematically biased toward shorter timelines. So if your AGI timelines are primarily based on planning out a series of steps, adding more uncertainty should systematically push your timelines farther out.)
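To make the “fatten both tails” point concrete, here is a minimal sketch, assuming purely for illustration that years-to-AGI is modelled as a lognormal with median X; the numbers are arbitrary, not anyone’s actual forecast.

```python
import math

def norm_cdf(z):
    # Standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Illustrative assumption: years until AGI ~ lognormal with median X, so
# ln(T) ~ Normal(ln X, sigma^2). Widening sigma (adding uncertainty) fattens
# BOTH tails: P(T > 2X) and P(T < X/2) rise together.
X = 20.0  # best-guess median in years (arbitrary)
for sigma in (0.3, 0.6, 1.0):
    p_twice_as_long = 1 - norm_cdf((math.log(2 * X) - math.log(X)) / sigma)
    p_half_as_long = norm_cdf((math.log(X / 2) - math.log(X)) / sigma)
    print(f"sigma={sigma}: P(T>2X)={p_twice_as_long:.2f}, P(T<X/2)={p_half_as_long:.2f}")
```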
Another thing to mention re humility and caution is that it’s very, very easy for framing effects to bias your judgments of who is being humble and who isn’t. For one thing, it’s easy to appear more humble than you are simply by claiming to be so; I could have preceded many of my sentences above with “I think we should be more cautious than that...” for example. For another, when three people debate, the middle person has an aura of humility and caution simply because they are the middle person. Relatedly, when someone holds a position that disagrees with the common wisdom, that position is unfairly labelled unhumble/incautious even when it’s the common wisdom that is crazy.
When a model specifies how it will update on future evidence, its current predictions being wrong doesn’t by itself make the model wrong. Models learn, and the way they learn is already part of them. An updating model is wrong when other available models are better in some harder-to-pin-down sense, not just better at particular predictions. When relevant future evidence isn’t in a model’s scope at all, that does invalidate the model. But not all models are like that with respect to relevant future evidence, even when such evidence dramatically changes their predictions in retrospect.
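A toy sketch of this distinction (my own illustration, with made-up outcomes): a single missed prediction doesn’t settle which forecaster is better; something like cumulative log score over the whole sequence does, and a model’s update rule is part of what gets scored.

```python
import math

# Toy illustration (made-up outcomes, not real forecasts): two forecasters assign
# a probability each year to "milestone reached this year". One specifies an
# update rule -- the rule is part of the model -- the other never updates.

def log_score(prob, outcome):
    # Log score of the probability assigned to what actually happened
    return math.log(prob if outcome else 1 - prob)

def updating_model(history):
    # Starts at 0.5, then shifts toward the observed frequency
    if not history:
        return 0.5
    return 0.5 * 0.5 + 0.5 * (sum(history) / len(history))

def static_model(history):
    # Confident prediction that never changes, regardless of evidence
    return 0.9

outcomes = [False, False, True, False, True]  # invented sequence of yearly outcomes
for model in (updating_model, static_model):
    history, total = [], 0.0
    for outcome in outcomes:
        total += log_score(model(history), outcome)
        history.append(outcome)
    print(f"{model.__name__}: cumulative log score = {total:.2f}")
```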