I think maybe this was mistitled. It seems to make a solid argument against certainty in AI timelines. It does not argue against the attempt or against taking seriously the distribution across attempts.
It cites the accuracy of some predictions of space flight, then notes that others were never implemented. It could well be that there are multiple viable ways to build a rocket, a steam engine, or an AGI.
Von Braun would weep at our lack of progress on space flight. But we did not progress because there don’t actually seem to be near-term economic incentives. There probably are for AGI.
Timelines are highly uncertain, but dismissing the possibility of short timelines makes as little sense as dismissing long timelines.