Knowing how much time we’ve got is important to using it well. It’s worth this sort of careful analysis.
I found most of this to be wasted effort, because it leans too heavily on an outside view. The human brain gives neither an upper nor a lower bound on the computation needed to achieve transformative AGI. Inside views that include gears-level models of how our first AGIs will actually function seem far more valuable; for that reason, Daniel Kokotajlo's predictions seem much better informed than the others here.
Outside views like "things take longer than they could, often a lot longer" are valuable. But if we look at the history of predicting when other engineering feats would first be accomplished, good predictions would have required both expert engineers and, usually, people who could anticipate when funding and enthusiasm for the project would materialize.
In any case, more careful timeline predictions like this would improve our odds.