For the goal of getting humans to Mars, we can do the calculations and see that we need quite a bit of rocket fuel. You could reasonably be in a situation where all the design work was done, but you still needed to get atoms into the right places, and that took a while. Big infrastructure projects can be easy to design relative to the effort of building them. For a giant dam, most of the effort goes into actually getting the raw materials in place. This means you can know what it takes to build a dam, and be confident it will take at least 5 years given the current rate of concrete production.
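The shape of that lower-bound argument is just a division. The numbers below are purely hypothetical, to illustrate the form of the calculation rather than describe any real dam:

```python
# Hypothetical figures -- chosen only to show the structure of the bound.
concrete_needed_m3 = 30_000_000        # assumed total concrete for a giant dam
production_rate_m3_per_year = 5_000_000  # assumed maximum annual production

# If materials are the bottleneck, the schedule cannot beat this floor,
# no matter how finished the design is.
min_years = concrete_needed_m3 / production_rate_m3_per_year
print(min_years)  # 6.0
```

The point is that a physical constraint like this supports confident long-range forecasts in a way that "we haven't had the idea yet" does not.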
Mathematics is near the other end of the scale. If you know how to prove theorem X, you've proved it. This stops us from being confident that a theorem won't be proved soon. It's more like the radioactive decay of a fairly long-lived atom: the decay is more likely to happen next week than in any other single week.
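The decay analogy can be made precise with a toy model. Suppose the key insight arrives with some fixed small probability each week (a geometric distribution; the 1% rate below is an arbitrary assumption). The process is memoryless, yet the unconditional probability of the insight landing in week n is strictly decreasing in n, so "next week" is always the single most likely week:

```python
# Toy geometric model: each week, an independent chance p of the insight arriving.
p = 0.01  # assumed 1% per week, purely illustrative

# P(insight lands exactly in week n) = (1 - p)^(n - 1) * p
weeks = range(1, 521)  # ten years of weeks
probs = [(1 - p) ** (n - 1) * p for n in weeks]

# The distribution peaks at week 1 and declines geometrically thereafter.
print(probs[0] == max(probs))  # True
```

This is why knowing a problem is hard does not license confidence that it will stay unsolved for any particular stretch of time.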
I think AI is fairly close to the mathematics end of the scale: most of the effort is figuring out what to do.
Ways my claim could be false:
If we knew the algorithm and the compute needed, but couldn't get that compute.
If AI development were an accumulation of many little tricks, and we knew how many tricks were needed.
But at the moment, I think we can rule out confident long-term forecasts on AI. We have no way of knowing that we aren't just one clever idea away from AGI.