Good point! I’d love to see a more thorough investigation into cases like this. This is the best comment so far IMO; strong-upvoted.
My immediate reply would be: Shorty here is just wrong about what the key parameters are; as Longs points out, size seems pretty important, because it means you don’t have to worry about control. Trying to make a fusion reactor much smaller than a star seems to me to be analogous to trying to make a flying machine with engines much weaker than bird muscle, or an AI with neural nets much smaller than human brains. Yeah, maybe it’s possible in principle, but in practice we should expect it to be very difficult. But I’m not sure, I’d want to think about this more.
Update: Actually, I think I analyzed that wrong. Shorty did mention “controlling the plasma” as a key variable; in that case, I agree that Shorty got the key variables correct. Shorty’s methodology is to plot a graph with the key variables and say “We’ll achieve it when our variables reach roughly the same level as they are in nature’s equivalent.” But how do we measure level of control? How can we say that we’ve reached the same level of control over the plasma as the Sun has? This bit seems implausible. So I think a steelman Shorty would either say that it’s unknown whether we’ve reached the key variables yet (because we don’t know how good tokamaks are at controlling plasma) or that control isn’t a key variable (because it can be compensated for by other things, like temperature and pressure). (Though if Shorty went that second route, they’d probably just be wrong? Compare the case of flight, where the problem of controlling the craft really does become a lot easier when you have access to more powerful and lighter engines. I don’t know much about fusion designs, but I suspect that cranking up temperature and pressure doesn’t, in fact, make controlling the reaction easier. Am I wrong?)
Probably what Shorty missed, from today’s vantage point, was the difficulty of dealing with the energetic neutrons being created and the associated radiation, and then the associated maintenance costs etc. and therefore price-competitiveness. I chose nuclear fusion purely because it was the most salient example of a project-that-always-misses-its-deadlines.
(I did my university placement year in nuclear fusion research but still don’t feel like I properly understand it! I’m pretty sure you’re right though about temperature, pressure and control.)
In theory a steelman Shorty could have thought of all of these things but in practice it’s hard to think of everything. I find myself in the weird position of agreeing with you but arguing in the opposite direction.
For a random large project X, which is more likely to be true?

1. Project X took longer than expert estimates because of a failure to account for some factor Y.
2. Project X was delivered approximately on time.
In general I suspect that it is the former (1). In that case the burden of evidence is on Shorty to show why project X is outside of the reference class of typical-large-projects and maybe in some subclass where accurate predictions of timelines are more achievable.
Maybe what is required is to justify TAI as being in the subclass projects-that-are-mainly-determined-by-a-single-limiting-factor, or the subclass projects-whose-key-variables-are-reliably-identifiable-in-advance.
I think this is essentially the argument the OP is making in Analysis Part 1?
***
I notice in the above I’ve probably gone beyond the original argument—the OP was arguing specifically against using the fact that natural systems have such properties to say that they’re required. I’m talking about something more general—systems generally have more complexity than we realize. I think this is importantly different.
It may be the case that Longs’ argument about brains having such properties is based on an intuition from the broader argument. I think that the OP is essentially correct in saying that adding examples from the human brain into the argument does little to make such an argument stronger (Analysis Part 2).
***
(1) Although there is also the question of how much later counts as a failure of prediction. I guess Shorty is arguing for TAI in the next 20 years, while Longs is arguing for 50–100 years?
I still prefer my analysis above: Fusion is not a case of Shorty being wrong, because a steelman Shorty wouldn’t have predicted that we’d get fusion soon. Why? Because we don’t have the key variables. Why? Because controlling the plasma is one of the key variables, and the Sun has near-perfect control, whereas we are trying to substitute for it with various designs which may or may not work.
Shorty is actually arguing for TAI much sooner than 20 years from now; if TAI comes around the HBHL milestone then it could happen any day now, it’s just a matter of spending a billion dollars on compute and then iterating a few times to work out the details, Wright-brothers style. Of course we shouldn’t think Shorty is probably correct here; the truth is probably somewhere in between. (Unless we do more historical analyses and find that the case of flight is truly representative of the reference class AI fits in, in which case ho boy singularity here we come.)
And yeah, the main purpose of the OP was to argue that certain anti-short-timelines arguments are bogus; this issue of whether timelines are actually short or long is secondary and the case of flight is just one case study, of limited evidential import.
I do take your point that maybe Longs’ argument was drawing on intuitions of the sort you are sketching out. In other words, maybe there’s a steelman of the arguments I think are bogus, such that they become non-bogus. I already agree this is true in at least one way (see Part 3). I like your point about large projects: insofar as we think of AI in that reference class, it seems like our timelines should be “Take whatever the experts say and then double it.” But if we had done this for flight we would have been disastrously wrong. I definitely want to think, talk, and hear more about these issues… I’d like to have a model of which sorts of technologies are like fusion and which are like flight, and why.
I like your suggestions:
My own (hinted at in the OP) was going to be something like “When your basic theory of a design problem is developed enough that you have identified the key variables, and there is a natural design that solves the problem in a similar way to the thing you are trying to build, then you can predict roughly when the problem will be solved by saying that it’ll happen around the time that parity-with-the-natural-design is reached in the key variables. What are key variables? I’m not sure how to define them, but one property that seems maybe important is that the design problem becomes easier when you have more of the key variables.”
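Shorty’s “plot the key variables and extrapolate to nature’s level” methodology can be sketched in a few lines of code. To be clear, this is a toy illustration only: the data points, the compute-like key variable, and the parity level are all invented for the example; they are not real compute figures or a real HBHL estimate.

```python
import math

# Toy sketch of the parity-prediction rule: fit an exponential trend
# (linear in log space) to a key variable, then solve for the year the
# trend reaches "nature's equivalent" level. All numbers are made up.

observations = {2012: 1e17, 2016: 1e20, 2020: 1e23}  # year -> key-variable level (hypothetical)
natural_parity_level = 1e25  # nature's equivalent level (hypothetical)

years = sorted(observations)
n = len(years)
ys = [math.log10(observations[y]) for y in years]
x_mean = sum(years) / n
y_mean = sum(ys) / n

# Least-squares fit of log10(level) against year.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(years, ys)) / \
        sum((x - x_mean) ** 2 for x in years)
intercept = y_mean - slope * x_mean

# Year at which the fitted trend line crosses the parity level.
parity_year = (math.log10(natural_parity_level) - intercept) / slope
print(round(parity_year, 1))  # → 2022.7 with these made-up numbers
```

Of course, the whole dispute above is about whether the variables being extrapolated are actually the key ones; the arithmetic is the easy part.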
Another thing worth mentioning is that probably having a healthy competition between different smart people is important. The Wright brothers succeeded, but there were several other groups around the same time also trying to build flying machines, who were less successful (or who took longer to succeed). If instead there had been one big government-funded project, there would have been more room for human error and the usual failures to cause cost overruns and delays. (OTOH having more funding might have made it happen sooner? IDK.) In the case of AI, there are enough different projects full of enough smart people working on the problem that I don’t think this is a major constraint. I’d be curious to hear more about the case of fusion. I’ve heard some people say that actually it could have been achieved by now if only it had more funding, and I think I’ve heard other people say that it could have been achieved by now if it had been handled by a competitive market instead of a handful of bureaucracies (though I may be misremembering that; maybe no one said that).