Those conclusions are relevant and important! I’m particularly struck by the one about expert disagreement, since that was new to me. Another potential historical example to add to the list: When airplanes were invented, experts were still arguing about how birds were able to soar. “They watched buzzards glide from horizon to horizon without moving their wings, and guessed they must be sucking some mysterious essence of upness from the air.” (To be clear, some experts had the right answer, or at least a sketch of it.)
I can think of two intertwined objections to your conclusions as applied to AI risk:
One is “Your selection of historical techs is biased because it draws heavily from the discontinuities project. There are other important technologies that were developed non-discontinuously, and maybe AI will be one of them. For example computers. Or guns. Or cell phones. For these technologies, the lessons you draw do not apply.”
The other is “AI will in fact be more like big ships or buildings than like lasers or nukes. It’ll mostly be scaled-up versions of pre-existing stuff, possibly even stuff that already exists today.”

What do you think of those objections?
“I’m particularly struck by the one about expert disagreement, since that was new to me.”
Yeah, I can’t really tell how much to conclude from the examples I give on this. The problem is that “uncertainty” is both hard to specify in a way that makes for good comparisons and hard to evaluate in retrospect.
I’m glad you brought up flight, because I think it may be a counterexample to my claim that uncertain communities have produced important advances, but confused communities have not. My impression is that everyone was pretty confused about flight in 1903, but I don’t know that much about it. There may also be a connection between level of confusion and ability to make the first version less terrible or improve on it quickly (for example, I think the Manhattan Project scientists were less confused than the scientists working on early lasers).
“Your selection of historical techs is biased because it draws heavily from the discontinuities project. There are other important technologies that were developed non-discontinuously, and maybe AI will be one of them.”
I think this objection is basically right, in that this sample (and arguably the entire reference class) relies heavily on discreteness in a way that may ultimately be irrelevant to TAI. Like, maybe there will be no clear “first version” of an AI that deeply and irrevocably changes the world. Still, it may be worth mentioning that some of the members of this reference class, such as penicillin and the Haber process, turned out not to be discontinuities (according to our narrow definition).
“AI will in fact be more like big ships or buildings than like lasers or nukes. It’ll mostly be scaled-up versions of pre-existing stuff, possibly even stuff that already exists today.”
This doesn’t seem crazy. I think the lesson from historical building sizes is “Whoa, building height really didn’t track the underlying tech at all.” If, for some reason, AI performance tracks the limits of the underlying technology very badly, we might expect the first version of a scary thing to conform badly to these patterns.
I would guess this is not what will happen, though, since most of the scary AI capabilities we worry about are much more valuable than building height. Still, penicillin was a valuable technology that sat around unused for a decade for stupid reasons, so who knows.