1. I wonder if the AI Impacts discontinuities project might serve as a sort of random sample. We selected technological advances on the basis of how trend-busting they were, and while that may still be correlated with how surprising they were to experts at the time, the sample is surely less biased than the Ord/Yudkowsky/Russell set of examples. It would be interesting to see how many of the discontinuities were surprising, and how much, to the people at the time.
2. AI research is open today… kinda. I mean, OpenAI has recently started to close. But I don’t think this is super relevant to the overall discussion, because the issue is whether AI research will be open or closed at the time crazy AGI advances start happening. I think it’s quite likely that it will be closed, and thus (absent leaks and spies) the only people who have advance warning will be project insiders. (Unless it is a slow takeoff, such that people outside can use trend extrapolation to not be surprised by the stuff coming out of the secret projects.)
3. Even cherry-picked anecdotes can be useful evidence, if they are picked from a set that is sufficiently small. E.g. if there are only 100 ‘important’ technological advancements, and nukes and planes are 2 of them, then that means there’s at least a 2% chance that another important technological advancement will catch almost the whole world by complete surprise. I don’t have a principled way of judging importance, but it seems plausible to me that if you asked me to list the 100 most important advancements I’d have nukes and flight in there. Heck, they might even make the top 20.
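To spell out the base-rate arithmetic in point 3 (the 100-item list is my illustrative figure, not a real dataset, and this assumes the next important advance is drawn from a roughly similar reference class):

$$
P(\text{near-total surprise}) \;\geq\; \frac{\#\{\text{nukes, flight}\}}{\#\{\text{important advancements}\}} \;=\; \frac{2}{100} \;=\; 2\%.
$$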
Yes, I think that’d be very interesting. If this post could play a tiny role in prompting something like that, I’d be very happy. And that’s the case whether or not it supports some of Ord and Yudkowsky’s stronger claims/implications (i.e., beyond just that experts are sometimes wrong about these things) - it just seems it’d be good to have some clearer data, either way. ETA: But I take this post by Muelhauser as indirect evidence that it’d be hard to do at least certain versions of this.
Interesting point. I think that, if we expect AGI research to be closed during, or shortly before, really major/crazy AGI advances, then the nuclear engineering analogy would indeed have more direct relevance from that point on. But it might not make the analogy stronger until those advances start happening. So perhaps we wouldn’t necessarily strongly expect major surprises about when AGI development starts having major/crazy advances, but would then expect a closing-up, and major surprises, from that point on. (But this is all just about what that one analogy might suggest, and we obviously have other lines of argument and evidence too.)
That’s a good point; I hadn’t really thought about that explicitly, and if I had I think I would’ve noted it in the post. But that’s about how well the cases provide evidence about the likely inaccuracy of expert forecasts about (or the surprisingness of) the most important technology developments, or something like that. This is what Ord and Yudkowsky (and I) primarily care about in this context, since their focus when they make these claims is AGI. But they do sometimes (at least in my reading) make the claims as if they apply to technology forecasts more generally.
Well said!
Thanks!