Yes, I think that’d be very interesting. If this post could play a tiny role in prompting something like that, I’d be very happy. And that’s the case whether or not it supports some of Ord and Yudkowsky’s stronger claims/implications (i.e., beyond just that experts are sometimes wrong about these things) - it just seems it’d be good to have some clearer data, either way. ETA: But I take this post by Muelhauser as indirect evidence that it’d be hard to do at least certain versions of this.
Interesting point. I think that, if we expect AGI research to become closed during or shortly before really major/crazy AGI advances, then the nuclear engineering analogy would indeed have more direct relevance from that point on. But that might not make the analogy stronger until those advances start happening. So perhaps we wouldn’t necessarily strongly expect major surprises about when AGI development starts having major/crazy advances, but would then expect a closing up and major surprises from that point on. (But this is all just about what that one analogy might suggest, and we obviously have other lines of argument and evidence too.)
That’s a good point; I hadn’t really thought about that explicitly, and if I had, I think I would’ve noted it in the post. But that’s about how well the cases provide evidence about the likely inaccuracy of expert forecasts (or the surprisingness) of the most important technology developments, or something like that. This is what Ord and Yudkowsky (and I) primarily care about in this context, as their focus when they make these claims is AGI. But they do sometimes (at least on my reading) make the claims as if they apply to technology forecasts more generally.
Thanks!