I don’t think the linked PCP thing is a great example. Yes, the first time someone seriously writes an algorithm to do X, it typically represents a big speedup on X. But the prediction of the “progress is continuous” hypothesis is that the first time someone writes an algorithm to do X, it won’t be very economically important (otherwise someone would have done it sooner), and this example conforms to that trend pretty well.
The other issue seems more relevant: mathematical problems do go from “unsolved” to “solved” with comparatively little warning. I think this is largely because they are small enough to be one-person jobs (which would not be plausible if anyone really cared about the outcome), but that may not be the whole story, and at any rate something interesting is going on here.
In the PCP case, the relevantly similar outcome would be theoretical work on interactive proofs turning out to be useful right out of the box. I’m not aware of any historical cases where this has happened, though I could be missing some, and I don’t really understand why it happens as rarely as it does. It would be nice to understand this possibility better.
As for “people can’t tell the difference between Watson and being close to broadly human-level AI,” I think this is unlikely. At the very least, the broader intellectual community will have little trouble distinguishing between Watson and economically disruptive AI, so this is only plausible if we get a discontinuous jump. But even assuming a jump, the AI community is not all that impressed by Watson, and I expect this to be an important channel by which significant developments would affect expectations.
Just for the record, I wasn’t proposing the PCP thing as a counterexample to your model of “economically important progress is continuous.”