… then the main reason to expect a discontinuity would be if there is some other weird discontinuity elsewhere
This discontinuity could lie in the space of AI discoveries. The discovery space is not guaranteed to be explored efficiently: there could be simple, high-impact discoveries that are only made late. I'm not sure how much credence to put in this idea. Empirically, the discovery space does seem to be explored efficiently in most fields with high investment (relativity in physics being a possible exception), but generalizing this observation to AI seems non-trivial.
Edit: I'm using the term 'efficiency' somewhat loosely here. There could be discoveries that are very difficult to think of but considerably simpler than current approaches. I'm referring to the failure to find such discoveries as 'inefficiency', though there is no concrete action that can or should be taken to resolve it.
Rob Bensinger examines this idea in more detail in this discussion.