Consider ‘Zeno’s breakthrough’:
At t=1, a discrete discovery occurs that advances us 1 unit of knowledge towards AI.
At t=3/2 (i.e. 1 + 1/2), a discrete discovery occurs that advances us 1/2 unit of knowledge towards AI.
At t=11/6 (i.e. 1 + 1/2 + 1/3), a discrete discovery occurs that advances us 1/4 unit of knowledge towards AI.
At t=25/12 (i.e. 1 + 1/2 + 1/3 + 1/4), a discrete discovery occurs that advances us 1/8 unit of knowledge towards AI.
Etc.
On the one hand, this looks very much like an accelerating timeline if you are solely looking at breakthroughs. On the other hand, the actual rate of knowledge acquisition over time is decreasing.
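A minimal sketch of this toy model (the harmonic discovery times and halving increments are exactly as described above):

```python
# Toy model: the n-th discovery lands at the n-th harmonic number
# H_n = 1 + 1/2 + ... + 1/n and contributes 0.5**(n-1) units of knowledge.
def discoveries(n_max):
    events = []
    t, knowledge = 0.0, 0.0
    for n in range(1, n_max + 1):
        t += 1.0 / n                 # time of the n-th discovery
        knowledge += 0.5 ** (n - 1)  # cumulative knowledge so far
        events.append((t, knowledge))
    return events

events = discoveries(5)
gaps = [later[0] - earlier[0] for earlier, later in zip(events, events[1:])]
print(gaps)           # gaps between discoveries shrink: 0.5, 1/3, 0.25, 0.2
print(events[-1][1])  # cumulative knowledge after 5 discoveries: 1.9375
```

The gaps between discoveries keep shrinking (the "accelerating" view), while cumulative knowledge can never exceed 2 units, since the increments form a converging geometric series.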
I would argue that this sort of trend is fairly common in research. Discoveries in a particular field do continue over time, and the rate of discoveries increases over time, but the significance of each discovery tends to lessen over time.
Very interesting counterexample! I would suspect it gets increasingly sketchy to characterize 1/8th, 1/16th, etc. ‘units of knowledge towards AI’ as ‘breakthroughs’ in the way I define the term in the post.
I take your point that we might get our wires crossed when a given field looks like it’s accelerating, but when we zoom in to only look at that field’s breakthroughs, we find that they are decelerating. It seems important to watch out for this. Thanks for your comment!
I would suspect it gets increasingly sketchy to characterize 1/8th, 1/16th, etc. ‘units of knowledge towards AI’ as ‘breakthroughs’ in the way I define the term in the post.
Absolutely. It does—eventually. Which is partially my point. The extrapolation looks sound, until suddenly it isn’t.
I take your point that we might get our wires crossed when a given field looks like it’s accelerating, but when we zoom in to only look at that field’s breakthroughs, we find that they are decelerating. It seems important to watch out for this.
I think you may be slightly missing my point.
Once you hit the point that you no longer consider any recent advances breakthroughs, yes, it becomes obvious that you’re decelerating.
But until that point, breakthroughs appear to be accelerating.
And if you’re discretizing into breakthrough / non-breakthrough, you’re ignoring all the warning signs that the trend might not continue.
(To return to my previous example: say we currently consider any one step that’s >=1/16th of a unit of knowledge as a breakthrough, and we’re at t=2.4… we had breakthroughs at t=1, 3/2, 11/6, 25/12, 137/60. The rate of breakthroughs is accelerating! And then we hit t=49/20, and no breakthrough. And it either looks like we plateaued, or someone goes ‘no, 1/32nd of advancement should be considered a breakthrough’ and makes another chart of accelerating breakthroughs.)
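A quick check of this (same toy model as before; the 1/16 cutoff is the one proposed above):

```python
# Label a discovery a 'breakthrough' iff it delivers >= 1/16 unit of knowledge.
def breakthrough_times(n_max, threshold=1.0 / 16):
    t, times = 0.0, []
    for n in range(1, n_max + 1):
        t += 1.0 / n                     # n-th discovery at harmonic time H_n
        if 0.5 ** (n - 1) >= threshold:  # big enough to count?
            times.append(t)
    return times

print(breakthrough_times(7))
# Five qualifying times: 1, 3/2, 11/6, 25/12, 137/60; the sixth discovery,
# at t = 49/20, falls below the bar and never appears in the list.
```

Looking only at this list, the intervals between breakthroughs shrink from 0.5 down to 0.2, so breakthroughs look like they are speeding up right until they stop.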
(Yes, in this example every discovery is half as much knowledge as the last one, which makes it somewhat obvious that things have changed. A ratio of 0.5 was just chosen because it makes the math simpler. However, all the same issues occur with a ratio of e.g. 0.99 instead of 0.5. Just more gradually. Which makes the ‘no, the last advance should be considered a breakthrough too’ argument a whole lot easier to inadvertently accept...)
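To put a rough number on "more gradually" (a sketch under the same assumptions, just with each discovery 0.99x the size of the last):

```python
# With a decay ratio of 0.99, when does the first sub-threshold (< 1/16)
# discovery arrive? The cliff still comes, just much later and more quietly.
ratio, threshold = 0.99, 1.0 / 16
t, n = 0.0, 0
while True:
    n += 1
    t += 1.0 / n  # discoveries still land at harmonic times
    if ratio ** (n - 1) < threshold:
        break
print(n, t)  # the 277th discovery, near t = 6.2, is the first non-breakthrough
```

So instead of one obviously sub-par discovery at t = 2.45, you get hundreds of near-identical ones stretched over a much longer window, which is exactly what makes the quiet deceleration easy to miss.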