I haven’t specifically tried to model the low-hanging fruit hypothesis, but I do believe it, so it probably doesn’t contradict the model strongly. I don’t quite follow your reasoning though—how does the hypothesis make discontinuities more likely? Can you elaborate?
I have a few implicit assumptions that affect my thinking:
A soft takeoff starts from something resembling our world: distributed among many players
There is at least one layer above ideas (capability)
The low-hanging fruit hypothesis
The real work is being done by an additional two assumptions:
The capability layer grows in a way similar to the idea layer, and competes for the same resources
Innovation consists of at least one capability
So under my model, the core mechanism of differentiation is that developing an insurmountable single-capability advantage competes with rapid gains in a different capability (or line of ideas), which includes innovation capacity. Further, different lines of ideas and capabilities will have different development speeds.
Now a lot of this differentiation collapses when we get more specific about what we are comparing, for example if we compare Google, Facebook, and Microsoft on the single capability of deep learning. It is worth considering that software has an unusually cheap transfer of ideas to capability, which is the crux of why AI weighs so heavily as a concern. But this is unique to software for now, and in order to be a strategic threat it has to cash out in non-software capability eventually, so keeping the other capabilities in mind feels important.
OK, so if I’m getting this correctly, the idea is that there are different capabilities, and the low-hanging fruit hypothesis applies separately to each one, and not all capabilities are being pursued successfully at all times, so when a new capability starts being pursued successfully there is a burst of rapid progress as low-hanging fruit is picked. Thus, progress should proceed jumpily, with some capabilities stagnant or nonexistent for a while, then improving rapidly, then levelling off. Is this what you have in mind?
That is correct. And since different players start with different capabilities and are in different local environments under the soft takeoff assumption, I really can’t imagine a scenario where everyone winds up in the same place (or even tries to get there—I strongly expect optimizing for different capabilities depending on the environment, too).
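The picture confirmed here—per-capability bursts that start at different times and then level off—can be sketched numerically. This is a hedged illustration only: the logistic shape, the parameter ranges, and the number of capabilities are all invented for the example, not taken from the discussion.

```python
# Sketch: model each capability as a logistic curve -- stagnant, then a burst
# as low-hanging fruit is picked, then levelling off. Onset times and growth
# rates are arbitrary illustrative draws, not estimates.
import math
import random

def capability_curve(t, onset, rate, ceiling=1.0):
    """Logistic progress: near zero before `onset`, rapid growth around it,
    saturating at `ceiling` afterwards."""
    return ceiling / (1.0 + math.exp(-rate * (t - onset)))

random.seed(0)
# Five hypothetical capabilities with random onsets and speeds.
capabilities = [(random.uniform(10, 90), random.uniform(0.3, 1.5))
                for _ in range(5)]

for t in range(0, 101, 10):
    levels = [capability_curve(t, onset, rate) for onset, rate in capabilities]
    print(t, [round(x, 2) for x in levels])
```

Each column in the output sits near zero for a while, jumps over a short window, then saturates, which is the "jumpy, then levelling off" trajectory described above.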
OK, I think I agree with this picture to some extent. It’s just that if things like taking over the world require lots of different capabilities, maybe jumpy progress in specific capabilities distributed unevenly across factions all sorta averages out, thanks to the law of large numbers, into smooth progress in world-takeover-ability distributed mostly evenly across factions.
Or not. Idk. I think this is an important variable to model and forecast, thanks for bringing it up!
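The averaging intuition above can be checked with a small simulation. Again a hedged sketch with invented parameters: average many independently timed logistic bursts and compare the largest single-step jump of the aggregate, as a fraction of total progress, for one capability versus many.

```python
# Sketch: does summing many jumpy capability curves yield a smooth aggregate?
# All parameter ranges here are arbitrary choices for illustration.
import math
import random

def logistic(t, onset, rate):
    return 1.0 / (1.0 + math.exp(-rate * (t - onset)))

def max_jump_fraction(n_capabilities, seed=0, steps=200):
    """Largest one-step gain of the averaged trajectory, as a fraction of the
    total gain over the whole run. Smaller means smoother progress."""
    rng = random.Random(seed)
    params = [(rng.uniform(20, 180), rng.uniform(0.2, 1.0))
              for _ in range(n_capabilities)]
    traj = [sum(logistic(t, o, r) for o, r in params) / n_capabilities
            for t in range(steps)]
    total = traj[-1] - traj[0]
    biggest = max(b - a for a, b in zip(traj, traj[1:]))
    return biggest / total

print(max_jump_fraction(1))    # one capability: lumpy progress
print(max_jump_fraction(50))   # many capabilities: noticeably smoother
```

With many capabilities the bursts fall at different times and partially cancel, so the aggregate's biggest jump shrinks—consistent with the "averages out" scenario, though of course this ignores correlations between capabilities, which is exactly where the scenario could fail.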