I agree that we are already in this regime. In the section "AI Helping Humans with AI" I tried to make precise the threshold at which we would see a substantial change in how humans interact with AI to build more advanced AI systems. Essentially, it will be when most people use those tools most of the time (e.g. on a daily basis) and observe substantial productivity gains (such as using an oracle to make rapid progress on a problem they are stuck on, or Copilot auto-completing many of their lines of code without manual editing). The key intuition for the threshold is "most people would need to use."
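Purely as an illustration, here is a toy formalization of that threshold. The cutoff values (`MOST_PEOPLE`, `SUBSTANTIAL`) and the function name are hypothetical choices of mine, not numbers from the post:

```python
# Toy formalization of the AIHHAI threshold sketched above.
# Both cutoffs are illustrative assumptions, not claims from the post.

def aihhai_threshold_reached(daily_use_fraction: float, median_speedup: float) -> bool:
    """True if most practitioners use AI tools daily AND see substantial gains."""
    MOST_PEOPLE = 0.5    # "most people" read as a majority of practitioners
    SUBSTANTIAL = 1.3    # e.g. a >=30% productivity gain counts as "substantial"
    return daily_use_fraction > MOST_PEOPLE and median_speedup >= SUBSTANTIAL

# Example: 60% of engineers use Copilot-like tools daily, with a ~1.4x speedup.
print(aihhai_threshold_reached(0.60, 1.4))  # True
```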
Re diminishing returns: see my other comment. In summary, if you consider just one team building AIHHAI, they would get more data and research as input from the outside world, and they would get productivity increases from using more capable AIHHAIs. Diminishing returns could happen if:

1. scaling laws for coding AI stop holding;
2. we are not able to gather coding data (or apply other tricks like data augmentation) at a high enough pace;
3. investment for some reason does not follow;
4. there are hardware bottlenecks in building larger and larger infrastructure.

So far I have only seen evidence for 2), and that seems like something that could be solved via transfer learning or new ML research (a toy illustration of this bottleneck is sketched below).
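To make bottleneck 2) concrete, here is a minimal sketch assuming a Chinchilla-style power law in data, L(D) = E + (D0/D)^alpha. All constants and the growth rates are made up for illustration; the point is only that if usable coding data stops growing, loss improvements flatten even if everything else keeps scaling:

```python
# Toy data-scaling curve: loss improvements flatten when data collection stalls.
# Exponent and constants are illustrative, not fitted to any real model.

def loss(data_tokens: float, alpha: float = 0.095, irreducible: float = 1.7) -> float:
    """Power law in data: L(D) = E + (D0 / D) ** alpha."""
    D0 = 1e12  # reference dataset size (illustrative)
    return irreducible + (D0 / data_tokens) ** alpha

for year in range(5):
    fast = loss(1e12 * 10 ** year)                   # data keeps scaling 10x/year
    stalled = loss(1e12 * min(10 ** year, 100))      # data collection stalls at 100x
    print(f"year {year}: fast-data loss {fast:.3f} vs stalled-data loss {stalled:.3f}")
```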
Better modeling of these different interactions between AI labor and AI capability tech is definitely needed. For a high-level picture that mostly thinks about substitutability between capital and labor, applied to AI, I would recommend this paper (or the video and slides). The equation closest to self-improving {H, AI} would be this one.
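For readers who don't follow the link: the standard way such models capture capital-labor substitutability is a CES production function, Y = (a·K^ρ + (1−a)·L^ρ)^(1/ρ). A minimal sketch of the self-improving {H, AI} feedback, treating AI labor as the reinvested factor; the parameter values and the reinvestment rate are my illustrative assumptions, not taken from the linked paper:

```python
# CES production sketch of {H, AI} output with a self-improvement loop.
# rho -> 1: AI and human labor are perfect substitutes;
# rho -> -inf: perfect complements. All values are illustrative.

def ces_output(human_labor: float, ai_labor: float,
               a: float = 0.5, rho: float = 0.6) -> float:
    """Y = (a * AI^rho + (1 - a) * H^rho) ** (1 / rho)."""
    return (a * ai_labor ** rho + (1 - a) * human_labor ** rho) ** (1 / rho)

# Self-improvement loop: a fraction of each period's output is reinvested
# into building more capable AI labor, which raises the next period's output.
ai = 1.0
for t in range(5):
    y = ces_output(human_labor=1.0, ai_labor=ai)
    ai += 0.2 * y  # hypothetical reinvestment rate
    print(f"t={t}: output {y:.2f}, AI labor next period {ai:.2f}")
```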