I agree that there isn’t a sharp line between helper AIs and autonomous AIs. I think it’s also important that autonomous AIs won’t necessarily outcompete helper AIs.
If we use DWIM (“do what I mean”) as our alignment target, you could have a “helper AI” that’s autonomous enough to “create a plan to solve cancer”. The human just told it to do that, then checks the plan and, if it seems safe, asks the AI to actually carry it out.
If you only need a human in the loop at key decision points in big plans, fully autonomous AGI has no real competitive advantage.