I generally agree that this sort of AI (non-agentic oracular tools) should be as far as we go along that road: such tools pretty much destroy all the arguments for rushing to AGI, since they provide most of the same benefits at a fraction of the risk. However, the crucial point remains that once you have these, the step to agentic AGI seems tiny; possibly as easy as rigging up an AutoGPT-like system that uses these Scientist AIs as one of its core components.
The comparison with human cloning seems apt: it is a technology we understand well enough that we surely could do it, but which we have so far mostly succeeded in avoiding out of ethical concerns. That said, human cloning is both more instinctively repugnant and less economically useful than building AGI, so the incentives at play are very different. It would probably be much safer not to even have the temptation.
The main advantage of Tool AIs is that they can be used to solve alignment for more agentic approaches. You don’t need to prevent people from building agentic AI for all time, just in the interim period while we have Tool AI but don’t yet have alignment.
Well, that’s assuming there is something akin to a solution for alignment. I think it’s feasible on the technical side, but I highly doubt it on the social/political one. I think most or all aligned AIs would just be aligned with someone in particular.