I see a few criticisms about how this doesn’t really solve the problem, it only delays it because we expect a unified agent to outperform the combined services.
It seems to me, on the basis of that criticism, that this is worth driving as a commercial template anyway. Every R&D dollar that goes into a bounded service is a dollar that doesn’t go specifically toward an unbounded agent; every PhD doing development on an individual service is not doing development on a unified agent.
We’re currently still in the regime where first-mover advantage is overwhelming; if CAIS were in place, a unified agent would win all the marbles eventually rather than immediately, so the incentive to race for one is reduced. I expect this approach to extend the runway we have for nailing down the safety questions before a unified agent takes off.
I suppose the delaying action could backfire by reducing funding for safety, and also by reducing the problem of building a unified AGI to merely bootstrapping from a superintelligent CAIS coordinator. Is there any difference between the superintelligent CAIS coordinator and the AGI in terms of alignment?
I see a few criticisms about how this doesn’t really solve the problem, it only delays it because we expect a unified agent to outperform the combined services.
Not sure if you’re talking about me, but I suspect that my criticism could be read that way. Just want to clarify that I do think “we expect a unified agent to outperform the combined services” but I don’t think this means we shouldn’t pursue CAIS. That strategic question seems hard and I don’t have a strong opinion on it.
You were one of them, but not the only one. I thought it was worth pointing the strategic question out specifically, because we have only recently had enough plausible alternatives for there to even be such a question.
Granted, the lack of options makes me feel a bit like the anime-guy-looks-at-butterfly meme when it comes to alternatives. I agree the strategic question is hard.