This is a prediction I make, with “general-seeming” replaced by “more general”, and I think of this as a prediction inspired much more by CAIS than by EY/Bostrom.
I notice I’m confused. My model of CAIS predicts that there would be poor returns to building general services compared to specialised ones (though this might be more of a claim about economics than a claim about the nature of intelligence).
My model of CAIS predicts that there would be poor returns to building general services compared to specialised ones
Depends what you mean by “general”. If you mean that there would be poor returns to building an AGI that has a broad understanding of the world that you then ask to always perform surgery, I agree that that’s not going to be as good as creating a system that is specialized for surgeries. If you mean that there would be poor returns to building a machine translation system that uses end-to-end trained neural nets, I can just point to Google Translate using those neural nets instead of more specialized systems that built parse trees before translating. When you say “domain-specific hacks”, I think much more of the latter than the former.
Another way of putting it is that CAIS says that there are poor returns to building task-general AI systems, but does not say that there are poor returns to building general AI building blocks. In fact, I think CAIS says that you really do make very general AI building blocks—the premise of recursive technological improvement is that AI systems can autonomously perform AI R&D, which produces better AI building blocks, which in turn make all of the other services better.
All of that said, Eric and I probably do disagree on how important generality is, though I’m not sure exactly what the disagreement is, so to the extent that you’re trying to use Eric’s conception of CAIS you might want to downweight these particular beliefs of mine.