EY seems to have interpreted AlphaGo Zero as strong evidence for his view in the AI-foom debate, though Hanson disagrees.
EY:

> Showing excellent narrow performance *using components that look general* is extremely suggestive [of a future system that can develop lots and lots of different “narrow” expertises, using general components].
Hanson:

> It is only broad sets of skills that are suggestive. Being very good at specific tasks is great, but doesn’t suggest much about what it will take to be good at a wide range of tasks. [...] The components look MORE general than the specific problem on which they are applied, but the question is: HOW general overall, relative to the standard of achieving human level abilities across a wide scope of tasks.
It’s somewhat hard to hash this out as an absolute rather than conditional prediction (e.g. conditional on there being breakthroughs involving some domain-specific hacks, and major labs continuing to work on them, they will somewhat quickly be superseded by breakthroughs with general-seeming architectures).
Maybe EY would be more bullish on Starcraft without imitation learning, or AlphaFold with only 1 or 2 modules (rather than 4/5 or 8/9, depending on how you count).
The following exchange is also relevant:
Raiden:

> Robin, or anyone who agrees with Robin:
> What evidence can you imagine would convince you that AGI would go FOOM?
jprwg:

> While I find Robin’s model more convincing than Eliezer’s, I’m still pretty uncertain.
> That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:
>
> 1. A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.
> 2. Long-run data showing AI systems gradually increasing in capability without any increase in complexity. The AGZ example here might be part of an overall trend in that direction, but as a single data point it really doesn’t say much.
RobinHanson:

> This seems to me a reasonable statement of the kind of evidence that would be most relevant.
> EY seems to have interpreted AlphaGo Zero as strong evidence for his view in the AI-foom debate

I don’t think CAIS takes much of a position on the AI-foom debate. CAIS seems entirely compatible with very fast progress in AI.
I don’t think CAIS would anti-predict AlphaGo Zero, though plausibly it doesn’t predict as strongly as EY’s position does.
> conditional on there being breakthroughs involving some domain-specific hacks, and major labs continuing to work on them, they will somewhat quickly be superseded by breakthroughs with general-seeming architectures

This is a prediction I make, with “general-seeming” replaced by “more general”, and I think of this as a prediction inspired much more by CAIS than by EY/Bostrom.
Isn’t the “foom scenario” referring to an individual AI that quickly gains ASI status by self-improving?
The equivalent of the “foom scenario” for CAIS would be rapidly improving basic AI capabilities due to automated AI R&D services, such that the aggregate “soup of services” is quickly able to do more and more complex tasks with constantly improving performance. If you look at the “soup” as an aggregate, this looks like a thing that is quickly becoming superintelligent by self-improving.
The main difference from the classical AI foom scenario is that the thing that’s improving cannot easily be modeled as pursuing a single goal. Also, there are more safety affordances: there can still be humans in the loop for services that have large real world consequences, you can monitor the interactions between services to make sure they aren’t doing anything unexpected, etc.
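As a purely illustrative sketch of that dynamic (the numbers and functional forms below are made up; nothing in CAIS or this thread specifies them), a few lines of Python show how an automated AI-R&D service that keeps improving the shared building blocks can make the aggregate “soup of services” rapidly more capable, even though no individual component is pursuing an overall goal:

```python
# Toy model: one service ("ai_research") improves the shared building blocks,
# and every service (including ai_research itself) is rebuilt on the improved
# blocks each round. All constants are arbitrary illustrations.

building_block_quality = 1.0
services = {"translation": 1.0, "theorem_proving": 1.0, "ai_research": 1.0}

for year in range(10):
    # Better AI-R&D capability means bigger improvements to the building blocks.
    building_block_quality *= 1.0 + 0.3 * services["ai_research"]

    # Every service is rebuilt on the better blocks, so all of them improve.
    for name in services:
        services[name] = building_block_quality

    print(f"year {year}: building-block quality = {building_block_quality:.2f}")
```

Because the R&D service’s own capability feeds back into how fast the blocks improve, the aggregate improves faster and faster in this toy model, which is the sense in which the “soup” as a whole looks like it is self-improving.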
I notice I’m confused. My model of CAIS predicts that there would be poor returns to building general services compared to specialised ones (though this might be more of a claim about economics than a claim about the nature of intelligence).
> My model of CAIS predicts that there would be poor returns to building general services compared to specialised ones

Depends what you mean by “general”. If you mean that there would be poor returns to building an AGI that has a broad understanding of the world that you then ask to always perform surgery, I agree that that’s not going to be as good as creating a system that is specialized for surgeries. If you mean that there would be poor returns to building a machine translation system that uses end-to-end trained neural nets, I can just point to Google Translate using those neural nets instead of more specialized systems that built parse trees before translating. When you say “domain-specific hacks”, I think much more of the latter than the former.
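To make that contrast concrete, here is a deliberately toy sketch (the functions below are hypothetical stand-ins, not the internals of Google Translate or any real MT system) of the structural difference between a hand-engineered, language-specific pipeline and a single end-to-end learned model:

```python
# Hypothetical stand-ins for illustration only; no real MT system is this simple.

def pipeline_translate(sentence):
    """'Domain-specific hacks' style: hand-designed, language-specific stages.
    A real pipeline would also build a parse tree and apply reordering rules."""
    tokens = sentence.lower().split()                         # stand-in tokenizer
    lexicon = {"the": "le", "cat": "chat", "sleeps": "dort"}  # stand-in bilingual dictionary
    return " ".join(lexicon.get(t, t) for t in tokens)

class ToySeq2Seq:
    """'General building block' style: one trained model maps source to target."""
    def __init__(self, learned):
        self.learned = learned            # stand-in for parameters learned from parallel text

    def generate(self, sentence):
        return self.learned.get(sentence.lower(), "<unk>")

model = ToySeq2Seq({"the cat sleeps": "le chat dort"})

print(pipeline_translate("the cat sleeps"))  # every stage above was hand-built for this language pair
print(model.generate("the cat sleeps"))      # the same architecture could be retrained on any pair
```

The point is only structural: in the first version the knowledge lives in hand-written, language-specific rules, while in the second it lives in whatever a generic trainable architecture learns from data, which is the sense in which the building block is more general than the task.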
Another way of putting it is that CAIS says that there are poor returns to building task-general AI systems, but does not say that there are poor returns to building general AI building blocks. In fact, I think CAIS says that you really do make very general AI building blocks—the premise of recursive technological improvement is that AI systems can autonomously perform AI R&D which makes better AI building blocks which makes all of the other services better.
All of that said, Eric and I probably do disagree on how important generality is, though I’m not sure exactly what the disagreement is, so to the extent that you’re trying to use Eric’s conception of CAIS you might want to downweight these particular beliefs of mine.