I agree with this. It also points to one of the problems with the AI risk idea. If there is an AI going around that people call “human level,” it will in fact already be better than humans in many ways. So why can’t it, or why doesn’t it want to, destroy the world yet? Suppose there are 500 domains left in which it is still inferior to humans.
Eliezer says that “superintelligence” for the purposes of our bet only counts if the thing is better than humans in basically every domain. But this seems to imply that at some point, as those 500 areas are gradually whittled away, the AI will suddenly acquire magical powers: crossing from “better than us at almost everything” to “better than us at everything” would have to be what turns it into a world-ender. If not, then it will eventually surpass humans in all 500 areas, and so count as a superintelligence, while the world goes on as usual.