As a basic prior, our only example of general intelligence so far is ourselves—a species composed of agentlike individuals who pursue open-ended goals. So it makes sense to expect AGIs to be similar—especially if you believe that our progress in artificial intelligence is largely driven by semi-random search with lots of compute (like evolution was) rather than principled intelligent design.
The objective function for the semi-random search matters a lot. Evolution did semi-random search over organisms, selecting for reproductive success, and it seems the organisms best at maximizing reproductive success tend to be agentlike. AI researchers do semi-random search over programs, selecting for programs that make money or impress other researchers. If those programs tend to be services, I think we'll see progress in services.
My model is that there's been a lot of interest in agents recently because AlphaGo impressed other researchers a lot. However, the techniques behind AlphaGo don't seem to have many commercial applications. (Maybe OpenAI's new robotics initiative will change that.)
Humans think in terms of individuals with goals, and so even if there’s an equally good approach to AGI which doesn’t conceive of it as a single goal-directed agent, researchers will be biased against it.
I could imagine this being true in an alternate universe where there's much greater overlap between SF fandom and machine learning research, but it doesn't seem like an accurate description of the current ML research community. (I think the biggest bias shaping the field's direction is the emphasis on constructing benchmarks, e.g. ImageNet, and competing to excel at them. I suspect the benchmark paradigm is a contributor to the AI services trend Eric identifies.)