Today’s specialized AIs have little chance of becoming self-improving, but as specialized AIs adopt more advanced techniques (like the ones Nesov suggested), the line between specialized AIs and AGIs won’t be so clear. After all, chess-playing and car-driving programs can always be implemented as AGIs with very specific and limited super-goals, so I expect that as AGI techniques advance, people working on specialized AIs will also adopt them, but perhaps without giving as much thought to the AI-foom problem.
I would think that specialization reduces the variant trees the AI has to consider, which makes it unlikely that implementing AGI techniques would help the chess-playing program.
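To make the "variant trees" point concrete, here is a minimal sketch in Python, using a hypothetical toy random game tree rather than real chess. It shows alpha-beta pruning, the standard sort of cutoff a specialized chess engine relies on: it returns the same value as exhaustive minimax while visiting far fewer nodes. All names here (make_tree, minimax, alphabeta) are illustrative, not taken from any real engine.

```python
import random

random.seed(0)

def make_tree(depth, branching=4):
    """Hypothetical toy game tree: leaves carry random evaluations."""
    if depth == 0:
        return random.uniform(-1.0, 1.0)
    return [make_tree(depth - 1, branching) for _ in range(branching)]

def minimax(node, maximizing, counter):
    """Exhaustive search: expands every variant in the tree."""
    counter[0] += 1
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing, counter) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha, beta, counter):
    """Same result as minimax, but prunes variants that cannot affect it."""
    counter[0] += 1
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta, counter))
            alpha = max(alpha, value)
            if alpha >= beta:  # the opponent would never allow this line: prune
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta, counter))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

tree = make_tree(depth=6)
full, pruned = [0], [0]
v1 = minimax(tree, True, full)
v2 = alphabeta(tree, True, float("-inf"), float("inf"), pruned)
assert v1 == v2  # identical verdict on the position
print(f"minimax visited {full[0]} nodes; alpha-beta visited {pruned[0]}")
```

On a tree of branching factor 4 and depth 6, the pruned search typically visits a small fraction of the roughly 5,400 nodes the exhaustive search expands, which is the sense in which domain-specific pruning shrinks the tree a program has to consider.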
It is not clear to me that an AGI wouldn’t (eventually) be able to do everything a specialised program can (and more). After all, humans are a general intelligence and can specialise: some of us are great chess players, and if we stretch the word “specialise”, creating a chess AI also counts (it’s a human effort to create a better optimisation process for winning at chess).
So I imagine an AGI, able to rewrite its own code, would at the same time be able to develop the techniques of specialised AIs, while considering broader issues that might also be of use (like taking over the world/lightcone to get more processing power for playing chess). Just like humanity making chess machines, it could discover and implement better techniques (and, if it breaks out of the box, acquire better hardware), something the chess programs themselves cannot do.
Or maybe I’m nuts /layman ignoramus disclaimer/, but in that case I’d appreciate a hint at the error I’m making (besides being a layman ignoramus). :)
EDIT: scary idea, but an AGI with the goal of becoming better at chess might refrain from killing us only because chess is perhaps a problem that’s soluble with finite resources.