Isn’t every AI potentially a self-improving AI? All it takes is for the AI to come upon the insight “hey, I can build an AI to do my job better.” I guess it requires some minimum amount of intelligence for such an insight to become likely, but my point is that one doesn’t necessarily have to set out to build a self-improving AI, to actually build a self-improving AI.
I’m very much out of touch with the AI scene, but I believe the key distinction is between Artificial General Intelligence and specialized approaches like chess-playing programs or systems that drive cars.
A chess program’s goal structure is strictly restricted to playing chess, but any AI with the ability to formulate arbitrary sub-goals could potentially stumble on self-improvement as a sub-goal.
Additionally, the actions that a chess AI can consider and take are limited to moving pieces on a virtual chess board, and the consequences of such actions that it considers are limited to the state of the chess game, with no model of how the outside world affects the opposing moves other than the abstract assumption that the opponent will make the best move available. The chess AI simply does not have any awareness of anything outside the chess game.
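To make that concrete, here is a toy sketch (Python, purely illustrative, not any particular engine’s code) of the kind of search such a program runs: the only actions it ever considers are legal moves, the only consequences it evaluates are game states, and the opponent appears only as a perfect minimizer.

```python
# Toy sketch: the program's whole "world" is the game state. Nothing outside
# the state and the move generator ever enters the search.

def legal_moves(state):
    # state = stones left in a toy Nim-like game; a move removes 1 or 2 stones.
    return [m for m in (1, 2) if m <= state]

def minimax(state, maximizing):
    if state == 0:
        # The player who cannot move has lost (scored from the maximizer's view).
        return -1 if maximizing else +1
    scores = [minimax(state - m, not maximizing) for m in legal_moves(state)]
    return max(scores) if maximizing else min(scores)

def best_move(state):
    # The opponent is modelled only as "whoever picks the move worst for me".
    return max(legal_moves(state), key=lambda m: minimax(state - m, False))

if __name__ == "__main__":
    print(best_move(7))  # -> 1, leaving a pile of 6 that loses for the opponent
```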
with no model of how the outside world affects the opposing moves other than the abstract assumption that the opponent will make the best move available.
A good chess AI would not be so constrained. A history of all chess games played by the particular opponent would be quite useful. As would his psychology.
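For what it’s worth, that takes very little machinery; here is a toy sketch (Python, purely illustrative, with made-up position keys and move strings) of weighting candidate replies by what this particular opponent has played from the same position before:

```python
from collections import Counter

# Toy sketch, not a real engine feature: prefer the reply this opponent has
# played most often from the current position, falling back to ordinary
# best-play search when there is no history for it.

def predict_reply(position_key, history, candidate_moves):
    # history: (position_key, move_played) pairs from the opponent's past games.
    # Note that the history is still nothing but recorded chess moves.
    seen = Counter(move for pos, move in history
                   if pos == position_key and move in candidate_moves)
    if not seen:
        return None  # no data; assume best play instead
    return seen.most_common(1)[0][0]

if __name__ == "__main__":
    past = [("startpos", "e7e5"), ("startpos", "c7c5"), ("startpos", "c7c5")]
    print(predict_reply("startpos", past, ["e7e5", "c7c5", "e7e6"]))  # -> c7c5
```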
Additionally, the actions that a chess AI can consider and take are limited to moving pieces on a virtual chess board
Is it worth me examining the tree beyond this particular move further? How long will it take me (metacognitive awareness...) relative to my time limit?
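Real engines do bookkeeping very much like that. One common pattern, sketched here in toy Python with a dummy search standing in for the real thing, is iterative deepening under a clock: search one ply deeper each pass and stop when the next pass would probably overrun the time budget.

```python
import time

def search(state, depth):
    # Dummy stand-in for a depth-limited game-tree search.
    time.sleep(0.01 * depth)
    return 0, "some-move"

def choose_move(state, time_limit=0.5):
    deadline = time.monotonic() + time_limit
    best, depth = None, 1
    while True:
        started = time.monotonic()
        best = search(state, depth)
        elapsed = time.monotonic() - started
        # Crude metacognition: a one-ply-deeper pass typically costs several
        # times more than the last one; skip it if that would miss the deadline.
        if time.monotonic() + 4 * elapsed > deadline:
            return best
        depth += 1

if __name__ == "__main__":
    print(choose_move(state=None))
```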
The chess AI simply does not have any awareness of anything outside the chess game.
Unless someone gives them such awareness, which may be useful in some situations or may just seem useful to naive developers who get their hands on more AGI research than they can safely handle.
Today’s specialized AIs have little chance of becoming self-improving, but as specialized AIs adopt more advanced techniques (like the ones Nesov suggested), the line between specialized AIs and AGIs won’t be so clear. After all, chess-playing and car-driving programs can always be implemented as AGIs with very specific and limited super-goals, so I expect that as AGI techniques advance, people working on specialized AIs will also adopt them, but perhaps without giving as much thought to the AI-foom problem.
I would think that specialization reduces the variant trees that the AI has to consider, which makes it unlikely that implementing AGI techniques would help the chess-playing program.
It is not clear to me that the AGI wouldn’t (eventually) be able to do everything that a specialised program would (and more). After all, humans are general intelligences and can specialise; some of us are great chess players, and if we stretch the word specialise, creating a chess AI also counts (it’s a human effort to create a better optimisation process for winning chess).
So I imagine an AGI, able to rewrite its own code, would at the same time be able to develop the techniques of specialised AIs, while considering broader issues that might also be of use (like taking over the world/lightcone to get more processing power for playing chess). Just as humanity makes chess machines, it could discover and implement better techniques (and, if it breaks out of the box, better hardware), something the chess programs themselves cannot do.
Or maybe I’m nuts. /layman ignoramus disclaimer/ but in that case I’d appreciate a hint at the error I’m making (besides being a layman ignoramus). :)
EDIT: scary idea, but the only reason an AGI with the goal of becoming better at chess might not kill us is that chess is perhaps a problem that’s generally soluble with finite resources.
A history of all chess games played by the particular opponent would be quite useful.
Such a history would also consist of a list of moves on a virtual chess board.

Unless someone gives them such awareness, which may be useful in some situations or may just seem useful to naive developers who get their hands on more AGI research than they can safely handle.
If you are very naive, it’s unlikely that you understand the problem of AI well enough to solve it.