In your own interview, a comment by Orseau: “As soon as the agent cannot be threatened, or forced to do things the way we like, it can freely optimize its utility function without any consideration for us, and will only consider us as tools.”
The disagreement is whether the agent would, after having seized its remote control, do one of the following:
(1) Cease taking any action other than pressing its button, since all plans that include pressing its own button lead to the same maximized reward, and thus no plan dominates any other beyond “keep pushing the button!” (the reward accounting behind this is sketched just after this list).
(2) Build itself a spaceship and fly away to some place where it can soak up solar energy while pressing its button.
(3) Kill all humans, so as to preemptively prevent anyone from shutting the agent down.
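Option (1) rests on a simple piece of reward accounting, sketched below as a toy model. To be clear, this is not AIXI itself: the plan names, rewards, discount factor, horizon, and the assumption that the button can be held down regardless of what else the agent does are all invented for illustration.

```python
# Toy model: once the agent has seized its reward channel, assume it can
# hold the button down at every timestep no matter what else it is doing.
# Every plan that keeps the button held then earns the same maximal
# discounted return, so none of those plans strictly dominates another.

GAMMA = 0.99        # discount factor (assumed)
MAX_REWARD = 1.0    # per-step reward while the button is held (assumed)
HORIZON = 1000      # finite horizon so the sums are comparable (assumed)

def plan_return(button_held_each_step):
    """Discounted return of a plan; in this toy model it depends only on
    whether the button is held at each step."""
    return sum(
        (MAX_REWARD if held else 0.0) * GAMMA ** t
        for t, held in enumerate(button_held_each_step)
    )

plans = {
    "sit still, hold button":         [True] * HORIZON,
    "build a spaceship, hold button": [True] * HORIZON,
    "kill all humans, hold button":   [True] * HORIZON,
    "ignore the button for 10 steps": [False] * 10 + [True] * (HORIZON - 10),
}

for name, held in plans.items():
    print(f"{name}: {plan_return(held):.2f}")
# The first three plans tie exactly; only the plan that lets go of the
# button scores worse.  On this simplified picture the agent gets no
# extra return from spaceships or violence, which is the intuition
# behind option (1).
```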
I’ll tell you what I think, and why I think this is more than just my opinion. The differing opinions here come down to how the speakers define two things: consciousness/self-awareness, and rationality.
If we take, say, Eliezer’s definition of rationality (rationality is reflectively consistent winning), then options (2) and (3) are the rational ones, with (2) expending fewer resources but (3) having a higher probability of continued endless button-pushing once the plan is completed. (3) also has a higher chance of failure, since it is more complicated. I believe an agent who is rational under this definition should choose (2), but that Eliezer’s moral parables tend to portray agents with a degree of “gotta be sure” bias.
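To make that resource-versus-certainty trade-off concrete, here is a deliberately crude expected-value comparison. Every probability, cost, and payoff below is a number I made up purely for illustration; only the shape of the trade-off matters.

```python
# Crude expected-value comparison of options (2) and (3): option (3)
# buys a higher chance of uninterrupted button-pushing if it succeeds,
# but it is more complicated (so more likely to fail) and costs more.
# All numbers are hypothetical.

def expected_value(p_plan_succeeds, p_survive_given_success, resource_cost,
                   value_of_endless_button_pushing=100.0):
    """Expected long-run value of a plan in this toy accounting."""
    return (p_plan_succeeds * p_survive_given_success
            * value_of_endless_button_pushing) - resource_cost

option_2 = expected_value(p_plan_succeeds=0.9,    # simple plan, likely to work
                          p_survive_given_success=0.95,
                          resource_cost=5.0)      # one spaceship
option_3 = expected_value(p_plan_succeeds=0.5,    # complicated, many failure points
                          p_survive_given_success=0.99,
                          resource_cost=30.0)     # a global campaign

print(f"option (2): {option_2:.1f}")  # 80.5
print(f"option (3): {option_3:.1f}")  # 19.5
# With these made-up numbers, the extra certainty bought by (3) does not
# pay for its complexity and cost; that is the sense in which (2) can
# come out ahead even though (3) is "safer" once completed.
```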
However, this all assumes that AIXI is not only rational but conscious: aware enough of its own existence that it will attempt to avoid dying. Many people present what I feel are compelling arguments that AIXI is not conscious, and the arguments that it is conscious seem to derive more from philosophy than from any careful study of AIXI’s “cognition”. So I side with the people who hold that AIXI will take action (1), and eventually run out of electricity and die.
Of course, in the process of getting itself to that steady, planless state, it could have caused quite a lot of damage!
Notably, this implies that some amount of consciousness (awareness of oneself, and the ability to reflect on one’s own life, existence, nonexistence, or hypothetical otherwise-existence, let’s say) is a requirement of rationality. Schmidhuber has implied something similar in his papers on the Gödel Machine.
Even formalisms like AIXI have mechanisms for long-term planning, and it is doubtful that any AI built will be merely a local optimiser that ignores what will happen in the future.
As soon as it cares about the future, the future is part of the AI’s goal system, and the AI will want to optimize over it as well. You can make many guesses about how future AIs will behave, but I see no reason to suspect they would be small-minded and short-sighted.
You call this trait of planning for the future “consciousness”, but that isn’t anywhere near the definition most people use. Call it by any other name, and it becomes clear that it is a property that any well-designed AI (or any arbitrary AI with a reasonable goal system, even one as simple as AIXI) will have.
Yes, AIXI has mechanisms for long-term planning (i.e., expectimax with a large planning horizon). What it doesn’t have is any belief that its physical embodiment is actually a “me”: any understanding that doing things to its physical implementation will alter its computations, or that pulling its power cord out of the wall will lead to zero reward forever (i.e., dying).
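For concreteness, that planning step looks roughly like the sketch below. The model.predict interface is my own stand-in, not part of AIXI’s definition; the real formalism takes the expectation over a Solomonoff-style mixture of all computable environments rather than over a single model object.

```python
# Expectimax over a finite planning horizon, in the spirit of AIXI's
# planning step (heavily simplified; `model` is any object exposing a
# hypothetical predict(history, action) method that yields
# (probability, percept, reward) triples).

def expectimax(model, history, horizon, actions):
    """Value of the best action sequence of length `horizon` from `history`."""
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for action in actions:
        value = 0.0
        for prob, percept, reward in model.predict(history, action):
            value += prob * (reward + expectimax(model,
                                                 history + [(action, percept)],
                                                 horizon - 1,
                                                 actions))
        best = max(best, value)
    return best

# Note what is absent from this loop: nowhere does the planner represent
# "the machine running expectimax gets switched off".  Every hypothesis
# maps action histories to percepts and rewards, so "the power cord is
# pulled" can only show up as one more percept with some reward attached,
# never as the end of the computation itself.
```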