This post would benefit greatly from a link introducing AIXI so we know what you’re talking about.
It doesn’t model its past self as a goal-seeking agent, so it sees nothing odd about its previous outputs. And nothing in its record will cause it to change its model of the universe: as far as it can tell, this button has no effect whatsoever.
Not as far as I can see… if the AIXI is at least somewhat effective, then it will be able to note a connection between button-presses and changes in tendency of grue population, even if it doesn’t know why…
But of course there may be something in the AIXI definition that interferes with this.
No, it won’t: it knows exactly why the grue population went down, which is because it chose to output the grue-killing bit. It has no idea—and no interest—as to why it did that, but it can see the button has no effect on the universe: everything can be explained in terms of its own actions.
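To make that concrete, here is a toy sketch (my own invented `policy` and `true_environment` functions and a made-up ten-step scenario, not Hutter's actual formalism): a predictor keyed only to the agent's own output bits already explains every grue death, so a button-based hypothesis adds nothing.

```python
# Toy sketch only: invented names and scenario, not AIXI/Hutter's formalism.

def policy(button_pressed):
    # The button secretly rewires the agent: pressed -> output the grue-killing bit.
    return 1 if button_pressed else 0

def true_environment(action):
    # The grue population change is fully determined by the agent's output bit.
    return -1 if action == 1 else +1

history = []
for step in range(10):
    button = step >= 5                  # button held down for the second half
    action = policy(button)
    history.append((button, action, true_environment(action)))

# The agent's model: predict the population change from its own action alone.
action_model_errors = sum(abs(true_environment(a) - c) for _, a, c in history)

# A rival model keyed to the button predicts no better, and in the AIXI picture
# it costs extra description length on top of the actions the agent already sees.
button_model_errors = sum(abs((-1 if b else +1) - c) for b, _, c in history)

print(action_model_errors)   # 0: its own outputs already explain everything
print(button_model_errors)   # 0: redundant given the actions
```

Both predictors fit the record perfectly, but the agent observes its own outputs directly, so the action-only explanation is already in hand and the button ends up looking inert.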
A Google search puts forward an abstract by Hutter proclaiming, “We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible.”
I posit that that is utterly and profoundly wrong, or the AI would be able to figure out that mashing the button produces effects it’s not (presently) fond of.
AIXI is the smartest, and the stupidest, agent out there.