Have you seen papers like this one? Embedded AIXIs converge on Nash equilibrium against each other; that's optimal enough, so you don't need to go up another level. I agree it's not very relevant to our world, but there's no difference in terms of embeddedness; the only difference is resource constraints.
I was not aware of these results—thanks. I’d glanced at the papers on reflective oracles but mentally filed them as just about game theory, when of course they are really very relevant to the sort of thing I am concerned with here.
We have a remaining semantic disagreement. I think you’re using “embeddedness” quite differently than it’s used in the “Embedded World-Models” post. For example, in that post (text version):
In a traditional Bayesian framework, “learning” means Bayesian updating. But as we noted, Bayesian updating requires that the agent start out large enough to consider a bunch of ways the world can be, and learn by ruling some of these out.
Embedded agents need resource-limited, logically uncertain updates, which don’t work like this.
Unfortunately, Bayesian updating is the main way we know how to think about an agent progressing through time as one unified agent. The Dutch book justification for Bayesian reasoning is basically saying this kind of updating is the only way to not have the agent’s actions on Monday work at cross purposes, at least a little, to the agent’s actions on Tuesday.
Embedded agents are non-Bayesian. And non-Bayesian agents tend to get into wars with their future selves.
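As an aside, here is a minimal sketch (my own illustration, not from the post) of the picture of Bayesian updating the first quoted paragraph describes: the agent starts out with a hypothesis space large enough to contain the ways the world could be, and "learns" by ruling hypotheses out and renormalizing. The hypothesis names and the bayes_update helper are made up for illustration.

```python
# Sketch of "traditional" Bayesian updating by ruling out hypotheses.
# Hypothetical example: three ways the world could be, with a uniform prior.
prior = {"world_A": 1 / 3, "world_B": 1 / 3, "world_C": 1 / 3}

def bayes_update(beliefs, consistent_with_observation):
    """Condition on an observation: keep only hypotheses consistent with it,
    then renormalize so the surviving probabilities sum to 1."""
    posterior = {h: p for h, p in beliefs.items() if consistent_with_observation(h)}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Observe something that rules out world_C.
posterior = bayes_update(prior, lambda h: h != "world_C")
print(posterior)  # {'world_A': 0.5, 'world_B': 0.5}
```

The quoted passage's point, as I read it, is that an embedded agent cannot actually start out "large enough" for its hypothesis space to work like this, since it is smaller than the world it is trying to model.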
The 2nd and 4th paragraphs of the quoted passage are clearly false for reflective AIXI. And the 2nd paragraph implies that embedded agents are definitionally resource-limited. There is a true and important sense in which reflective AIXI can be "embedded" (that was the point of coming up with it!), but the Embedded Agency sequence seems to be excluding this kind of case when it talks about embedded agents. This strikes me as something I'd like to see clarified by the authors of the sequence, actually.
I think the difference may be that when we talk about "a theory of rationality for embedded agents," we could mean "a theory that has consequences for agents equally powerful to it," or we could mean something more like "a theory that has consequences for agents of arbitrarily low power." Reflective AIXI (as a theory of rationality) explains why reflective AIXI (as an agent) is optimally designed, but it can't explain why a real-world robot might or might not be optimally designed.