you and programs like you make up a small amount of measure in AIXI’s beliefs
I understand that this is the claim, but my intuition is that, supposing that AIXI has observed a long enough sequence to have as good an idea as I do of how the world is put together, I and programs like me (like “naturalized induction”) are the shortest of the survivors, and hence dominate AIXI’s predictions. Basically, I’m positing that after a certain point, AIXI will notice that it is embodied and doesn’t have a soul, for essentially the same reason that I have noticed those things: they are implications of the simplest explanations consistent with the observations I have made so far.
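(For concreteness, the weighting I'm leaning on is just the standard Solomonoff-style mixture, written in my own notation rather than anything from the OP: each program $p$ gets prior weight $2^{-\ell(p)}$, so

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{starts with}\ x} 2^{-\ell(p)}, \qquad M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t} x_{t+1})}{M(x_{1:t})},$$

where $U$ is a universal monotone machine and $\ell(p)$ is $p$'s length in bits. Once the observations rule out everything shorter, the shortest surviving programs carry essentially all of the predictive weight; a survivor that is $k$ bits shorter than its nearest rival outweighs it by a factor of about $2^{k}$.)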
Why couldn’t it also be a program that has predictive powers similar to yours, but doesn’t care about avoiding death?
Well, I guess it could, but that isn’t the claim being put forth in the OP.
(Unlike some around these parts, I see a clear distinction between an agent’s posterior distribution and the agent’s posterior-utility-maximizing part. From the outside, expected-utility-maximizing agents form equivalence classes in which all agents with the same policy are equivalent, and we need only consider the quotient space of agents; from the inside, the epistemic and value-laden parts of an agent can be thought of separately.)
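To make the outside-view point concrete (again my notation, not the OP's): write $B$ for an agent's posterior and $U$ for its utility function. All the outside world can see is the induced policy

$$\pi_{B,U}(h) \;=\; \operatorname*{arg\,max}_{a} \; \mathbb{E}_{B}\!\left[\,U \mid h, a\,\right],$$

so two agents are identified whenever $\pi_{B,U} = \pi_{B',U'}$, even if their beliefs and utilities differ; from the inside, $B$ and $U$ remain separate objects.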
Oh, I see what you’re saying now.