Omega lets me decide to take only one box after meeting Omega, when I have already updated on the fact that Omega exists, and so I have much better knowledge about which sort of god I’m likely to encounter. Upsilon treats me on the basis of a guess I would subjunctively make without knowledge of Upsilon. It is therefore not surprising that I tend to do much better with Omega than with Upsilon, because the relevant choices are being made with much better knowledge. To put it another way, when Omega offers me a Newcomb’s Problem, I will condition my choice on the known existence of Omega, and all the Upsilon-like gods will tend to cancel out into Pascal’s Wagers. If I run into an Upsilon-like god, then, I am not overly worried about my poor performance; it’s like running into the Christian God: you’re screwed, but so what? You won’t actually run into one. Even the best rational agents cannot perform well against this sort of subjunctive hypothesis unless, at the time they make the relevant choices, they are given much better knowledge than you are offering them. For every rational agent who performs well with respect to Upsilon, there is one who performs poorly with respect to anti-Upsilon.
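A toy calculation may make the cancellation concrete. This is only an illustrative sketch, not anything from the exchange itself: the reward number is a placeholder, and “property P” stands in for whatever feature of your decision algorithm Upsilon happens to reward, with anti-Upsilon rewarding its absence. Under a prior that does not favor either god, the pair contributes the same expected payoff whether or not you have P, so they drop out of the choice.

    # Toy sketch (not from the original thread): symmetric Upsilon-like
    # hypotheses cancel out of the decision, Pascal's-Wager style.
    # "P" stands in for whatever property of your algorithm Upsilon rewards;
    # anti-Upsilon rewards its absence. All numbers are placeholders.

    PRIOR = {"upsilon": 1e-9, "anti_upsilon": 1e-9}  # no evidence favors either
    REWARD = 1_000_000

    def expected_payoff(has_property_p: bool) -> float:
        """Expected payoff contributed by the Upsilon-like gods alone."""
        pay_upsilon = REWARD if has_property_p else 0
        pay_anti = 0 if has_property_p else REWARD
        return PRIOR["upsilon"] * pay_upsilon + PRIOR["anti_upsilon"] * pay_anti

    # Same value either way, so these hypotheses cannot break the tie:
    assert expected_payoff(True) == expected_payoff(False)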
On the other hand, beating Newcomb’s Problem is easy, once you let go of the idea that to be “rational” means performing a strange ritual of cognition in which you must choose only on the basis of physical consequences and not on the basis of correct predictions that other agents reliably make about you, so that (if you choose using this bizarre ritual) you go around regretting how terribly “rational” you are because of the correct predictions that others make about you. I simply choose on the basis of the correct predictions that others make about me, and so I do not regret being rational.
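For concreteness, here is the standard arithmetic behind that choice, using the canonical Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 in the transparent box) and a predictor that is correct with probability p. Nothing here is specific to this exchange; it is just the textbook expected-value comparison.

    # Toy arithmetic for Newcomb's Problem with the canonical payoffs.
    # p = probability that the predictor's prediction of me is correct.

    def one_box_ev(p: float) -> float:
        # Predicted correctly -> the opaque box is full.
        return p * 1_000_000 + (1 - p) * 0

    def two_box_ev(p: float) -> float:
        # Predicted correctly -> the opaque box is empty; I keep only $1,000.
        return p * 1_000 + (1 - p) * 1_001_000

    for p in (0.5, 0.6, 0.9, 0.99):
        print(p, one_box_ev(p) > two_box_ev(p))
    # One-boxing wins in expectation for any p above 0.5005.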
And these questions are highly relevant and realistic, unlike Upsilon; in the future we can expect there to be lots of rational agents that make good predictions about each other.
Omega lets me decide to take only one box after meeting Omega, when I have already updated on the fact that Omega exists, and so I have much better knowledge about which sort of god I’m likely to encounter.
In what sense can you update? Updating is about following a plan, not about deciding on a plan. You already know that it’s possible to observe anything; you don’t learn anything new about the environment by observing any given thing. There could be a deep connection between updating and logical uncertainty that makes it a good plan to update, but it’s not obvious what it is.
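One way to read “updating is about following a plan” (an illustrative gloss, not a formalization from this thread): fix, in advance, a policy mapping each possible observation to an action so as to maximize expected utility under the prior; an observation then only tells you which branch of the already-chosen plan to execute. A minimal sketch, with made-up worlds and utilities:

    # Minimal sketch of "decide on a plan beforehand, then follow it".
    # Worlds, observations and utilities are made up for illustration.
    from itertools import product

    WORLDS = {"w1": 0.5, "w2": 0.5}      # prior over worlds
    OBS = {"w1": "o1", "w2": "o2"}       # what each world shows you
    ACTIONS = ("a", "b")

    def utility(world: str, action: str) -> float:
        table = {("w1", "a"): 10, ("w1", "b"): 0,
                 ("w2", "a"): 0, ("w2", "b"): 5}
        return table[(world, action)]

    def best_policy():
        """Choose, before observing anything, a map from observation to action."""
        observations = sorted(set(OBS.values()))
        candidates = [dict(zip(observations, acts))
                      for acts in product(ACTIONS, repeat=len(observations))]
        return max(candidates,
                   key=lambda pol: sum(p * utility(w, pol[OBS[w]])
                                       for w, p in WORLDS.items()))

    PLAN = best_policy()           # decided once, up front
    # "Updating" at run time is just looking up the branch of the plan:
    print(PLAN["o1"], PLAN["o2"])  # -> a b

In this toy case, where nothing in the world depends on your algorithm, the chosen plan agrees with updating on the observation and then choosing; the two come apart in predictor-style problems like the one above.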
Huh? Isn’t updating just about updating your map? And I didn’t understand the reasoning of the next sentence; could you expand?
Intuitively, the notion of updating a map of fixed reality makes sense, but in the context of decision-making a formalization in full generality has so far proved elusive, and perhaps even unnecessary.
By making a choice, you control the truth value of certain statements—statements about your decision-making algorithm and about mathematical objects depending on your algorithm. Only some of these mathematical objects are part of the “real world”. Observations affect what choices you make (“updating is about following a plan”), but you must have decided beforehand what consequences you want to establish (“[updating is] not about deciding on a plan”). You could have decided beforehand to care only about mathematical structures that are “real”, but what characterizes those structures apart from the fact that you care about them?
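A crude way to picture “controlling the truth value of statements about your decision-making algorithm” (my illustration, not a formalization from the thread): let the world’s payoff depend on your decision function in two places, once as the predictor’s model of you and once as your actual act; choosing what the function outputs settles both occurrences at once. The payoffs below are the canonical Newcomb numbers; everything else is illustrative.

    # Crude sketch: the world consults my decision function twice,
    # once as a "prediction" and once as my actual action.

    def payoff(decision_fn) -> int:
        prediction = decision_fn()   # the predictor runs (a copy of) me
        action = decision_fn()       # and then I act
        opaque = 1_000_000 if prediction == "one-box" else 0
        return opaque if action == "one-box" else opaque + 1_000

    def one_boxer():
        return "one-box"

    def two_boxer():
        return "two-box"

    # Being the kind of algorithm that one-boxes makes *both* occurrences
    # come out "one-box", and so controls the payoff:
    print(payoff(one_boxer), payoff(two_boxer))  # -> 1000000 1000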
Vladimir talks more about his crazy idea in this comment.
Pascal’s Wagers, huh. So your decision theory requires a specific prior?