That seems obviously incorrect to me because as an updateless decision maker you don’t know you are in the branch where you replace odds with evens. Your utility is halfway between a correct updateless analysis and a correct analysis with updates. Or it is the correct utility if Omega also replaces the result in worlds where the parity of Q is different (so either Q is different, or Omega randomly decides whether it’s actually going to visit anyone or just predict what you would decide if the situation were different and apply that to whatever happens), in which case you have done a horrible job of communicating.
I have only a vague idea what exactly required more explanation so I’ll try to explain everything.
My U_replace is the utility if you act on the general policy of replacing the result in counterfactual branches with the result in the branch Omega visits. It’s the average over all imaginable worlds (imaginable worlds where Q is even and those where Q is odd): the probability of each world multiplied by its utility, summed up.
P(“odd”|Odd)*( P(“odd” n Odd)*100 + P(“even” n Odd)*100) + P(“even”|Odd)*( P(“odd” n Odd)*0 + P(“even” n Odd)*0) is the utility for the half of imaginable worlds where Q is odd (all possible worlds if Q is odd).
P(“odd”|Odd) is the probability that the calculator shows odd in whatever other possible world Omega visits, conditional on Q being odd (which is correct to use because here only imaginable worlds where Q is odd are considered, the even worlds come later). If that happens the utility for worlds where the calculator shows even is replaced with 100.
P(“even”|Odd) is the probability that the calculator shows even in the other possible (=odd) world Omega visits. If that happens the utility for possible worlds where the calculator shows odd is replaced with 0.
At this point I’d just say “replace odd with even for the other half”, but last time I said something like that it didn’t seem to work, so here it is spelled out manually:
P(“even”|Even)*( P(“even” n Even)*100 + P(“odd” n Even)*100) + P(“odd”|Even)*( P(“even” n Even)*0 + P(“odd” n Even)*0) is the utility for the half of imaginable worlds where Q is even (all possible worlds if Q is even).
P(“even”|Even) is the probability that the calculator shows even in whatever other possible world Omega visits, conditional on Q being even (which is correct to use because here only imaginable worlds where Q is even are considered, the odd worlds came earlier). If that happens the utility for worlds where the calculator shows odd is replaced with 100.
P(“odd”|Even) is the probability that the calculator shows odd in the other possible (=even) world Omega visits. If that happens the utility for possible worlds where the calculator shows even is replaced with 0.
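(To make the arithmetic concrete, here is a minimal numeric sketch of the two halves above. The uniform prior and the calculator accuracy of 0.99 are placeholder assumptions of mine, not numbers given in this thread.)

```python
# Minimal numeric sketch of U_replace as described above.
# Assumed placeholders: uniform prior P(Odd) = P(Even) = 0.5 and a
# hypothetical calculator accuracy of 0.99, so P("odd"|Odd) = 0.99, etc.
P_ODD = P_EVEN = 0.5
ACC = 0.99  # assumed P(calculator shows the true parity | parity)

def joint(reading_matches_parity, p_parity):
    """P(reading n parity) = P(parity) * P(reading | parity)."""
    return p_parity * (ACC if reading_matches_parity else 1 - ACC)

# Half of imaginable worlds where Q is odd: with probability P("odd"|Odd) the
# calculator in Omega's world shows "odd" and every world gets "odd" written in
# (worth 100 in Odd worlds); with probability P("even"|Odd) every world gets
# "even" written in (worth 0 in Odd worlds).
u_odd_half = ACC * (joint(True, P_ODD) * 100 + joint(False, P_ODD) * 100) \
           + (1 - ACC) * (joint(True, P_ODD) * 0 + joint(False, P_ODD) * 0)

# Same structure for the half of imaginable worlds where Q is even.
u_even_half = ACC * (joint(True, P_EVEN) * 100 + joint(False, P_EVEN) * 100) \
            + (1 - ACC) * (joint(True, P_EVEN) * 0 + joint(False, P_EVEN) * 0)

print(u_odd_half + u_even_half)  # 99.0, i.e. 100 * ACC under these placeholders
```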
If you want to say that updateless analysis is not allowed to take dependencies of this kind into account, I ask you for an updateless analysis of the game with black and white balls a few comments upthread. Either updateless analysis as you understand it can’t deal with that game (and is therefore incomplete), or I can use whatever you use to formalize that game for this problem, and you can’t brush me aside by saying that I’m not working updatelessly.
EDIT: The third interpretation of your utility function would be the utility of the general policy of always replacing odds with evens regardless of what the calculator in the world Omega visited showed, which would be so ridiculously stupid that it didn’t occur to me anyone might possibly be talking about that, even to point out fallacious thinking.
Consider the expected utility [P(“odd” n Odd)*100 + P(“even” n Odd)*100] from your formula. What event and decision is this the expected utility of? It seems to consider two events, [“odd” n Odd] and [“even” n Odd]. For both of them to get 100 utils, the strategy (decision) you’re considering must be: always answer-odd (since you can only answer in response to the indication on the calculators, and here we have both indications and the same answer necessary for success in both events).
But U_replace estimates the expected utility of a different strategy, a strategy where you answer-even on your own “even” branch and also answer-even on the “odd” branch with Omega’s help. So you’re already computing something different.
Then, in the same formula, you have [P(“odd” n Odd)*0 + P(“even” n Odd)*0]. But to get 0 utils in both cases, you have to answer incorrectly in both cases, and since we’re considering Odd, this must be unconditional answer-even. This contradicts the way you did your expected utility calculation in the first terms of the formula (where you were considering the strategy of unconditional answer-odd).
Expected utility is computed for one strategy at a time, and values of expected utility computed separately for each strategy are used to compare the strategies. You seem to be doing something else.
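(A small sketch of this “one expected utility per strategy” bookkeeping, using the same placeholder numbers as above: uniform prior, hypothetical accuracy 0.99. The strategy space here, a map from the reading of the calculator in Omega’s world to the answer written everywhere, is one possible formalization; whether it is the right one is exactly what is being argued in this thread, so treat it only as an illustration of the bookkeeping.)

```python
# One expected utility per strategy, each computed over the full prior.
# Placeholder numbers: uniform prior, hypothetical calculator accuracy 0.99.
P_PARITY = {"odd": 0.5, "even": 0.5}
ACC = 0.99

def eu(strategy):
    """strategy maps the calculator reading in the world Omega visits
    ("odd"/"even") to the answer written on every test sheet."""
    total = 0.0
    for parity, p_parity in P_PARITY.items():
        for reading in ("odd", "even"):
            p_reading = ACC if reading == parity else 1 - ACC
            total += p_parity * p_reading * (100 if strategy(reading) == parity else 0)
    return total

print(eu(lambda reading: "odd"))    # unconditional answer-odd:  50.0
print(eu(lambda reading: "even"))   # unconditional answer-even: 50.0
print(eu(lambda reading: reading))  # write whatever Omega's calculator showed: 99.0
```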
I’m calculating for one strategy, the strategy of “fill in whatever the calculator in the world Omega appeared in showed”, but I have a probability distribution across what that entails (see my other reply). I’m multiplying the utility of picking “odd” with the probability of picking “odd” and the utility of picking “even” with the probability of picking “even”.
So that’s what happens when you don’t describe what strategy you’re computing expected utility of in enough detail in advance. By the problem statement, the calculator in the world in which Omega shows up shows “even”.
But even if you expect Omega to appear on either side, this still isn’t right. Where’s the probability of Omega appearing on either side in your calculation? The event of Omega appearing on one or the other side must enter the model, and it wasn’t explicitly referenced in any of your formulas.
But implicitly.
P(Omega_in_Odd_world)=P(Omega_in_Even_world)=0.5, but
P(Omega_in_Odd_world|Odd)= P(Omega_in_Even_world|Even)=1
And since every summand includes a P(Odd n X) or a P(Even n X), everything is already multiplied with P(Even) or P(Odd) as appropriate. In retrospect it would have been a lot clearer if I had factored that out, but I wrote U_not_replace first in the way that seemed most obvious and merely modified that to U_replace, so it never occurred to me to do that.
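(A quick numeric check of the “already multiplied in” point, with the same placeholder numbers; pulling P(Odd) out in front leaves each term unchanged.)

```python
# P("odd" n Odd) = P(Odd) * P("odd"|Odd), so the prior over parities is
# already a factor of every joint term. Placeholder numbers again.
P_ODD = 0.5
ACC = 0.99                    # assumed P("odd"|Odd)
joint = P_ODD * ACC           # P("odd" n Odd) = 0.495
print(joint * 100)            # P("odd" n Odd)*100            -> 49.5
print(P_ODD * (ACC * 100))    # P(Odd) * (P("odd"|Odd)*100)   -> 49.5 as well
```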
Omega visits either the “odd” world or the “even” world, not the Odd world or the Even world. For example, in an Odd world it’d still need to decide between “odd” and “even”.
That’s what multiplying with P(“odd”|Odd) etc. was about (the probability that, given Omega appearing in an Odd world, it would appear in an “odd” world). I thought I explained that?
Or it is the correct utility if Omega also replaces the result in worlds where the parity of Q is different
Since you don’t know what the parity of Q is, you can’t refer to the class of worlds where it’s “the same” or “different”, in particular because it can’t be different. So again, I don’t know what you are describing here.
(It’s still correct to talk about the sets of possible worlds that rely on Q being either even or odd, because that’s your model of uncertainty, and you are uncertain about whether Q is even or odd. But not about sets of possible worlds that have your parity of Q, just as it doesn’t make sense to talk of the actual state of the world (as opposed to the current observational event, which is defined by past observations).)
I’m merely trying to exclude a possible misunderstanding under which each of us would be correct about the version of the problem he is talking about. Here’s another attempt. The only difference between the world Omega shows up in and the counterfactual worlds Omega affects, regarding the calculator result, is whether or not the calculator malfunctioned; you just don’t know on which side it malfunctioned. Is that correct?
Sounds right, although when you speak of the only difference, it’s easy to miss something.
as an updateless decision maker you don’t know you are in the branch where you replace odds with evens.
I don’t understand what this refers to. (Which branch is that? What do you mean by “replace”? Does your ‘odd’ refer to calculator-shows-odd, or it’s-actually-odd, or ‘let’s-write-“odd”-on-the-test-sheet’, etc.?)
Also, an updateless decision-maker reasons about strategies, which describe responses to all possible observations, and in this sense updateless analysis does take possible observations into account.
(The downside of long replies and asynchronous communication: it’s better to be able to interrupt after a few words and make sure we won’t talk past each other for another hour.)
Here’s another attempt at explaining your error (as it appears to me):
In the terminology of Wei Dai’s original post, an updateless agent considers the consequences of a program S(X) returning Y on input X, where X includes all observations and memories, and the agent is updateless with respect to the things included in X. For an ideal updateless agent this X includes everything, including the memory of having seen the calculator come up even. So it does not make sense for such an agent to consider the unconditional strategy of choosing even, and doing so does not properly model an updating agent choosing even after seeing even; it models an updating agent choosing even without having seen anything.
An obvious simplification of a (computationally extremely expensive) updateless agent would be to simplify X. If X is made up of the parts X1 and X2, and X1 is identical for all instances of S being called, then it makes sense to incorporate X1 into a modified version of S, call it S’ (more precisely, into the part of S or S’ that generates the world programs S or S’ tries to maximize). In that case a normal Bayesian update would be performed (UDT is not a blanket rejection of Bayesianism, see Wei Dai’s original post). S’ would be updateless with respect to X2, but not with respect to X1. If X1 is indeed always part of the argument when S is called, S’ should always give back the same output as S.
Your utility implies an S’ with respect to having observed “even”, but without the corresponding update, so it generates faulty world programs, and a different utility expectation than the original S or a correctly simplified version S″ (which in this case is not updateless because there is nothing else to be updateless towards).
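(A toy rendering of the S / S’ / S″ distinction drawn above: my own sketch, ignoring Omega entirely and only scoring the answer on your own sheet, with the same placeholder prior and accuracy. The point it illustrates is just the bookkeeping: baking the observation into the agent is fine if the prior is updated along with it, and gives different numbers if it is not.)

```python
# Toy rendering of S / S' / S'': the only observation X is your own calculator
# reading, and you score 100 for writing the true parity on your own sheet.
# Placeholder numbers: uniform prior, hypothetical accuracy 0.99.
PRIOR = {"odd": 0.5, "even": 0.5}     # prior over the true parity of Q
ACC = 0.99                            # assumed P(reading matches parity | parity)

def p_reading(reading, parity):
    return ACC if reading == parity else 1 - ACC

def eu_S(policy):
    """Ideal updateless S: a policy maps the observation X (the reading) to an
    answer, and is scored over the whole prior."""
    return sum(p * p_reading(r, parity) * (100 if policy(r) == parity else 0)
               for parity, p in PRIOR.items() for r in ("odd", "even"))

def eu_S_doubleprime(answer, observed="even"):
    """Correctly simplified S'': the observation is baked in AND the prior is updated."""
    posterior = {parity: p * p_reading(observed, parity) for parity, p in PRIOR.items()}
    norm = sum(posterior.values())
    return sum((p / norm) * (100 if answer == parity else 0) for parity, p in posterior.items())

def eu_S_prime(answer):
    """Faulty S': the observation is baked in but the prior is NOT updated."""
    return sum(p * (100 if answer == parity else 0) for parity, p in PRIOR.items())

print(eu_S(lambda r: r))         # 99.0: "write what you saw", scored over the prior
print(eu_S_doubleprime("even"))  # 99.0: same recommendation once the update is done
print(eu_S_prime("even"))        # 50.0: baked-in observation without the update
```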
(This question seems to depend on resolving this first.)
The updateless analogue to the updater strategy “ask Omega to fill in the answer ‘even’ in counterfactual worlds because you have seen the calculator result ‘even’” is “ask Omega to fill in the answer the calculator gives wherever Omega shows up”. As an updateless decision maker you don’t know that the calculator showed “even” in your world, because “your world” doesn’t even make sense to an updateless reasoner. The updateless replacing strategy is a fixed strategy that takes a particular observation as a parameter. An updateless strategy without that parameter would be equivalent to an updater strategy of asking Omega to write in “even” in other worlds before seeing any calculator result.
Updateless strategies describe how you react to observations. You do react to observations in updateless strategies. In our case, we don’t even need that, since all observations are fixed by the problem statement: you observe “even”, case closed. The strategies you consider specify what you write down on your own “even” test sheet, and what you write on the “odd” counterfactual test sheet, all independently of observations.
The “updateless” aspect is in not forgetting about counterfactuals and using prior probabilities everywhere, instead of updated probabilities. So, you use P(Odd n “odd”) to describe the situation where Q is Odd and the counterfactual calculator shows “odd”, instead of using P(Odd n “odd”|”even”), which doesn’t even make sense.
More generally, you can have updateless analysis being wrong on any kind of problem, simply by incorporating an observation into the problem statement and then not updating on it.
Huh? If you don’t update, you don’t need to update, so to speak. By not forgetting about events, you do take into account their relative probability in the context of the sub-events relevant for your problem. Examples please.
here
Holding observations fixed but not updating on them is simply a misapplication of UDT. For an ideal updateless agent no observation is fixed, and everything (every memory and observation) is part of the variable input X. See this comment.
A misapplication, strictly speaking, but not “simply”. Without restricting your attention to particular situations, while ignoring other situations, you won’t be able to consider any thought experiments. For any thought experiment I show you, you’ll say that you have to compute expected utility over all possible thought experiments, and that would be the end of it.
So in applying UDT in real life, it’s necessary to stipulate the problem statement, the boundary event in which all relevant possibilities are contained, and over which we compute expected utility. You, too, introduced such an event; you just did it a step earlier than what’s given in the problem statement, by paying attention to the term “observation” attached to the calculator, and to the fact that all other elements of the problem are observations also.
(On an unrelated note, I have doubts about the correctness of your work with that broader event too; see this comment.)
Yes, of course. But you perform normal Bayesian updates for everything else (everything you hold fixed). Holding something fixed and not updating leads to errors.
Simple example: an urn with either 90% red and 10% blue balls or 90% blue and 10% red balls (0.5 prior for either). You have drawn a red ball and put it back. What’s the updateless expected utility of drawing another ball, assuming you get 1 util for drawing a ball of the same color and −2 utils for drawing a ball of a different color? Calculating as if you get 1 util for red balls and −2 for blue ones, but not updating on the observation of having drawn a red ball, suggests that it’s −0.5, when in fact it’s 0.46.
EDIT: miscalculated the utilities, but the general thrust is the same.
P(RedU)=P(BlueU)=P(red)=P(blue)=0.5
P(red|RedU)=P(RedU|red)=P(blue|BlueU)=P(BlueU|blue)=0.9
P(blue|RedU)=P(RedU|blue)=P(BlueU|red)=P(red|BlueU)=0.1
U_updating = P(RedU|red)*P(red|RedU)*1 + P(BlueU|red)*P(red|BlueU)*1 - P(RedU|red)*P(blue|RedU)*2 - P(BlueU|red)*P(blue|BlueU)*2 = 0.9*0.9 + 0.1*0.1 - 0.9*0.1*2 - 0.1*0.9*2 = 0.46
U_semi_updateless=P(red)*1-P(blue)*2=-0.5
U_updateless = P(red)*( P(RedU|red)*P(red|RedU)*1 + P(BlueU|red)*P(red|BlueU)*1 - P(RedU|red)*P(blue|RedU)*2 - P(BlueU|red)*P(blue|BlueU)*2 ) + P(blue)*( P(BlueU|blue)*P(blue|BlueU)*1 + P(RedU|blue)*P(blue|RedU)*1 - P(BlueU|blue)*P(red|BlueU)*2 - P(RedU|blue)*P(red|RedU)*2 ) = 0.5*(0.9*0.9 + 0.1*0.1 - 0.9*0.1*2 - 0.1*0.9*2) + 0.5*(0.9*0.9 + 0.1*0.1 - 0.9*0.1*2 - 0.1*0.9*2) = 0.46
(though normally you’d probably come up with U_updateless in a differently factored form)
EDIT3: More sensible/readable factorization of U_updateless:
P(RedU)*( P(red|RedU)*(P(red|RedU)*1 - P(blue|RedU)*2) + P(blue|RedU)*(P(blue|RedU)*1 - P(red|RedU)*2) ) + P(BlueU)*( P(blue|BlueU)*(P(blue|BlueU)*1 - P(red|BlueU)*2) + P(red|BlueU)*(P(red|BlueU)*1 - P(blue|BlueU)*2) )
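(For what it’s worth, a short script reproducing the three numbers above; the 90/10 urns, the 0.5 prior, the draw with replacement, and the +1/−2 payoffs are all taken from the example, nothing new is assumed.)

```python
# Reproduces the urn numbers: 90/10 urns, 0.5 prior over urns, first draw was
# red and put back, +1 for a second ball of the same color, -2 otherwise.
P_URN = {"RedU": 0.5, "BlueU": 0.5}
P_BALL = {"RedU": {"red": 0.9, "blue": 0.1}, "BlueU": {"red": 0.1, "blue": 0.9}}

def payoff(first, second):
    return 1 if first == second else -2

# U_updating: update on the observed red ball, then score the second draw.
posterior = {u: P_URN[u] * P_BALL[u]["red"] for u in P_URN}
norm = sum(posterior.values())
U_updating = sum((posterior[u] / norm) * P_BALL[u][c] * payoff("red", c)
                 for u in posterior for c in ("red", "blue"))

# U_semi_updateless: hold "red scores 1, blue scores -2" fixed, but do not
# update on the first draw when predicting the second one.
p_color = {c: sum(P_URN[u] * P_BALL[u][c] for u in P_URN) for c in ("red", "blue")}
U_semi_updateless = p_color["red"] * 1 + p_color["blue"] * (-2)

# U_updateless: average over both possible first draws, updating nothing away.
U_updateless = sum(P_URN[u] * P_BALL[u][c1] * P_BALL[u][c2] * payoff(c1, c2)
                   for u in P_URN for c1 in ("red", "blue") for c2 in ("red", "blue"))

print(round(U_updating, 2), round(U_semi_updateless, 2), round(U_updateless, 2))
# -> 0.46 -0.5 0.46
```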
Holding something fixed and not updating leads to errors.
No, controlling something and updating it away leads to errors. Fixed terms in expected utility don’t influence optimality; you just lose the ability to consider the influence of various strategies on them. Here, the strategies under consideration don’t have any relevant effects outside the problem statement.
(I’ll look into your example another time.)
Just to make sure: You mean something like updating on the box being empty in transparent Newcomb’s here, right? Not relevant as far as I can see.
I admit that I did not anticipate you replying in this way, and even though I think I understand what you are saying, I still don’t understand why. This is the main source of my uncertainty about whether I’m right at this point. It seems increasingly clear that at least one of us doesn’t properly understand UDT. I hope we can clear this up, and if it turns out the misunderstanding was on my part, I commit to upvoting all comments by you that contributed to enlightening me about that.
Unless I completely misunderstand you, that’s a completely different context for (and meaning of) “fixed term”, and while true it’s not at all relevant here. I mean “fixed” in the sense of knowing the utilities of red and blue balls in the example I gave.
No, controlling something and updating it away leads to errors.
That also leads to errors, obviously. And I’m not doing that anyway. Something leading to errors is extremely weak evidence against something else also leading to errors, so how is this relevant?
This is the very error which UDT (at least, this aspect of it) is a correction for.
That still doesn’t make it evidence for something different not being an error. (And formal UDT is not the only way to avoid that error.)
Not updating never leads to errors. Holding fixed what isn’t can.
Correct (if you mean to say that all errors apparently caused by a lack of updating can also be framed as being caused by wrongly holding something fixed), for a sufficiently wide sense of “not fixed”. The fact that you are considering replacing odd results in counterfactual worlds with even results and not the other way round, or the fact that the utility of drawing a red ball is 1 and of a blue ball −2 in my example (did you get around to taking a look at it?), both have to be considered not fixed in that sense.
Basically, in the terminology of this comment, you can consider anything in X1 fixed and avoid the error I’m talking about by updating. Or you can avoid that error by not holding it fixed in the first place. The same holds for anything in X2 for which the decision will never have any consequences anywhere it’s not true (or at least where all its implications fully carry over), though that’s obviously more dangerous (and has the side effect of splitting the agent into different versions in different environments).
The error you’re talking about (the very error which UDT is a correction for) is holding something in X2 fixed and updating when it does have outside consequences. Sometimes the error will only manifest when you actually update, and merely holding fixed gives results equivalent to the correct ones.
The test to see whether it’s allowable to update on x is to check whether the update results in the same answers as an updateless analysis that does not hold x fixed. If an analysis with an update on x and one that holds x fixed but does not update disagree, the problem is not always with the analysis with the update. In fact, in all problems where CDT and UDT agree (most boring problems) the version with the update should be correct and the version that only holds x fixed might not be.