It’s not clear what the “Omega offers the decision in a correct-calculator world” event is, since we already know that Omega offers the decision in “even” worlds, in some of which “even” is correct, and in some of which it’s not (as far as you know), and 99% of “even” worlds are the ones where calculator is correct, while you clearly assign 50% as probability of your event.
When you speak of “worlds” here, do you mean the “world-programs” in the UDT1.1 formalism? If that is what you mean, then one of us is confused about how UDT1.1 formalizes probabilities. I’m not sure how to resolve this except to repeat my request that you give your own formalization of your problem in UDT1.1.
For my part, I am going to say some stuff on which I think that we agree. But, at some point, I will slide into saying stuff on which we disagree. Where is the point at which you start to disagree with the following?
(I follow the notation in my write-up of UDT1.1 (pdf).)
UDT1.1 formalizes two different kinds of probability in two very different ways:
One kind of probability is applied to predicates of world-programs, especially predicates that might be satisfied by some of the world-programs while not being satisfied by the others. The probability (in the present sense) of such a predicate R is formalized as the measure of the set of world-programs satisfying R. (In particular, R is supposed to be a predicate such that whether a world-program satisfies R does not depend on the agent’s decisions.)
The other kind of probability comes from the probability M(f, E) that the agent’s mathematical intuition M assigns to the proposition that the sequence E of execution histories would occur if the agent were to implement input-output map f. This gives us probability measures P_f over sequences of execution histories: Given a predicate T of execution-history sequences, P_f(T) is the sum of the values M(f, E) as E ranges over the execution-history sequences satisfying predicate T.
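To make the second kind of probability concrete, here is a minimal sketch (the dictionary encoding of M, the sequence labels, and the toy numbers are mine, not part of the formalism): P_f(T) is just the sum of M(f, E) over the execution-history sequences E satisfying T.

```python
# Sketch of the induced measure P_f over execution-history sequences.
# M(f, E) is the mathematical intuition's probability that sequence E
# occurs if the agent implements input-output map f; here M is a toy dict.

def P(M, f, T):
    """P_f(T): sum of M(f, E) over all sequences E satisfying predicate T."""
    return sum(p for E, p in M[f].items() if T(E))

# Toy intuition: two candidate execution-history sequences, labeled by
# whether the grader ultimately reveals Q to be even in them.
M = {"f": {"Q_even_sequence": 0.5, "Q_odd_sequence": 0.5}}

even = lambda E: E == "Q_even_sequence"  # a predicate over sequences
print(P(M, "f", even))  # 0.5 -- the intuition cannot tell which parity holds
```

Note that the uncertainty here is logical: Q has one definite parity, but the intuition spreads its probability over the two candidate sequences.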
I took the calculator’s 99% correctness rate to be a probability of the first kind. There is a correct calculator in 99% of the world-programs (the “correct-calculator worlds”) and an incorrect calculator in the remaining 1%.*
However, I took the probability of 1⁄2 that Q is even to be a probability of the second kind. It’s not as though Q is even in some of the execution histories, while that same Q is odd in some others. Either Q is even in all of the execution histories, or Q is odd in all of the execution histories.* But the agent’s mathematical intuition has no idea which is the case, so the induced probability distributions give P_f(even) = 1⁄2 (for all f), where even is the predicate such that, for all execution-history sequences E,
even(E) = “The parity of Q is ultimately revealed by the grader to be even in all of the execution histories in E”
Likewise, I was referring to the second kind of probability when I wrote that, “according to the agent’s mathematical intuition, Omega is just as likely to offer the decision in a correct-calculator world as in the incorrect-calculator world”. The truth or falsity of “Omega offers the decision in a correct-calculator world” is a property of an entire execution-history sequence. This proposition is either true with respect to all the execution histories in the sequence, or false with respect to all of them.
The upshot is that, when you write “99% of ‘even’ worlds are the ones where calculator is correct, while you clearly assign 50% as probability of your event”, you are talking about two very different kinds of probabilities.
* Alternatively, this weighting can be incorporated into how the utility function over execution-history sequences responds to an event occurring in one world-program vs. another. If I had used this approach in my UDT1.1 formalization of your problem, I would have had just two world-programs: a correct-calculator world and an incorrect-calculator world. Then, having the correct parity on the answer sheet in the correct-calculator world would have been worth 99 times as much as having the correct parity in the incorrect-calculator world. But this would not have changed my computations. I don’t think that this issue is the locus of our present disagreement.
* You must be disagreeing with me by this point, because I have contradicted your claim that “Omega offers the decision in ‘even’ worlds, in some of which ‘even’ is correct, and in some of which it’s *not*”. (Emphasis added.)
World-programs are a bad model for possible worlds. For all you know, there could be just one world-program (indeed, you can consider an equivalent variant of the theory where this is so: just have that single world-program enumerate all outputs of all possible programs). The element of UDT analogous to possible worlds is the execution history. And some execution histories can easily indicate that 2+2=5 (if we take execution histories to be enumerations of logical theories, with world-programs as axiomatic definitions of those theories). Observations, other background facts, and your actions are all elements that specify (sets/events of) execution histories. The utility function is defined on execution histories (and a utility function is usually defined on possible worlds). The probability given by the mathematical intuition can be read as the probability that a given execution history (possible world) is the actual one.
It’s not clear what the “Omega offers the decision in a correct-calculator world” event is, since we already know that Omega offers the decision in “even” worlds, in some of which “even” is correct, and in some of which it’s not (as far as you know), and 99% of “even” worlds are the ones where calculator is correct, while you clearly assign 50% as probability of your event.
So, you intended that the equivalence
“Omega offers the decision” <==> “the calculator says ‘even’ ”
be known to the agent’s mathematical intuition? I didn’t realize that, but my solution still applies without change. It just means that, as far as the agent’s mathematical intuition is concerned, we have the following equivalences between predicates over sequences of execution histories:
“Omega offers the decision in a correct-calculator world”
is equivalent to
“The calculator says ‘even’ in the 99 correct-calculator worlds”,
while
“Omega offers the decision in an incorrect-calculator world”
is equivalent to
“The calculator says ‘even’ in the one incorrect-calculator world”.
Below, I give my guess at your UDT1.1 approach to the problem in the OP. If I’m right, then we use the UDT1.1 concepts differently, but the math amounts to just a rearrangement of terms. I see merits in each conceptual approach over the other. I haven’t decided which one I like best.
At any rate, here is my guess at your formalization: We have one world-program. We consider the following one-place predicates over possible execution histories for this program: Given any execution history E,
CalculatorIsCorrect(E) asserts that, in E, the calculator gives the correct parity for Q.
“even”(E) asserts that, in E, the calculator says “even”. Omega then appears to the agent and asks it what Omega should have written on the answer sheet in an execution history in which (1) Omega blocks the agent from writing on the answer sheet and (2) the calculator says “odd”.
“odd”(E) asserts that, in E, the calculator says “odd”. Omega then (1) blocks the agent from writing on the answer sheet and (2) computes what the agent would have said to Omega in an execution history F such that “even”(F). Omega then writes what the agent would say in F on the answer sheet in E.
Borrowing notation from my last comment, we make the following assumptions about the probability measures P_f. For all input-output maps f,
P_f(CalculatorIsCorrect) = 0.99,
P_f(“even”) = P_f(“odd”) = 1⁄2,
“even” and “odd” are uncorrelated with CalculatorIsCorrect under P_f.
The input-output maps to consider are
g: On seeing “even”, write “even” and tell Omega, “Write ‘even’.”
h: On seeing “even”, write “even” and tell Omega, “Write ‘odd’.”
The utility U(E) of an execution history E is 1 if the answer on the sheet in E is the true parity of Q. Otherwise, U(E) = 0.
The expected payoffs of g and h are then, respectively,
EU(g) = P_g(“even” & CalculatorIsCorrect)·1 + P_g(“even” & ~CalculatorIsCorrect)·0 + P_g(“odd” & CalculatorIsCorrect)·0 + P_g(“odd” & ~CalculatorIsCorrect)·1 = (1⁄2)(0.99)(1) + (1⁄2)(0.01)(1) = 0.50.
EU(h) = P_h(“even” & CalculatorIsCorrect)·1 + P_h(“even” & ~CalculatorIsCorrect)·0 + P_h(“odd” & CalculatorIsCorrect)·1 + P_h(“odd” & ~CalculatorIsCorrect)·0 = (1⁄2)(0.99)(1) + (1⁄2)(0.99)(1) = 0.99.
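If it helps to make the arithmetic explicit, here is a small numeric check (a sketch; the function names and the encoding of events are mine, not part of the formalization). It enumerates the four events {“even”, “odd”} × {correct, incorrect} under the stated independence assumption and applies the utility rule for each map:

```python
# Numeric check of EU(g) and EU(h) under the stated assumptions:
# P_f("even") = P_f("odd") = 1/2, P_f(CalculatorIsCorrect) = 0.99,
# with the calculator's reading independent of its correctness.

def payoff(told_omega, reading, correct):
    """U = 1 iff the parity on the answer sheet is Q's true parity."""
    if reading == "even":
        written = "even"                          # the agent writes what it sees
        true_parity = "even" if correct else "odd"
    else:
        written = told_omega                      # Omega writes the agent's counterfactual answer
        true_parity = "odd" if correct else "even"
    return 1 if written == true_parity else 0

def EU(told_omega):
    return sum(0.5 * (0.99 if correct else 0.01)
               * payoff(told_omega, reading, correct)
               for reading in ("even", "odd")
               for correct in (True, False))

print(EU("even"))  # map g: 0.5
print(EU("odd"))   # map h: 0.99
```

The only term separating g from h is the “odd” branch: g forfeits the 1⁄2 · 0.99 weight there, while h forfeits only the 1⁄2 · 0.01 weight.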