It’s not clear what the “Omega offers the decision in a correct-calculator world” event is, since we already know that Omega offers the decision in “even” worlds, in some of which “even” is correct and in some of which it isn’t (as far as you know). 99% of “even” worlds are ones where the calculator is correct, yet you clearly assign a probability of 50% to your event.
So, you intended that the equivalence
“Omega offers the decision” <==> “the calculator says ‘even’ ”
be known to the agent’s mathematical intuition? I didn’t realize that, but my solution still applies without change. It just means that, as far as the agent’s mathematical intuition is concerned, we have the following equivalences between predicates over sequences of execution histories:
“Omega offers the decision in a correct-calculator world”
is equivalent to
“The calculator says ‘even’ in the 99 correct-calculator worlds”,
while
“Omega offers the decision in an incorrect-calculator world”
is equivalent to
“The calculator says ‘even’ in the one incorrect-calculator world”.
Below, I give my guess at your UDT1.1 approach to the problem in the OP. If I’m right, then we use the UDT1.1 concepts differently, but the math amounts to just a rearrangement of terms. I see merits in each conceptual approach over the other. I haven’t decided which one I like best.
At any rate, here is my guess at your formalization: We have one world-program. We consider the following one-place predicates over possible execution histories for this program: Given any execution history E,
CalculatorIsCorrect(E) asserts that, in E, the calculator gives the correct parity for Q.
“even”(E) asserts that, in E, the calculator says “even”. Omega then appears to the agent and asks it what Omega should have written on the answer sheet in an execution history in which (1) Omega blocks the agent from writing on the answer sheet and (2) the calculator says “odd”.
“odd”(E) asserts that, in E, the calculator says “odd”. Omega then (1) blocks the agent from writing on the answer sheet and (2) computes what the agent would have said to Omega in an execution history F such that “even”(F). Omega then writes, on the answer sheet in E, what the agent would say in F.
Borrowing notation from my last comment, we make the following assumptions about the probability measures P_f. For all input-output maps f,
P_f(CalculatorIsCorrect) = 0.99,
P_f(“even”) = P_f(“odd”) = 1⁄2,
“even” and “odd” are uncorrelated with CalculatorIsCorrect under P_f.
The input-output maps to consider are
g: On seeing “even”, write “even” and tell Omega, “Write ‘even’.”
h: On seeing “even”, write “even” and tell Omega, “Write ‘odd’.”
The utility U(E) of an execution history E is 1 if the answer on the sheet in E is the true parity of Q. Otherwise, U(E) = 0.
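For concreteness, here is a minimal Python sketch of how I picture a single execution history under this formalization. All of the names (sheet_answer, utility, g, h) are mine, not part of the problem statement, and I’ve reduced an execution history to just the calculator reading and a correctness flag, per the assumptions above.

    # An execution history is summarized by what the calculator shows and
    # whether the calculator is correct about the parity of Q.

    def other(parity):
        return "odd" if parity == "even" else "even"

    def sheet_answer(policy, reading):
        # In an "even" history the agent writes its own answer; in an "odd"
        # history the agent is blocked, and Omega writes whatever the agent
        # would have told it in the corresponding "even" history.
        written, told = policy("even")
        return written if reading == "even" else told

    def utility(policy, reading, calculator_correct):
        # U(E) = 1 iff the answer on the sheet is the true parity of Q.
        true_parity = reading if calculator_correct else other(reading)
        return 1 if sheet_answer(policy, reading) == true_parity else 0

    # The two input-output maps under consideration:
    def g(observation):  # write "even"; tell Omega to write "even"
        return ("even", "even")

    def h(observation):  # write "even"; tell Omega to write "odd"
        return ("even", "odd")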
The expected payoffs of g and h are then, respectively,
EU(g) = P_g(“even” & CalculatorIsCorrect)·1 + P_g(“even” & ~CalculatorIsCorrect)·0 + P_g(“odd” & CalculatorIsCorrect)·0 + P_g(“odd” & ~CalculatorIsCorrect)·1
= (1⁄2)(0.99)(1) + (1⁄2)(0.01)(1)
= 0.50.

EU(h) = P_h(“even” & CalculatorIsCorrect)·1 + P_h(“even” & ~CalculatorIsCorrect)·0 + P_h(“odd” & CalculatorIsCorrect)·1 + P_h(“odd” & ~CalculatorIsCorrect)·0
= (1⁄2)(0.99)(1) + (1⁄2)(0.99)(1)
= 0.99.
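As a sanity check on this arithmetic, here is a short, self-contained Python computation of the same probability-weighted sums under the independence assumption above (again, the names are mine):

    # Joint probabilities of (calculator reading, calculator correct),
    # assuming the reading is independent of correctness.
    P = {("even", True): 0.5 * 0.99, ("even", False): 0.5 * 0.01,
         ("odd",  True): 0.5 * 0.99, ("odd",  False): 0.5 * 0.01}

    def EU(policy):
        total = 0.0
        for (reading, correct), p in P.items():
            written, told = policy("even")  # the agent only ever acts on seeing "even"
            sheet = written if reading == "even" else told
            true_parity = reading if correct else ("odd" if reading == "even" else "even")
            total += p * (1 if sheet == true_parity else 0)
        return total

    g = lambda obs: ("even", "even")  # tell Omega to write "even"
    h = lambda obs: ("even", "odd")   # tell Omega to write "odd"

    print(EU(g), EU(h))  # approximately 0.50 and 0.99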