Hm, I think the difference in our model programs indicates something that I don’t understand about UDT, like a wrong assumption that justified an optimization. But it seems they both produce the same result for P(S(“you’re wrong”)), which is outcome=”die” for all S.
Do you agree that this problem is, and should remain, unsolvable? (I understand “should remain unsolvable” to mean that any supposed solution must represent some sort of confusion about the problem.)
The input to P is supposed to contain the physical randomness in the problem, so P(S(“you’re wrong”)) doesn’t make sense to me. The idea is that both P(“green”) and P(“red”) get run, and we can think of them as different universes in a multiverse. Actually, in this case I should have written “def P():”, since there is no random correct color.
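(To make that concrete, here’s a minimal Python sketch of the convention; the “live”/“die” payoff rule and the placeholder S are my own illustration, reconstructed from context rather than taken from the original problem statement.)

```python
def S(observation):
    # Placeholder for the agent's strategy; UDT would choose S's
    # behavior to optimize outcomes across all executions of P.
    return "green"

def P(correct_color):
    # The input carries the physical randomness: both P("green") and
    # P("red") get run, and each run is one universe of the multiverse.
    guess = S("you're wrong")
    return "live" if guess == correct_color else "die"
```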
> wrong assumption that justified an optimization
I’m not quite sure what you mean here, but in general I suggest just translating the decision problem directly into a world program without trying to optimize it.
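(For illustration, here’s what I’d take a direct translation to look like for this problem, reusing the placeholder S from the sketch above and assuming, based on the “no random correct color” remark, that the correct color is defined to be whatever S doesn’t guess, which is why every S dies.)

```python
def P():
    # No random input: the "correct" color is determined inside the
    # world program as the opposite of the agent's own guess, so the
    # agent is wrong (and dies) no matter what S does.
    guess = S("you're wrong")
    correct_color = "red" if guess == "green" else "green"
    return "live" if guess == correct_color else "die"
```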
> Do you agree that this problem is, and should remain, unsolvable? (I understand “should remain unsolvable” to mean that any supposed solution must represent some sort of confusion about the problem.)
No, like I said, it seems pretty straightforward to solve in UDT. It’s just that even in the optimal solution you still die.
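(Under the same reconstruction, “solving” it in UDT just means enumerating the candidate strategies, running the world program under each, and picking the best; here every candidate yields “die”.)

```python
# UDT-style evaluation: try each candidate strategy, run the world
# program under it, and compare outcomes. Both candidates yield "die".
candidates = {
    "always guess green": lambda obs: "green",
    "always guess red":   lambda obs: "red",
}
for name, S in candidates.items():
    guess = S("you're wrong")
    correct_color = "red" if guess == "green" else "green"
    outcome = "live" if guess == correct_color else "die"
    print(name, "->", outcome)
```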
> The input to P is supposed to contain the physical randomness in the problem, so P(S(“you’re wrong”)) doesn’t make sense to me. The idea is that both P(“green”) and P(“red”) get run, and we can think of them as different universes in a multiverse. Actually, in this case I should have written “def P():”, since there is no random correct color.
Ok, now I understand why you wrote your program the way you did.
> It’s just that even in the optimal solution you still die.
By “solve”, I meant find a way to win. I think that, after getting past our different word usage, we agree on the nature of the problem.