Thanks for mentioning artificial agents. If they can run arbitrary computations, Omega itself isn't implementable as a program, due to the halting problem. Maybe this is relevant to Newcomb's problem in general; I can't tell.
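To spell out the halting-problem worry, here's a minimal sketch (my own illustration; the names `omega_predict`, `naive_omega`, and `contrarian_agent` are hypothetical, and passing the agent as a function object stands in for passing its source code). Any total program that predicts "one-box" or "two-box" for every agent can be diagonalized against, exactly as in the standard halting-problem proof:

```python
def naive_omega(agent):
    """A toy stand-in predictor: it guesses "one-box" for every agent."""
    return "one-box"

def contrarian_agent(omega_predict):
    """Diagonalizing agent: ask the predictor about itself, then do the opposite."""
    prediction = omega_predict(contrarian_agent)
    return "two-box" if prediction == "one-box" else "one-box"

print(naive_omega(contrarian_agent))   # the prediction: one-box
print(contrarian_agent(naive_omega))   # the actual choice: two-box
```

A less naive Omega that tried to predict by simulating the agent would instead recurse forever on `contrarian_agent`, which is the halting-problem obstruction in its usual form.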
Surely not a serious problem: if the agent is going to hang around until the heat death of the universe before picking a box, then Omega's prediction of its actions doesn't matter.