I’m not complaining, just observing. I see you are using the “royal we” again.
I wonder whether being surrounded by agents that agree with you is helping.
I agree with you that people shouldn’t drink fatal poison, and that 2+2=4. Should you feel worried because of that?
If it were also the case that your friends all agreed with you, but the “mainstream/dominant position in modern philosophy and decision theory” disagreed with you, then yes, you should probably feel a bit worried.
Good point; my reply didn't take that into account. It all depends on the depth of understanding: to answer your remark, consider e.g. belief in the supernatural, or in UFOs.
Is there really such a disagreement about Newcomb’s problem?
The issue seems to be whether agents can convincingly signal to a powerful agent that they will act in some way in the future—i.e. whether it is possible to make credible promises to such a powerful agent.
I think that this is possible—at least in principle. Eliezer also seems to think this is possible. I personally am not sure that such a powerful agent could achieve the proposed success rate on unmodified humans—but in the context of artificial agents, I see few problems—especially if Omega can leave the artificial agent with the boxes in a chosen controlled environment, where Omega can be fairly confident that they will not be interfered with by interested third parties.
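To make the artificial-agent case concrete, here is a minimal sketch (Python, with made-up names like `omega_fill_boxes`; nothing here is a standard formulation) of how Omega might predict an agent whose source code it can read: simulate the agent's decision procedure, then fill the boxes based on the predicted choice.

```python
# Hypothetical sketch: Omega predicts an artificial agent by simulating
# its decision procedure, then fills the boxes accordingly.

def one_boxer():
    """An artificial agent whose policy is simply 'take only box B'."""
    return "one-box"

def two_boxer():
    """An agent whose policy is 'take both boxes'."""
    return "two-box"

def omega_fill_boxes(agent):
    """Omega runs the agent's own code to predict its choice,
    then puts the million in box B only for predicted one-boxers."""
    prediction = agent()          # simulate the agent's decision
    box_a = 1_000                 # transparent box: always $1,000
    box_b = 1_000_000 if prediction == "one-box" else 0
    return box_a, box_b

def payoff(agent):
    box_a, box_b = omega_fill_boxes(agent)
    choice = agent()              # the agent's actual choice
    return box_b if choice == "one-box" else box_a + box_b

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

In this toy setting the "credible promise" is just the agent's source code itself, which is why the controlled-environment case for artificial agents looks so much easier than the unmodified-human case.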
Do many in “modern philosophy and decision theory” really disagree with that?
More to the point, do they have a coherent counter-argument?
Thanks for mentioning artificial agents. If they can run arbitrary computations, then Omega itself can't be implemented as a program that predicts them perfectly, since that would require solving the halting problem. Maybe this is relevant to Newcomb's problem in general; I can't tell.
Surely not a serious problem: if the agent is going to hang around until the universal heat death before picking a box, then Omega's prediction of its actions doesn't matter.
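A toy illustration of that point (Python, all names such as `run_with_budget` hypothetical): Omega never needs a halting oracle if it runs the agent's code under a step budget and treats anything that hasn't decided by the deadline as never picking a box at all.

```python
# Hypothetical sketch: Omega sidesteps the halting problem by running the
# agent's decision procedure under a finite step budget. An agent that
# fails to decide before the deadline is treated as never choosing a box,
# so its (un)predictability is irrelevant.

def run_with_budget(agent_step, budget):
    """Advance the agent one step at a time; give up after `budget` steps.
    `agent_step` returns None while still deliberating, or a final choice."""
    for _ in range(budget):
        choice = agent_step()
        if choice is not None:
            return choice
    return None  # never decided within the budget

def make_deliberator(steps_needed, final_choice):
    """An agent that 'thinks' for a fixed number of steps before choosing."""
    state = {"steps": 0}
    def step():
        state["steps"] += 1
        return final_choice if state["steps"] >= steps_needed else None
    return step

quick = make_deliberator(10, "one-box")
stalled = make_deliberator(10**12, "two-box")  # effectively never halts

print(run_with_budget(quick, budget=1000))    # 'one-box'
print(run_with_budget(stalled, budget=1000))  # None: prediction doesn't matter
```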