If an agent only has cause to reason decision-theoretically if it is operating with uncertainty, then that might show that Omega itself has no need for decision theory. But even then it would do nothing to show that we have no need for decision theory. Knowing that some other agent has access to knowledge that we lack about our future decisions can’t erase the need for us to make a decision. This is basically the same reason decision theory works if we assume determinism; the fact that the universe is deterministic doesn’t matter to me so long as I myself am ignorant of the determining factors.
Also, if decision theorists think Newcomb’s Problem is incoherent on decision theory (i.e., it violates some basic assumption you need to do decision theory properly), then their response should be ‘Other’ or ‘I can’t answer that question’ or ‘That’s not a question’. It should never be ‘I take both boxes’. Taking both boxes is just admitting that you do think that decision theory outputs an answer to this question.
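For concreteness, a decision theory really does output an answer here. Below is a minimal evidential expected-value sketch; the payoffs and the predictor accuracy are illustrative assumptions, not part of the discussion above:

```python
# Evidential expected values for Newcomb's Problem.
# The payoffs and predictor accuracy are illustrative assumptions.
MILLION = 1_000_000   # opaque box: filled iff one-boxing was predicted
THOUSAND = 1_000      # transparent box: always present
accuracy = 0.99       # hypothetical predictor accuracy

# Conditioning on the choice: P(prediction matches choice) = accuracy.
ev_one_box = accuracy * MILLION
ev_two_box = (1 - accuracy) * MILLION + THOUSAND

print(f"one-box: {ev_one_box:,.0f}")   # 990,000
print(f"two-box: {ev_two_box:,.0f}")   # 11,000
```

On these (assumed) numbers, conditioning on your own choice recommends one-boxing; a causal calculation, which holds the boxes' contents fixed, would instead recommend two-boxing. Either way, the theory returns a verdict rather than declaring the question malformed.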
Most theisms are testable, and untestable statements can still be wrong (e.g., false, or unreasonable to believe).
“Omega predicts that you take both boxes, but you are ignorant of the fact. What do you do, given that Omega predicted correctly?”
“Omega makes a prediction that you don’t know. What do you do, given that Omega predicted correctly?”
I fail to see any difference in the decision theory used in these two scenarios.
And can you give an example of an untestable statement that could be true but is objectively false? What does it mean for a statement to be objectively unreasonable to believe?
The first is contradictory: you’ve just told me something, then told me that I don’t know it, which is obviously false.