Philosophers working in decision theory are drastically worse at Newcomb than are other philosophers, two-boxing 70.38% of the time, whereas non-specialists two-box 59.07% of the time (normalized after getting rid of ‘Other’ answers). Philosophers of religion are the most likely to get questions about religion wrong — 79.13% are theists (compared to 13.22% of non-specialists), and they tend strongly toward the Anti-Naturalism dimension. Non-aestheticians think aesthetic value is objective 53.64% of the time; aestheticians think it’s objective 73.88% of the time. Working in epistemology tends to make you an internalist, philosophy of science tends to make you a Humean, metaphysics a Platonist, ethics a deontologist. This isn’t always the case; but it’s genuinely troubling to see non-expertise emerge as a predictor of getting the important questions in an academic field right.
Decision theory supposes free will (equivalently, unpredictability of a future agent’s decisions); Newcomb’s problem supposes the opposite: predictability of an agent’s decisions even when the prediction is a factor in the decision. It makes sense that decision theorists’ answers to Newcomb’s problem should differ from everyone else’s. Theism is untestable and therefore not even wrong, rather than being objectively wrong. Likewise with objective aesthetics.
None of the specific things you say that experts get ‘wrong’ is objective or testable in a meaningful manner. Wouldn’t it be better to say that you generally disagree with experts?
If an agent only has cause to reason decision-theoretically if it is operating with uncertainty, then that might show that Omega itself has no need for decision theory. But even then it would do nothing to show that we have no need for decision theory. Knowing that some other agent has access to knowledge that we lack about our future decisions can’t erase the need for us to make a decision. This is basically the same reason decision theory works if we assume determinism; the fact that the universe is deterministic doesn’t matter to me so long as I myself am ignorant of the determining factors.
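For what it’s worth, the point that decision theory still outputs verdicts under uncertainty can be made concrete with the standard expected-value arithmetic for Newcomb’s problem. This is only a sketch: the $1,000,000/$1,000 payoffs are the usual textbook figures, the accuracy parameter `p` and the function name `expected_values` are my own illustrative choices, and the calculation is the evidential (conditional-probability) one, not an endorsement of any particular decision theory.

```python
def expected_values(p, million=1_000_000, thousand=1_000):
    """Evidential expected payoffs in Newcomb's problem, given a
    predictor that is correct with probability p.

    One-boxing: with probability p the predictor foresaw it and put
    $1M in the opaque box. Two-boxing: with probability (1 - p) the
    predictor erred and the $1M is there anyway, plus the guaranteed
    $1k in the transparent box.
    """
    one_box = p * million
    two_box = (1 - p) * million + thousand
    return one_box, two_box

# Even a merely 90%-accurate predictor makes one-boxing the better
# bet in expectation (roughly $900,000 vs roughly $101,000):
one, two = expected_values(0.9)
```

The crossover is at p just above 0.5005 on these payoffs, which is the point one-boxers lean on: the agent needs no knowledge of the determining factors, only the conditional probabilities, for decision theory to apply.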
Also, if decision theorists think Newcomb’s Problem is incoherent on decision theory (i.e., it violates some basic assumption you need to do decision theory properly), then their response should be ‘Other’ or ‘I can’t answer that question’ or ‘That’s not a question’. It should never be ‘I take both boxes’. Taking both boxes is just admitting that you do think that decision theory outputs an answer to this question.
Most theisms are testable, and untestable statements can still be wrong (i.e., false, unreasonable to believe, etc.).
“Omega predicts that you take both boxes, but you are ignorant of the fact. What do you do, given that Omega predicted correctly?”
“Omega makes a prediction that you don’t know. What do you do, given that Omega predicted correctly?”
I fail to see the difference between the decision theory used in these two scenarios.
And can you give an example of an untestable statement that could be true but is objectively false? What does it mean for a statement to be objectively unreasonable to believe?
The first is contradictory: you’ve just told me something, then told me I don’t know it, which is obviously false.