It’s not enough to say “the act of smoking”. What’s the causal pathway that leads from the lesion to the act of smoking?
Exactly, that’s part of the problem. You have a bunch of frequencies from various reference classes, no further information, and you have to figure out how the agent should act on that very limited information, which does not include explicit, detailed causal models. Not all possible worlds are even purely causal, so your point about causal pathways is at best an incomplete solution. That’s the hard edge of the problem, and even if the correct answer turns out to be ‘it depends’ or ‘the question doesn’t make sense’ or involves a dissolution of reference classes or whatever, one paragraph isn’t going to provide a solution and cut through the confusions behind the question.
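To make the disagreement concrete, here’s a minimal Python sketch of the Smoking Lesion under the two classical rules (all frequencies and payoffs are made-up illustrations, not canonical values):

```python
# A minimal sketch of the Smoking Lesion as seen by two classical decision
# rules; every number here is a hypothetical illustration, not a canonical value.

P_LESION = 0.5                # hypothetical base rate of the lesion
P_LESION_GIVEN_SMOKE = 0.8    # reference-class frequency among smokers
P_LESION_GIVEN_ABSTAIN = 0.2  # reference-class frequency among abstainers
P_CANCER_GIVEN_LESION = 0.9   # the lesion, not the smoking, causes cancer

U_SMOKE = 10     # utility of enjoying smoking
U_CANCER = -100  # utility of getting cancer

def evidential_eu(smoke: bool) -> float:
    """EDT: treat the act as evidence, conditioning on the raw frequencies."""
    p_lesion = P_LESION_GIVEN_SMOKE if smoke else P_LESION_GIVEN_ABSTAIN
    return (U_SMOKE if smoke else 0) + p_lesion * P_CANCER_GIVEN_LESION * U_CANCER

def causal_eu(smoke: bool) -> float:
    """CDT: treat the act as an intervention; the lesion's base rate is unmoved."""
    return (U_SMOKE if smoke else 0) + P_LESION * P_CANCER_GIVEN_LESION * U_CANCER

for smoke in (True, False):
    print(f"smoke={smoke}: EDT={evidential_eu(smoke):+.1f}, CDT={causal_eu(smoke):+.1f}")
```

From the exact same reference-class frequencies, EDT prefers abstaining (−18 vs −62) while CDT prefers smoking (−35 vs −45); the disagreement is over which computation the agent should run, not over the data, which is why gesturing at causal pathways doesn’t settle it.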
It seems like your argument proves too much, because it would equally dismiss taking Newcomb’s problem seriously: ‘It’s not enough to say the act of two-boxing...’ I don’t think that attitude would have been productive for the progress of decision theory if people had applied it to other, more mainstream problems.
It has a clear answer (smoke, since smoking doesn’t cause the cancer; the lesion does), and it’s only interesting because it can trip up attempts at mathematising decision theory.
That’s exactly the point Wei Dai is making in the post I linked! Decision theory problems aren’t necessarily hard to answer correctly in the specific case if we imagine ourselves in the situation. The point is that they are litmus tests for decision theories: they make us draw up more robust general decision procedures, or they illuminate our own decisions.
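To make the litmus-test reading concrete, here is the same pair of rules pointed at Newcomb’s problem (numbers again hypothetical stand-ins for the usual story):

```python
# The same two decision rules applied to Newcomb's problem; all numbers are
# hypothetical stand-ins for the usual setup.

ACCURACY = 0.99   # assumed reliability of the predictor
BIG = 1_000_000   # opaque box, filled iff one-boxing was predicted
SMALL = 1_000     # transparent box, always available

def evidential_eu(one_box: bool) -> float:
    """EDT: your choice is strong evidence about what was predicted."""
    p_predicted_one_box = ACCURACY if one_box else 1 - ACCURACY
    return p_predicted_one_box * BIG + (0 if one_box else SMALL)

def causal_eu(one_box: bool, p_big_box_full: float = 0.5) -> float:
    """CDT: the boxes are already fixed, so taking the extra box dominates."""
    return p_big_box_full * BIG + (0 if one_box else SMALL)

for one_box in (True, False):
    print(f"one_box={one_box}: EDT={evidential_eu(one_box):,.0f}, "
          f"CDT={causal_eu(one_box):,.0f}")
```

EDT one-boxes (990,000 vs 11,000 in expectation) while CDT two-boxes for any fixed belief about the opaque box; the divergence is information about the rules, not about the chooser.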
If you had said
If so, I don’t think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn’t apply to the smoker, as they’re using a different decision-making process.
in response to Newcomb’s problem, then most people here would see this as a flinch away from getting your hands dirty engaging with the problem. Maybe you’re right and this is a matter whereof we cannot speak, but simply stating that is not useful to those who don’t already believe it, and, given the world we live in, it can come across as bragging about your own non-confusion, or as lowering their ‘status’ by making it look like they’re confused about an easily settled issue, even if that’s not what you’re (consciously) doing.
If you told a group building robot soccer players that beating their opponents is easy and a five-year-old could do it, or if you told them that they’re wasting their time since the robots are using a different soccer-playing process, then that would not be very helpful in actually figuring out how to make/write better soccer-playing robots!
I’m not interested in a discussion with this level of condescension.