Prisoner’s Dilemma relies on causality; Newcomb’s Paradox is anti-causality. They’re as close to each other as astronomy and astrology.
The contents of Newcomb’s boxes are caused by the kind of agent you are—which is (effectively by definition of what ‘kind of agent’ means) mapped directly to the decision you will take.
Newcomb’s paradox can be called anti-causality only in some confused anti-compatibilist sense in which determinism is opposed to free will, and therefore “the kind of agent you are” must be opposed to “the decisions you make”—instead of correlating absolutely with them.
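To put numbers on that correlation, here is a minimal sketch (the $1,000/$1,000,000 payoffs are the standard ones from the problem; the accuracy parameter p and the Python framing are my own assumptions for illustration) of the expected value of each choice as a function of how reliably agent type predicts decision:

```python
# Minimal sketch: expected payoffs in Newcomb's Problem when the predictor's
# accuracy p measures how reliably "the kind of agent you are" maps to the
# decision you actually take. Payoffs are the standard $1,000 / $1,000,000.

def expected_payoffs(p):
    """Return (one_box, two_box) expected dollar values for predictor accuracy p."""
    one_box = p * 1_000_000                    # opaque box is full iff you were predicted to one-box
    two_box = p * 1_000 + (1 - p) * 1_001_000  # predicted two-box -> opaque box is empty
    return one_box, two_box

for p in (0.5, 0.51, 0.9, 0.99):
    one, two = expected_payoffs(p)
    print(f"p={p:.2f}: one-box={one:>12,.0f}  two-box={two:>12,.0f}")

# One-boxing wins for any p > ~0.5005: no backwards causation is required,
# only a correlation between agent type and choice.
```

Nothing here runs backwards in time: the predictor’s accuracy is just the strength of the type-to-decision correlation, which is the point of the comment above.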
In what way is Newcomb’s Problem “anti-causality”?
If you don’t like the superpowerful predictor, it works for human agents as well. Imagine you need to buy something but don’t have cash on you, so you tell the shopkeeper you’ll pay him tomorrow. If he thinks you’re telling the truth, he’ll give you the item now and let you come back tomorrow. If not, he keeps the item, and you lose a day’s worth of use, and so some utility.
So your best bet (if you’re selfish) is to tell him you’ll pay tomorrow, take the item, and never come back. But what if you’re a bad liar? Then you’ll blush or stammer or whatever, and you won’t get your good.
A regular CDT agent, however, having taken the item, will not come back the next day—and you know it, and it will show on your face. So in order to get what you want, you have to actually be the kind of person who respects their past selves’ decisions—a TDT agent, or a CDT agent with some pre-commitment system.
The above has the same attitude to causality as Newcomb’s Problem—specifically, it includes another agent rewarding you based on that agent’s prediction of your future behaviour. But it’s a situation I’ve been in several times.
This example is much like Parfit’s Hitchhiker, in a less extreme form.
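A hedged sketch of the shopkeeper scenario, in the same spirit (the item value, price, and lie-detection probability are all illustrative assumptions, not numbers from the thread):

```python
# Sketch of the shopkeeper scenario above. All utilities and the detection
# probability are assumed for illustration.
ITEM_VALUE = 10   # utility of having the item today
PRICE = 7         # cost of actually paying tomorrow
DETECT = 0.8      # chance a bad liar's insincerity shows on their face

def expected_utility(will_return):
    """Expected utility of promising to pay, given whether you would really return."""
    if will_return:
        # Sincere promise: the shopkeeper believes you; you pay tomorrow.
        return ITEM_VALUE - PRICE
    # Insincere promise: keep the money if undetected, get nothing if caught.
    return (1 - DETECT) * ITEM_VALUE

print("commitment-keeping agent:", expected_utility(True))         # 3.0
print("CDT agent planning not to return:", expected_utility(False))  # 2.0

# With a good enough "predictor" (DETECT high), being the kind of agent who
# honours past commitments beats planning to defect -- the same structure
# as Newcomb's Problem and Parfit's Hitchhiker.
```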
I actually have some sympathy for your position that Prisoner’s Dilemma is useful to study, but Newcomb’s Paradox isn’t. The way I would put it is: as the problems we study become more abstracted from real-world problems, there’s the benefit of isolating particular difficulties and insights, making theoretical progress easier, but also the danger that the problems we pay attention to are no longer relevant to the actual problems we face. (See another recent comment of mine making a similar point.)
Given that we have little more than intuition to guide us on “how much abstraction is too much?”, it doesn’t seem unreasonable for people to disagree on this topic and pursue different approaches, as long as the possibility of real-world irrelevance isn’t completely overlooked.
So, you consider this notion of “causality” more important than actually succeeding? If I showed up in a time machine, would you complain I was cheating?
Also, dammit, karma toll. Sorry, anyone who wants to answer me.