Your reply doesn’t give any reason to think my analysis is wrong, and doesn’t engage with its central point. Do you (1) deny that “on arriving in the city [the CDT agent] will have no further reason to pay”, (2) deny that this has the consequences I say it has, or (3) something else, and why?
I am not assuming it won’t pay. I am deducing that it won’t pay from the fact that it is a CDT agent, which by definition means that whenever it faces a choice it takes whichever action maximizes its utility, judged by the causal consequences of that action (in some not-necessarily-perfectly-clear counterfactual sense).
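Here is a minimal sketch of that deduction in Python, with made-up payoff numbers (the utilities and names below are illustrative assumptions, not part of the scenario itself):

```python
# Toy model of the CDT agent's choice on arriving in the city.
# Payoff numbers are illustrative assumptions only.

# Causal consequences of each action at the moment of choice:
# paying costs money and (causally) buys nothing further.
CITY_UTILITY = {"pay": -100, "dont_pay": 0}

def cdt_choice(utilities):
    """Plain CDT rule: take whichever action has the best causal
    consequences, ignoring any earlier promises or predictions."""
    return max(utilities, key=utilities.get)

city_action = cdt_choice(CITY_UTILITY)
print(city_action)  # -> "dont_pay"

# The driver, predicting this choice, never offers the ride in the
# first place, which is why the CDT agent does badly overall.
```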
(If you are supposing that we have a CDT agent provided with some means of making binding commitments regarding its future behaviour then I agree that such an agent can pay in the Parfit-hitchhiker situation. But that isn’t what Parfit’s example is about. At least, not as I’ve heard it presented around here; I don’t think I’ve read Parfit’s original presentation of the scenario.)
CDT agents perform poorly in Parfit’s hitchhiker dilemma if they can’t bind themselves to act a certain way in the future, and perform well if they can make binding commitments. For an example of a problem where CDT agents perform poorly regardless of whether they can make binding commitments, see the “retro blackmail” problem in https://arxiv.org/pdf/1507.01986.pdf.
An agent capable of making such commitments is either not a CDT agent (because when making its later choice, it considers not only the causal consequences of that choice but also its prior commitments) or more than a CDT agent (because it has some extra mechanism that binds it and forces it to make a particular decision even though its causal consequences are bad).
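To make the “extra mechanism” point concrete, here is a toy sketch (again with assumed payoffs) in which a binding commitment simply overrides the plain CDT rule, rather than being something the rule itself produces:

```python
# Continuation of the earlier sketch; payoffs remain illustrative.

CITY_UTILITY = {"pay": -100, "dont_pay": 0}

def choose(utilities, commitment=None):
    """With no commitment this is the plain CDT rule. A binding
    commitment, if present, overrides the rule entirely -- which is
    why the resulting agent is arguably no longer (or more than) a
    CDT agent."""
    if commitment is not None:
        return commitment  # binding: ignores the causal utilities
    return max(utilities, key=utilities.get)

print(choose(CITY_UTILITY))                    # "dont_pay": stranded
print(choose(CITY_UTILITY, commitment="pay"))  # "pay": gets rescued
```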
I hoped it might prove useful to look up the original context of Parfit’s thought experiment. It’s on page 7 in my copy of Reasons and Persons, a fact I mention in the hope that some future person wanting to look it up will find this comment and be saved some effort; he doesn’t use the term “hitchhiker”, though his example does involve being stranded in the desert. As it happens, Parfit’s purposes in considering the scenario aren’t quite those of evaluating CDT, though they’re not a million miles off, and I don’t think they help clarify whether or not we should consider CDT agents to be able to bind their future selves.

(He’s considering whether it is better to be “never self-denying”, which means “I never do what I believe will be worse for me”, and whether a certain “self-interest principle” he’s examining should be understood as telling people to be never self-denying. He doesn’t couch his discussion in terms of decision theories, and doesn’t e.g. consider anything much like the CDT-versus-EDT-versus-UDT-versus-TDT-etc. questions that are popular around here, though he does have things to say about how agents might select their dispositions, and might deliberately choose to be disposed to make non-future-optimizing choices to avoid bad consequences in the hitchhiker case.)