You probably do not understand me, because I have no idea what is meant by “the (subjunctive) outcome specification is more realistic” nor by “the output is posited to be accurate”.
What I am saying is that akrasia is perfectly well modeled by hyperbolic discounting, and that the fix for akrasia is simply CDT with exponential discounting. And that the other, truly Newcomb-like problems require a belief in this mysterious ‘acausal influence’ if you are going to ‘solve’ them as they are presented—as one-time decision problems.
What I am saying is that akrasia is perfectly well modeled by hyperbolic discounting, and that the fix for akrasia is simply CDT with exponential discounting.
http://en.wikipedia.org/wiki/Hyperbolic_discounting#Explanations
...seems to be saying that hyperbolic discounting is the rational result of modelling some kinds of uncertainty about future payoffs. Is it really something that needs to be fixed? Should it not be viewed as a useful heuristic?
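For what it’s worth, that derivation is easy to check numerically: if you discount exponentially at a hazard rate you are uncertain about, the average discount curve comes out hyperbolic. A minimal sketch, assuming an exponential prior over the hazard rate (the prior Sozou uses in his example):

```python
# Averaging exponential survival curves over an uncertain hazard rate
# yields a hyperbolic curve: E[exp(-rate * t)] = 1/(1+t) when the rate
# has an exponential prior with mean 1.
import numpy as np

rates = np.random.exponential(scale=1.0, size=200_000)  # hazard rate prior
for t in range(4):
    mixture = np.exp(-rates * t).mean()       # Monte Carlo estimate of E[e^{-rate*t}]
    print(t, round(mixture, 3), 1 / (1 + t))  # matches the hyperbolic 1/(1+t)
```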
Yes, it needs to be fixed, because it is not a rational analysis.
You are assuming, to start, that the probability of something happening is going to increase with time. So the probability of it happening tomorrow is small, but the probability of it happening in two days is larger.
So then a day passes without the thing happening. That it hasn’t happened yet is the only new information. But, following that bizarre analysis, I am supposed to reduce my probability assignment that it will happen tomorrow, simply because what used to be two days out is now tomorrow. That is not rational at all!
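Concretely, here is the updating pattern being objected to, using the s(t) = 1/(1+t) survival schedule that falls out of Sozou’s model (illustrative numbers only):

```python
# Probability the thing happens "tomorrow", judged today versus judged
# again after one uneventful day, under s(t) = 1/(1+t).
def s(t):
    return 1 / (1 + t)  # probability the thing still hasn't happened by day t

p_now = 1 - s(1)                    # judged today: 0.5
p_after_one_day = 1 - s(2) / s(1)   # a day later, given nothing happened: 1/3
print(p_now, p_after_one_day)
```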
Hmm. The article cited purports to derive hyperbolic discounting from a rational analysis. Maybe it is sometimes used inappropriately, but I figure creatures probably don’t use hyperbolic discounting because of a bias, but because it is a more appropriate heuristic than exponential discounting, under common circumstances.
The article cited (pdf) purports to derive hyperbolic discounting from a rational analysis.
But it does not do that. Sozou obviously doesn’t understand what (irrational) ‘time-preference reversal’ means. He writes:
I may appear to be temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but I prefer a cake immediately over a promise of a bottle of wine in one month.
That is incorrect. What he should have said is: “I am temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but two months from now I prefer a cake immediately over a promise of a bottle of wine in one month.”
A person whose time preferences predictably change in this way can be money pumped. If he started with a promise of a cake in two months, he would pay to exchange it for a promise of wine in three months. But then two months later, he would pay again to exchange the promise of wine in another month for an immediate cake.
Edit: Corrected above sentence.
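To put numbers on the pump (assuming, purely for illustration, a cake worth 1, a wine worth 1.5, and the hyperbolic discounter re-applying the same s(d) = 1/(1+d) schedule to delays measured from whatever ‘now’ happens to be):

```python
def s(d):
    return 1 / (1 + d)  # weight on a reward d months away, always measured from now

CAKE, WINE = 1.0, 1.5

# Month 0: cake-in-2-months is worth less than wine-in-3-months, so he pays to swap.
print(CAKE * s(2), WINE * s(3))  # 0.333... < 0.375

# Month 2: the delays reset, so cake-now beats wine-in-1-month, and he pays to swap back.
print(CAKE * s(0), WINE * s(1))  # 1.0 > 0.75
```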
There is nothing irrational in having the probabilities ‘s’ in his Table 1 at a particular point in time (1, 1⁄2, 1⁄3, 1⁄4). What is irrational and constitutes hyperbolic discounting is to still have the same ‘s’ numbers two months later. If the original estimates were rational, then two months later the current ‘s’ schedule for a Bayesian would begin (1, 3⁄4, 3⁄5, …). And the Bayesian would still prefer the promise of wine.
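The Bayesian version of the comparison, with the same illustrative cake and wine values as above:

```python
def s(t):
    return 1 / (1 + t)

# Two months later, condition the original schedule on the promise
# having survived 2 months: s(2+d)/s(2) for d = 0, 1, 2, ...
updated = [s(2 + d) / s(2) for d in range(3)]
print(updated)  # [1.0, 0.75, 0.6], i.e. (1, 3/4, 3/5, ...)

CAKE, WINE = 1.0, 1.5  # same illustrative values as before
print(CAKE * updated[0], WINE * updated[1])  # 1.0 < 1.125: still prefers the wine
```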
The article cited (pdf) purports to derive hyperbolic discounting from a rational analysis.
But it does not do that. Sozou obviously doesn’t understand what (irrational) ‘time-preference reversal’ means. He writes:
I may appear to be temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but I prefer a cake immediately over a promise of a bottle of wine in one month.
That is incorrect. What he should have said is: “I am temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but two months from now I prefer a cake immediately over a promise of a bottle of wine in one month.”
Uh, no. Sozou is just assuming that all else is equal—i.e. it isn’t your birthday, and you have no special preference for cake or wine on any particular date. Your objection is a quibble—not a real criticism. Perhaps try harder for a sympathetic reading. The author did not use the same items with the same temporal spacing just for fun.
People prefer rewards now partly because they know from experience that rewards in the future are more uncertain. Promises by the experimenter that they really really will get paid are treated with scepticism. Subjects are factoring such uncertainty in—and that results in hyperbolic discounting.
It can be seen from the table that a cake immediately is worth more than a promise of wine after a month, while a promise of wine after three months is worth more than a promise of cake after two months. So my preferences are indeed consistent with maximizing my expected reward.
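With the same illustrative numbers as above (cake worth 1, wine worth 1.5, s(t) = 1/(1+t)), both preferences do coexist in one expected-value calculation made at a single moment:

```python
def s(t):
    return 1 / (1 + t)

print(1.0 * s(0), 1.5 * s(1))  # 1.0   > 0.75    : cake now beats wine in a month
print(1.5 * s(3), 1.0 * s(2))  # 0.375 > 0.333...: wine in 3 months beats cake in 2
```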
“the (subjunctive) outcome specification is more realistic” = It is more realistic to say that you will suffer a consequence from hazing your future self than from hazing the next generation.
“the output is posited to be accurate” = In Newcomb’s Problem, Omega’s accuracy is posited by the problem, while Omega’s counterparts in other instances are taken to have whatever accuracy they have in real life.
What I am saying is that akrasia is perfectly well modeled by hyperbolic discounting, and that the fix for akrasia is simply CDT with exponential discounting.
That would be wrong though—the same symmetry can persist through time with exponential discounting. Exponential discounting is equivalent to a period-invariant discount factor. Yet you can still find yourself wishing your previous (symmetric) self did what your current self does not wish to.
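A quick numeric sketch of that, with made-up numbers (cost 1 paid immediately, benefit 0.5 in each of the next two periods, per-period discount factor 0.8):

```python
d = 0.8  # constant per-period (exponential) discount factor

# Doing the task now never looks worthwhile, at any date:
npv_now = -1 + 0.5 * d + 0.5 * d**2
print(npv_now)  # -0.28

# But "my past self did it one period ago" looks great today,
# because the cost is sunk and only the benefits remain:
value_of_past_action = 0.5 + 0.5 * d
print(value_of_past_action)  # 0.9
```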
And that the other, truly Newcomb-like problems require a belief in this mysterious ‘acausal influence’ if you are going to ‘solve’ them as they are presented—as one-time decision problems.
I thought we had this discussion on the Parfitian filter article. You can have Newcomb’s problem without acausal influences: just take yourself to be the Omega where a computer program plays against you. There’s no acausal information flow, yet the winning programs act isomorphically to those that “believe in” an acausal influence.
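A minimal sketch of that setup (hypothetical names; the “prediction” is just running the deterministic program before filling the boxes):

```python
PRIZE, BONUS = 1_000_000, 1_000

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play(program):
    # You, as Omega, predict by simply running the program: no acausal channel.
    prediction = program()
    opaque_box = PRIZE if prediction == "one-box" else 0
    # The program then makes its actual choice.
    if program() == "one-box":
        return opaque_box
    return opaque_box + BONUS

print(play(one_boxer))  # 1000000: the one-boxing program wins
print(play(two_boxer))  # 1000
```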