Showing that you can control such things* doesn’t seem to disprove CDT. It seems to motivate different CDT dynamics. (In case that’s a source of confusion, it could be called something else like Control Decision Theory.)
*taking this as given
Instead of picking one option, you could randomize. (If Omega can read my mind, then a coin flip should be no problem.)
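(For concreteness, a rough sketch of the mixed-strategy math, assuming the standard $1M/$1K payoffs and that the predictor forecasts your realized action, coin flip included, with accuracy p; the function name and the 0.99 are just illustrative assumptions.)

```python
# Expected payoff when you one-box with probability q and the predictor
# forecasts your realized action (coin flip included) with accuracy p.
def newcomb_ev(q, p=0.99, big=1_000_000, small=1_000):
    one_box = p * big                  # opaque box is full iff one-boxing was predicted
    two_box = (1 - p) * big + small    # full only if the predictor got you wrong
    return q * one_box + (1 - q) * two_box

print(round(newcomb_ev(q=0.5)))   # 500500 -- the coin just averages the two pure options
print(round(newcomb_ev(q=1.0)))   # 990000 -- pure one-boxing, under this assumption
```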
Are you really supposed to just leave it there, sitting in the attic? What sort of [madness] is that?
If it’s for someone else...
Sometimes, one-boxers object: if two-boxers are so rational, why do the one-boxers end up so much richer? But two-boxers can answer: because Omega has chosen to give better options to agents who will choose irrationally. Two-boxers make the best of a worse situation: they almost always face a choice between nothing and $1K, and they, rationally, choose $1K. One-boxers, by contrast, make the worst of a better situation: they almost always face a choice between $1M and $1M+$1K, and they, irrationally, choose $1M.
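(To put rough numbers on the "better options" framing, a minimal sketch, using the $1M/$1K payoffs above and an assumed predictor accuracy of 0.99.)

```python
# Expected payoff of a fixed disposition, assuming $1M sits in the opaque box
# iff one-boxing is predicted, and the predictor is right with probability 0.99.
def expected_payoff(one_boxer, p=0.99, big=1_000_000, small=1_000):
    filled = p if one_boxer else 1 - p       # chance the opaque box is full
    return filled * big + (0 if one_boxer else small)

print(round(expected_payoff(one_boxer=True)))   # 990000 -- the "better situation"
print(round(expected_payoff(one_boxer=False)))  # 11000  -- the "worse situation"
# Yet once the box is or isn't filled, grabbing both is always +$1K: that
# dominance step is exactly what the two-boxer is leaning on.
```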
There are also evolutionary approaches. You could always switch strategies. (Especially if you get to play multiple times. Also, what is all this inflation doing to the economy?)
If Omega will give you millions if you believe that Paris is in Ohio...
Paris, Ohio. (Might have been easier/cheaper to build in the past if it didn’t exist.)
Parfit’s hitchhiker
Violence is the answer.
Gratitude? $11,000 is a ridiculous sum—daylight robbery. (An entirely different question is:
a) It’s a charity. They save the lives of people stuck in the desert for free (for the beneficiaries). They run on donations, though.
b) It’s an Uber. How much do you tip? (A trip into the desert is awful during the day, terribly hot. It’s a long trip, you were way out there. And you suspect the powerful (and expensive) air conditioning saved your life. The food and water definitely did.))
Counterfactual mugging: Omega doesn’t know whether the X-th digit of pi is even or odd. Before finding out, she makes the following commitment. If the X-th digit of pi is odd, she will ask you for a thousand dollars. If the X-th digit is even, she will predict whether you would’ve given her the thousand had the X-th digit been odd, and she will give you a million if she predicts “yes.” The X-th digit is odd, and Omega asks you for the thousand. Should you pay?
$1,000? Try $100: scale the cost and the payout down by a factor of 10. (Also, by studying math, I might conclude those are bad odds for pi digits.)
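(The up-front bookkeeping, as a sketch: assuming a 50/50 prior over odd/even and a reliable predictor, with the scaled-down stakes included.)

```python
# Expected value of the policy "pay when asked", judged before the digit is
# known. Assumes a 50/50 prior over odd/even and that Omega's prediction of
# your policy is reliable.
def mugging_policy_ev(pay, cost, prize, p_odd=0.5):
    return p_odd * (-cost if pay else 0) + (1 - p_odd) * (prize if pay else 0)

print(mugging_policy_ev(pay=True,  cost=1_000, prize=1_000_000))  # 499500.0
print(mugging_policy_ev(pay=False, cost=1_000, prize=1_000_000))  # 0.0
# Scaled down by a factor of 10, the shape of the decision doesn't change:
print(mugging_policy_ev(pay=True,  cost=100,   prize=100_000))    # 49950.0
```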
The issue is that you’re trying to improve the past at all.
The framing leading up to it was weird, but this makes sense. (There’s also wanting to improve the future as a motivation. And if population increases over time, a small effect could have a huge impact. (Even before we adjust for impacts that won’t be linear.))*
*This also leads to ‘should you be nice to people after an apocalypse (before which, population was higher), because in a large population, even a small effect on the past would...’
Also ‘be nice’? What about...breaking people out? Be the change you wanted to see in the world.
Once you’ve started trying to acausally influence the behavior of aliens throughout the multiverse, though, you start to wonder even more about the whole lost-your-marbles thing.
Randomize for reduced cost. (It might also allow for improving expected value while adjusting for transaction costs in a reasonable fashion.)
(A lot of this has been covered before.)