I’d tell him to give me the cash and bugger off. If he wants me to put any effort into his sadistic schemes he can omnisciently win some lottery and get some real cash to offer. I value the extra day that will be gone from my life without me remembering it at well over 25 pounds, and to be honest I’m a bit wary of his nasty mind-altering drugs.
Considering pounds as a reliable measure of utilons:
A standard way to approach these types of problems is to act as if you didn’t know whether you were the real you or the simulated you. This avoids a lot of complications and gets you to the heart of the problem. Here, if you decide to give Omega the cash, there are three situations you can be in: the simulation, reality on the first day, or reality on the second day. The Dutch book odds of being in any of these three situations are the same, 1⁄3. So the expected return is 1⁄3 × (£260 − £100 − £100) = £20, twenty of Her Majesty’s finest English pounds.
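As a quick sanity check of that arithmetic, here is the thirder calculation sketched in Python (the framing of three equally weighted awakenings is the one described above; the code itself is just illustrative):

```python
# Outcomes if you hand over the cash, one entry per possible awakening:
#   heads simulation: +£260, tails day 1: -£100, tails day 2: -£100.
outcomes = [260, -100, -100]

# Equal 1/3 weight per awakening, as in the Dutch book argument above.
expected = sum(outcomes) / len(outcomes)
print(expected)  # 20.0
```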
There’s no way I’m going to go around merrily adding simulated realities motivated by coin tosses with chemically induced repetitive decision making. That’s crazy. If I did that I’d end up making silly mistakes such as weighing the decisions that follow ‘tails’ as twice as important as those that follow ‘heads’. Why on earth would I expect that to work?
Add a Newcombish problem to a sleeping beauty problem if you want, but you cannot just add together all the decisions each of them implies, divide by three, and expect to come up with sane decisions.
I’m either in the ‘heads sim’ or I’m in ‘tails real’.
If Omega got heads and my decision would have been to cooperate then I gain £260.
If Omega got heads and my decision would have been to defect then I gain £0.
If Omega got tails and my decision would have been to cooperate then I lose £200.
If Omega got tails and my decision would have been to defect then I gain £50.
Given that heads and tails are equally important, when I make my decision I’ll end up with a nice simple 0.5 x £260 + 0.5 x (-£100 - £100) = £30 vs 0.5 x £0 + 0.5 x £50 = £25. I’ve got no particular inclination to divide by 3.
If Omega got carried away with his amnesiac drug fetish and decided instead to ask for £20 ten days running, then my math would be: 0.5 x £0 + 0.5 x £50 = £25 vs 0.5 x £260 + 0.5 x (-£20 - £20 - £20 - £20 - £20 - £20 - £20 - £20 - £20 - £20) = £30. I’m definitely not going to decide to divide by eleven and weigh the 10 inevitable but trivial decisions of the tails-getting cooperator as collectively 10 times more significant than the single choice of the more fortunate cooperative sim.
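For concreteness, the halver bookkeeping in the two paragraphs above can be sketched as follows (the `expected_value` helper is mine, purely illustrative; the point is that payments are summed within a branch rather than split into extra weighted “awakenings”):

```python
# Weight heads and tails at 0.5 each, and total up all payments that
# happen within a branch of the coin toss.
def expected_value(heads_total, tails_total):
    return 0.5 * heads_total + 0.5 * tails_total

# Original two-day problem:
cooperate = expected_value(260, -100 - 100)   # 30.0
defect = expected_value(0, 50)                # 25.0
print(cooperate, defect)

# Ten-day £20 variant: the tails-branch total is unchanged
# (10 x £20 = £200), so the comparison is unaffected by how the
# penalty is arbitrarily divided up.
cooperate_10 = expected_value(260, -20 * 10)  # 30.0
print(cooperate_10)
```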
If my decision were to change based on how many times the penalty for unfortunate cooperation is arbitrarily divided, that would suggest my decision-making strategy is bogus. No 1⁄3 or 1⁄11 for me!
> There’s no way I’m going to go around merrily adding simulated realities motivated by coin tosses with chemically induced repetitive decision making. That’s crazy. If I did that I’d end up making silly mistakes such as weighing the decisions that follow ‘tails’ as twice as important as those that follow ‘heads’. Why on earth would I expect that to work?
Because it generally does. Adding simulated realities motivated by coin tosses with chemically induced repetitive decision making gives you the right answer nearly always—and any other method gives you the wrong answer (give me your method and I’ll show you).
The key to the paradox here is not the simulated realities, or even the sleeping beauty part: it’s the fact that the number of times you are awoken depends upon your decision! That’s what breaks things; if that dependence were absent, the method wouldn’t fall apart. If, say, Omega were to ask you on the second day whatever happens (but not give you the extra £50 on the second day, to keep the same setup) then your expectations are: accept, £20; refuse, £50/3, which is what you’d expect.
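Those two figures can be checked directly; this is my reading of the modified setup (three awakenings at equal 1⁄3 weight, with the £50 paid only once on refusal), arranged to reproduce the numbers quoted above:

```python
# Accept: the three awakenings pay +£260, -£100, -£100 as before.
accept = (260 - 100 - 100) / 3

# Refuse: only one of the three awakenings pays the £50.
refuse = (0 + 50 + 0) / 3

print(accept, refuse)  # 20.0 and ~16.67 (i.e. £50/3)
```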
> Because it generally does. Adding simulated realities motivated by coin tosses with chemically induced repetitive decision making gives you the right answer nearly always
You have identified a shortcut that seems to rely on a certain assumption. It sounds like you have identified a way to violate that assumption and will hopefully not make that mistake again. There’s no paradox. Just lazy math.
> and any other method gives you the wrong answer (give me your method and I’ll show you).
Method? I didn’t particularly have a cached algorithm to fall back on. So my method was “Read problem. Calculate outcomes for cooperate and defect in each situation. Multiply by appropriate weights. Try not to do anything stupid and definitely don’t consider tails worth more than heads based on a gimmick.”
If you have an example where most calculations people make would give the wrong answer then I’d be happy to tackle it.