1) I would one-box. Here’s where I think the standard two-boxer argument breaks down: in the idea of making a decision. The two-boxer claim is that, once the boxes have been fixed, the course of action that makes the most money is taking both boxes. Unless there is reverse causality going on here, I don’t think anyone disputes this. If, at that moment, you could make a choice totally independently of everything leading up to that point, you would two-box. Unfortunately, the very existence of Omega implies that such a feat is impossible.
2) A mildly silly argument for one-boxing: Omega plausibly makes his decision by running a simulation of you. If you are the real copy, it might be best to two-box; but if you are the simulation, then one-boxing earns real-you $1000000. Since you can’t distinguish whether you are real-you or simulation-you, you should one-box.
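The simulation argument can be made concrete with a toy payoff calculation. This is my own sketch, assuming (as the argument does) that the simulated copy and the real copy always make the same choice, and that Omega fills the opaque box iff the simulation one-boxes:

```python
def real_payoff(choice):
    """Payoff to real-you, given that simulation-you made the same choice.

    choice: 'one' (take only box B) or 'two' (take both boxes).
    Omega's prediction equals the simulation's choice, so box B is filled
    iff the (shared) choice is to one-box.
    """
    box_a = 1000                                      # transparent box, always $1000
    box_b = 1000000 if choice == 'one' else 0         # filled iff predicted one-boxing
    return box_b if choice == 'one' else box_a + box_b

print(real_payoff('one'))  # 1000000
print(real_payoff('two'))  # 1000
```

Under the indistinguishability assumption, the choice and the prediction can’t come apart, so one-boxing dominates.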
3) Would it change things for people if instead of $1000000 vs $1000 it were $1001 vs $1000? Where is the line drawn?
4) Eliezer: just curious how you deal with paradoxes about infinity in your utility function. If, for each n, on day n you are offered the chance to sacrifice one unit of utility that day to gain one unit of utility on day 2n and one unit on day 2n+1, what do you do? Each time you take the deal you seem to gain a unit of utility, but if you take it every day you end up worse off than you started.
dankane, Eliezer answered your question in this comment, and maybe somewhere else, too, that I don’t yet know of.
If he wasn’t really talking about infinities, how would you parse this comment (the living forever part):
“There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever.”
At the very least this should imply that for every N there is an f(N) such that he would rather have a 50% chance of living f(N) years and a 50% chance of dying instantly than a 100% chance of living N years. We could then consider the game where, if he is going to live N years, he is repeatedly offered the chance to instead live f(N) years with 50% probability and 0 years with 50% probability. Taking the bet n+1 times clearly does better than taking it n times, but the strategy “take the bet until you lose” guarantees him a very short life expectancy.
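A quick sketch with made-up numbers (f(N) = 4N is my own choice, not anything claimed above) shows the tension: each additional bet raises the expected lifespan, yet never stopping drives it to zero.

```python
from fractions import Fraction

def expected_life_after(n0, num_bets):
    """Expected lifespan: start with a guaranteed n0 years and take the 50/50
    bet (die instantly, or quadruple your remaining years) exactly num_bets
    times, then stop. Survive all bets with probability (1/2)**num_bets."""
    return Fraction(1, 2) ** num_bets * 4 ** num_bets * n0

print(expected_life_after(80, 0))   # 80
print(expected_life_after(80, 1))   # 160
print(expected_life_after(80, 10))  # 81920
# "Take the bet until you lose" survives k bets with probability 2**-k,
# which goes to 0 -- so that strategy's expected lifespan is 0 years.
```

Each extra bet doubles the expectation, but the only way to collect is to stop, and an unbounded utility function never tells you to.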
If your utility function is unbounded you can run into paradoxes like this.