You’re not answering the problem as it actually stands; instead you’re using perceived similarities to argue that it’s some other problem, or positing further elements (like simulated versions of yourself) that would drastically change the situation.
With Newcomb’s problem one properly one-boxes. The unknown state of the box is entangled with your decision, so by one-boxing you’re acausally affecting the likelihood that the opaque box contains $1,000,000. This works even for Omegas with less than 100% predictive success.
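For what it’s worth, the claim that one-boxing pays even for imperfect Omegas can be checked with a quick expected-value sketch (the payoff amounts are the usual illustrative ones: $1,000 in the visible box, $1,000,000 in the opaque box when filled):

```python
# Expected value of each strategy in classical Newcomb, as a function of
# Omega's predictive accuracy p.  Box A (transparent) holds $1,000; box B
# (opaque) holds $1,000,000 iff Omega predicted one-boxing.
def ev_one_box(p):
    # With probability p, Omega correctly predicted one-boxing and filled box B.
    return p * 1_000_000

def ev_two_box(p):
    # With probability (1 - p), Omega wrongly predicted one-boxing, so box B
    # is full anyway; box A's $1,000 is guaranteed either way.
    return (1 - p) * 1_000_000 + 1_000

# One-boxing already wins well below 100% accuracy:
assert ev_one_box(0.9) > ev_two_box(0.9)
# The break-even accuracy is only p = 0.5005:
assert ev_one_box(0.5) < ev_two_box(0.5)
```

So any Omega that predicts even slightly better than a coin flip already makes one-boxing the higher-expected-value choice.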
With this problem, your existence is a certain fact. You don’t need to entangle anything, because you exist and you’ll keep existing—in any universe where you’re actually making a decision, YOU EXIST. You need only grab both boxes, and you’ll have them both with no negative consequences.
This has absolutely NOTHING to do with quantum suicide. These decisions don’t even require a belief in MWI.
On the other hand, your argument essentially says that if your mother was a Boston Celtics fan who birthed you because she was 99.9% certain you’d support the Boston Celtics, then even if you hate both her and the Celtics you must nonetheless support them, because you value your existence.
Or if your parents birthed you because they were 99.9% certain you’d become an Islamist jihadi, you must therefore wage jihad. Even if you hate them, even if you don’t believe in Islam, even if they have become secular atheists in the meantime. Because you value your existence.
That’s insane.
You’re doing nothing but invoking some imaginary debt to your ancestors: “We produced you because we thought you’d act like this, so even if you hate our guts you must act like this, if you value your existence.”
Nonsense. This is nothing but an arbitrary deontological demand that has nothing to do with utility. I will one-box in the normal Newcomb’s problem, I can honorably decide to pay the driver in Parfit’s Hitchhiker problem, and I can commit to taking Kavka’s toxin—but I have no motivation to commit to one-boxing in this problem. I exist. My existence is not in doubt. And I only have a moral obligation to those who created me under a very limited set of circumstances that don’t apply here.
You should still one-box in Newcomb’s problem with transparent boxes. There’s no unknown state there. And if you think you shouldn’t: when Omega presents you with two transparent boxes, one of them containing $1,000 and the other empty—won’t you regret being the kind of person who two-boxes in that problem?
Can you link to a description of Newcomb’s problem with both boxes transparent?
If the problem is as you imply, I don’t know what Omega would do if I committed to one-boxing in the case of an empty transparent box and two-boxing in the case of a full one. That seems an exceptionally easy way to contradict Omega’s prediction, which in turn goes against the principle of Omega being Omega.
Also, what you’re doing seems to be substituting the uncertainty about the contents of the box with uncertainty about whether Omega will appear to me and offer me an empty or a full box. But there’s an infinite number of hypothetical quasi-deities that might appear to me, and I can’t commit to all their hypothetical arbitrary demands in advance.
I’m slightly lost by all the different variations of “Newcomb’s problem with transparent boxes”, but for what it’s worth, one can easily write down a version of “Newcomb’s problem with transparent boxes” that is equivalent to Parfit’s Hitchhiker:
First, Omega judges whether, if both boxes were full, you would take both or just one. Then it fills the first box accordingly. (To make it strictly ‘isomorphic’ we can stipulate that Omega will leave both boxes empty if you would two-box, but this doesn’t affect the decision theory.)
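The rules above can be sketched as a toy game (the payoff amounts are my own illustrative assumptions; the `strict` flag is the parenthetical stipulation):

```python
# A sketch of the transparent-Newcomb variant described above.  Omega first
# asks: "if both boxes were visibly full, would this agent take one or two?"
# and fills box B accordingly; under the strict stipulation, a predicted
# two-boxer finds both boxes left empty.
def play(strategy, strict=False):
    # strategy(box_a, box_b) -> list of box contents taken.  Omega predicts
    # by querying the strategy on the counterfactual where both boxes are full.
    predicted_one_box = strategy(1_000, 1_000_000) == [1_000_000]
    if predicted_one_box:
        box_a, box_b = 1_000, 1_000_000
    elif strict:
        box_a, box_b = 0, 0          # the 'isomorphic' stipulation
    else:
        box_a, box_b = 1_000, 0
    return sum(strategy(box_a, box_b))

one_boxer = lambda a, b: [b]         # always take only box B
two_boxer = lambda a, b: [a, b]      # always take both

assert play(one_boxer) == 1_000_000
assert play(two_boxer) == 1_000
assert play(two_boxer, strict=True) == 0
```

As the parenthetical says, the stipulation doesn’t change the decision theory: the one-boxer gets $1,000,000 either way, and the two-boxer gets at most $1,000.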
No doubt you will say that the difference between this and the “Prometheus problem” is that in the latter, you exist no matter what, and both boxes are full no matter what.
I agree that this seems intuitively to make all the difference in the world, but consider this possibility: perhaps the only way that Prometheus can predict your behaviour is by running a conscious simulation of you. If so, then choosing to two-box could cause your immediate “death”: the simulation will be stopped, and Prometheus will not create a ‘real world’ copy of you.
(On the other hand, if Prometheus’ prediction is based entirely on ‘hard-wired’ factors beyond your conscious control, like your genetic makeup or whatever, then the same logic that says you must smoke in the ‘smoking lesion’ problem can be used to say that you must two-box after all.)
Rules of (one version of) Transparent Newcomb.
Incorrect rules. You don’t need the “don’t invite to his games” one, and you don’t need randomization. Corrected here.
Both rules work. In both games, one-boxing no matter what is the winning strategy.
I designed my rules to have the feature that by one-boxing upon seeing an empty box B you visibly prove Omega wrong. In the version you linked to, you don’t necessarily: maybe Omega left box B empty because you would have two-boxed if it had been full.
So both problems can be reasonably called “Transparent Newcomb”. The one you linked to was invented first and is simpler, though.
I see. Thank you, but I’m unimpressed—by committing to one-boxing in Transparent Newcomb one still entangles uncertainty, just the uncertainty of whether and how Omega will appear. Now, knowing the rules, I can commit to one-boxing, thus increasing the chances Omega will appear to me—but that’s as meaningful as an Omega that says to people “I would have given you a million dollars, if you’d only worn a green hat”, and therefore I’d have to wear a green hat. It’s nothing but a meaningless modified Pascal’s wager.
Transparent Newcomb therefore again isn’t similar to the situation described in this thread. In this situation the decider exists no matter what: there’s no uncertainty.
You know the rules. You choose your strategy with full knowledge. If you lose, it’s your fault, you knowingly chose a bad strategy. Nothing arbitrary or meaningless here.
Bongo, you didn’t understand my objection: In the classical Newcomb, Omega explains the rules to you when he appears, and there’s one uncertain element (the contents of the opaque box). You determine the strategy, which by Omega’s predictive power has already been entangled with the uncertain contents of the box.
In the transparent Newcomb, either you don’t know the rules (so you can’t precommit to anything, and you can’t commit to any strategy for which some Omega2 wouldn’t require the opposite strategy), or you know the rules in advance and can therefore determine your strategy, which by Omega’s predictive power has already been entangled with the uncertain element of whether he’ll appear to you, and with how much money is in the boxes.
In the problem posed in this thread, however, there’s no uncertainty whatsoever. You exist, and that’s certain. The entanglement has already been resolved in favor of your existence. You don’t need to satisfy your mom’s expectations of you in order to keep on existing. You don’t need to become a musician if your dad expected you to be a musician; you don’t need to be a scientist if your mom expected you to be a scientist. In ANY universe where you get to decide a strategy, YOU EXIST. Or you wouldn’t be deciding anything.
People hopefully do understand that instead of “Omega and Prometheus speak of their predictions” we can quite easily have “Your mom and dad tell you of their pre-birth expectations for you”.
If anyone here honestly thinks that by failing their parents’ expectations they’ll stop existing, then they’re literally insane. It’s exactly the same as with them foiling Prometheus’ expectations.
This.
The only resolution for either scenario I can think of is that there is a very high chance that, regardless of what you precommit to do here or otherwise, at the moment of decision you will be compelled to choose to one-box, or be unable to pull out.
But aside from that improbable outcome, these, along with transparent Newcomb, are nonsense; they’re intractable. I can simply precommit to using the strategy that contradicts whatever Prometheus/Omega/Azathoth predicted, à la the halting problem.
And because, of the three, Azathoth is the one that most nearly exists, I am actually very likely to have children. An overwhelming majority of men do highly value sleeping with many women; the only reason this doesn’t result in massive uncontrollable pregnancy is that Azathoth, being the slow thinker he is, hasn’t had time to adjust for birth control. Plus, I can’t think of an outcome Azathoth would prefer to us creating AGI and proliferating across the universe.
Hence the reference to Transparent Newcomb*, in which the money is visible and yet, by some decision theories, it is still irrational to two-box. (Similar reasoning pertains to certain time-travel scenarios—is it rational to try to avoid driving if you know you will die in a car crash?)
*The reference: “For others, it’s easy because you take both boxes in the variant of Newcomb where the boxes are transparent and you can see the million dollars; just as you would know that you had the million dollars no matter what, in this case you know that you exist no matter what.”
EDIT: whoops, ninja’d. By almost two years.
Do you still two-box in this situation?
I’ve since decided that one-boxing in Transparent Newcomb is the correct decision—because to be the sort of agent that one-boxes is to be the sort of agent that more frequently gets given a filled first box. (I think I only fully realized this after reading Eliezer’s paper on TDT, which I hadn’t read at the time of this thread.)
So the individual “losing” decision is actually part of a decision theory that is winning *overall*, and is therefore the correct decision no matter how counterintuitive.
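The “winning overall” point can be made concrete with a quick simulation (the predictor accuracy and payoff amounts are assumed for illustration, not part of the original problem statement):

```python
import random

# Monte-Carlo sketch of the frequency argument: an Omega with accuracy p
# fills the (transparent) first box iff it predicts one-boxing, and its
# prediction matches the agent's actual disposition with probability p.
def average_payoff(one_boxes, p=0.9, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        correct = rng.random() < p
        box_filled = one_boxes if correct else not one_boxes
        big = 1_000_000 if box_filled else 0
        # A two-boxer also pockets the guaranteed $1,000 from the second box.
        total += big if one_boxes else big + 1_000
    return total / trials

# One-boxers face a filled box far more often and come out ahead on average:
assert average_payoff(one_boxes=True) > average_payoff(one_boxes=False)
```

The individual case where a one-boxer walks away from a visible $1,000 looks like a loss, but across the ensemble of situations Omega actually presents, the one-boxing disposition dominates.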
Mind you, as a practical matter, I think it’s significantly harder for a human to choose to one-box in the case of Transparent Newcomb. I don’t know if I could manage it if I were actually presented with the situation, though I don’t think I’d have a problem with the case of classical Newcomb.