“You were created by a god: a being called Prometheus. Prometheus was neither omniscient nor particularly benevolent. He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman. Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus’s predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight.”
Either I don’t get it, or you are misapplying a cached thought. Please explain to me where my reasoning is wrong (or perhaps where I misunderstand the problem):
When answering Newcomb’s problem, we believe Omega is a reliable predictor of what we will do, and based on that prediction places money accordingly.
In this problem, Prometheus always believes (by construction!) that we will one-box, and so will always place money according to that belief. In that case, the allocation of money will be the same for the people who one-box (most people, since Prometheus is a good predictor) and for the people who two-box.
You could make an alternate argument that even if you want to two-box, Prometheus’ near-infallibility means you are unlikely to (after all, if everyone did, he would be a terrible predictor), but that’s different from answering what you should do in this situation.
Either I don’t get it, or you are misapplying a cached thought. Please explain to me where my reasoning is wrong (or perhaps where I misunderstand the problem)
It’s not about the money this time—but the implications for utility are the same. The ‘million dollars’ in Newcomb’s problem is allocated in the same way that life is allocated in this problem. In this problem the money is basically irrelevant because it is never part of Prometheus’ decision. But existence in the world is part of the stakes.
The problem feels different from Newcomb’s because the traditional problem was constructed to prompt the intuition ‘but one-boxers get the money!’. Then the intuition goes ahead and dredges up reasoning strategies (TDT, for example) that are able to win the $1,000,000 rather than the $1,000. But people’s intuitions are notoriously baffled by anthropic-like situations. No intuition of “um, for some reason making the ‘rational choice’ is making me worse off” is prompted, and so they merrily revert to CDT and fail.
Another way to look at it, one that many people find helpful when considering standard Newcomb’s, is that you don’t know whether you are the actual person or the simulated person (or reasoning process) being run while Omega/Prometheus is allocating the $1,000,000/life.
If a consistent decision-making strategy is applied to both Newcomb’s and this problem, then those who one-box on Newcomb’s but two-box in this problem are making the same intuitive mistake as those who think Quantum Suicide is a good idea based on MWI assumptions.
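For readers who want the reasoning above in numbers, here is a minimal sketch of the expected-utility comparison under this comment’s (contested) framing: you can’t tell whether you are the reasoning Prometheus is evaluating or the already-created person, you assign credence q to being the evaluated copy, you place some large value V on coming to exist, and (following a later comment in this thread) two-boxing pays $200 versus $100 for one-boxing. The values of q, V and the dollar figures are illustrative assumptions, not part of the problem statement.

```python
# Expected utility of each choice in the Prometheus problem, *if* one accepts the
# "you might be the copy/reasoning Prometheus is evaluating" framing argued above.
# q, V and the dollar payoffs are illustrative assumptions only.

def expected_utility(choice, q, V, one_box_pay=100, two_box_pay=200):
    if choice == "one-box":
        # Evaluated copy: Prometheus approves the blueprint, so the person is created.
        # Already-created person: you exist anyway and take the single box.
        return q * (V + one_box_pay) + (1 - q) * (V + one_box_pay)
    else:
        # Evaluated copy: the blueprint is filed away unused, so nobody is created.
        # Already-created person: you exist anyway and take both boxes.
        return q * 0 + (1 - q) * (V + two_box_pay)

for q in (0.0, 0.3, 0.5):
    one = expected_utility("one-box", q, V=1_000_000)
    two = expected_utility("two-box", q, V=1_000_000)
    print(f"q={q:.1f}  one-box: {one:,.0f}  two-box: {two:,.0f}")
# q = 0 recovers the two-boxers' verdict (two-boxing wins by $100);
# any substantial q combined with a large V makes one-boxing come out ahead.
```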
You’re not answering the problem as it actually stands; instead you’re using perceived similarities to argue that it’s some other problem, or to posit further elements (like simulated versions of yourself) that would affect the situation drastically.
With Newcomb’s problem one properly one-boxes. The unknown state of the box is entangled with your decision, so by one-boxing you’re acausally affecting the likelihood that the non-transparent box has $1,000,000. This works even for Omegas with less than 100% probability of predictive success.
With this problem, your existence is a certain fact. You don’t need to entangle anything, because you exist and you’ll keep existing—in any universe where you’re actually making a decision, YOU EXIST. You only need to grab both boxes, and you’ll have them both with no negative consequences.
This has absolutely NOTHING to do with quantum suicide. These decisions don’t even require a belief in MWI.
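As a side note, the claim above that one-boxing pays even for imperfect Omegas is easy to check with the standard $1,000/$1,000,000 payoffs: the evidential expected value of one-boxing beats two-boxing whenever the predictor’s accuracy exceeds roughly 50.05%. A minimal sketch, assuming (as is usual but not stated here) that the predictor is equally accurate for one-boxers and two-boxers:

```python
# Evidential expected value in standard Newcomb as a function of predictor accuracy p.
# Assumes the predictor is equally accurate whether you one-box or two-box.

BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing and filled the opaque box.
    return p * BIG

def ev_two_box(p):
    # With probability (1 - p) the predictor wrongly expected one-boxing,
    # so the opaque box is full on top of the guaranteed $1,000.
    return (1 - p) * BIG + SMALL

for p in (0.51, 0.75, 0.99, 1.00):
    print(f"p={p:.2f}  one-box: ${ev_one_box(p):,.0f}  two-box: ${ev_two_box(p):,.0f}")
# One-boxing has the higher evidential expected value for any p > 0.5005.
```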
On the other hand, your argument essentially says that if your mother was a Boston Celtics fan who birthed you because she was 99.9% certain you’d support the Boston Celtics, then even if you hate both her and the Celtics you must nonetheless support them, because you value your existence.
Or if your parents birthed you because they were 99.9% certain you’d be an Islamist jihadi, you must therefore go jihad. Even if you hate them, even if you don’t believe in Islam, even if they have become secular atheists in the meantime. Because you value your existence.
That’s insane.
You’re not doing anything but invoking the concept of some imaginary debt to your ancestors. “We produced you, because we thought you’d act like this, so even if you hate our guts you must act like this, if you value your existence.”
Nonsense. This is nothing but an arbitrary deontological demand that has nothing to do with utility. I will one-box in the normal Newcomb’s problem, I can honorably decide to pay the driver in Parfit’s Hitchhiker problem, and I can commit to taking Kavka’s toxin—but I have no motivation to commit to one-boxing in this problem. I exist. My existence is not in doubt. And I only have a moral obligation to those that created me under a very limited set of circumstances that don’t apply here.
With Newcomb’s problem one properly one-boxes. The unknown state of the box is entangled with your decision, so by one-boxing you’re acausally affecting the likelihood that the non-transparent box has $1,000,000. This works even for Omegas with less than 100% probability of predictive success.
You should still one-box in Newcomb’s problem with transparent boxes. There’s no unknown state there. And if you think you shouldn’t: when Omega presents you with two transparent boxes, one of them containing $1,000 and the other empty—won’t you regret being the kind of person who two-boxes in that problem?
Can you link to a description of Newcomb’s problem with both boxes transparent?
If the problem is as you imply it to be, I don’t know what Omega would do if I one-boxed in the case of an empty transparent box and two-boxed in the case of a full one. That seems an exceptionally easy way to contradict Omega’s prediction, which in turn goes against the principle of Omega being Omega.
Also, what you seem to be doing is substituting the uncertainty about the contents of the box with uncertainty about whether Omega will appear to me and offer me an empty or a full box. But there’s an infinite number of hypothetical quasi-deities that might appear to me, and I can’t commit to all their hypothetical arbitrary demands in advance.
I’m slightly lost by all the different variations of “Newcomb’s problem with transparent boxes”, but for what it’s worth, one can easily write down a version of “Newcomb’s problem with transparent boxes” that is equivalent to Parfit’s Hitchhiker:
First, Omega judges whether, if both boxes are full, you will take both or just one. Then it fills the first box accordingly. (To make it strictly ‘isomorphic’ we can stipulate that Omega will leave both boxes empty if you decide to two-box, but this doesn’t affect the decision theory.)
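If it helps to see that rule in the concrete, here is a minimal sketch of the game just described, under the simplifying assumptions that Omega predicts perfectly, that the usual $1,000/$1,000,000 amounts are at stake, and that “the first box” is the big-prize box B discussed in the exchange below; the three strategies and their names are illustrative, not from the thread.

```python
# Transparent Newcomb under the rule above: Omega fills box B iff it predicts
# the agent would take only box B when both boxes are visibly full.
# Omega is modelled as a perfect predictor; payoffs and strategies are illustrative.

SMALL, BIG = 1_000, 1_000_000

def play(strategy):
    # Omega's prediction concerns the hypothetical in which box B is visibly full.
    box_b = BIG if strategy("full") == "one" else 0
    seen = "full" if box_b else "empty"
    take = strategy(seen)
    return box_b if take == "one" else box_b + SMALL

strategies = {
    "always one-box":              lambda seen: "one",
    "always two-box":              lambda seen: "two",
    "one-box only when B is full": lambda seen: "one" if seen == "full" else "two",
}

for name, s in strategies.items():
    print(f"{name}: ${play(s):,}")
# always one-box:              $1,000,000
# always two-box:              $1,000
# one-box only when B is full: $1,000,000 (under this particular rule; see the exchange below)
```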
No doubt you will say that the difference between this and the “Prometheus problem” is that in the latter, you exist no matter what, and both boxes are full no matter what.
I agree that this seems intuitively to make all the difference in the world, but consider this possibility: perhaps the only way that Prometheus can predict your behaviour is by running a conscious simulation of you. If so, then choosing to two-box could cause your immediate “death”, because the simulation will be stopped and Prometheus will not create a ‘real world’ copy of you.
(On the other hand, if Prometheus’ prediction is based entirely on ‘hard-wired’ factors beyond your conscious control, like your genetic makeup or whatever, then the same logic that says you must smoke in the ‘smoking lesion’ problem can be used to say that you must two-box after all.)
Rules of (one version of) Transparent Newcomb.
Incorrect rules. You don’t need the “don’t invite to his games” one, and you don’t need randomization. Corrected here.
Both rules work. In both games, one-boxing no matter what is the winning strategy.
I designed my rules to have the feature that by one-boxing upon seeing an empty box B you visibly prove Omega wrong. In the version you linked to, you don’t necessarily: maybe Omega left box B empty because you would have two-boxed if it was full.
So both problems can be reasonably called “Transparent Newcomb”. The one you linked to was invented first and is simpler, though.
I see. Thank you, but I’m unimpressed—by committing to one-boxing in Transparent Newcomb one still entangles uncertainty, just the uncertainty of whether and how Omega will appear. Now, knowing the rules, I can commit to one-boxing, thus increasing the chances Omega will appear to me—but that’s as meaningful as an Omega that says to people “I would have given you a million dollars, if you’d only worn a green hat”, and therefore I’d have to wear a green hat. It’s nothing but a meaningless modified Pascal’s wager.
Transparent Newcomb therefore again isn’t similar to the situation described in this thread. In this situation the decider exists no matter what: there’s no uncertainty.
You know the rules. You choose your strategy with full knowledge. If you lose, it’s your fault, you knowingly chose a bad strategy. Nothing arbitrary or meaningless here.
Bongo, you didn’t understand my objection:
In the classical Newcomb, Omega explains the rules to you when he appears, and there’s one uncertain element (the contents of the opaque box). You determine the strategy, which by Omega’s predictive power has already been entangled with the uncertain contents of the box.
In the transparent Newcomb, either you don’t know the rules (so you can’t precommit to anything, and you can’t commit to any strategy for which some Omega2 wouldn’t demand the opposite strategy), or you know the rules in advance and can therefore determine the strategy, which by Omega’s predictive power has already been entangled with the uncertain elements of whether he’ll appear to you and of how much money is in the boxes.
In the problem that’s posed in this thread, however, there’s no uncertainty whatsoever. You exist and that’s certain. The entanglement has already been resolved in favor of your existence. You don’t need to satisfy your mom’s expectations of you in order to keep on existing. You don’t need to become a musician if your dad expected you to be a musician. You don’t need to be a scientist if your mom expected you to be a scientist. In ANY universe where you get to decide a strategy, YOU EXIST. Or you wouldn’t be deciding anything.
People hopefully do understand that instead of “Omega and Prometheus speak of their predictions” we can quite easily have “Your mom and dad tell you of their pre-birth expectations for you”.
If anyone here honestly thinks that by failing their parents’ expectations they’ll stop existing, then they’re literally insane. It’s exactly the same as them foiling Prometheus’ expectations.
This.
The only resolution for either scenario that I can think of is that there is a very high chance that, regardless of what you precommit to do here or otherwise, at the moment of decision you will be compelled to choose to one-box, or be unable to pull out.
But aside from that improbable outcome, these scenarios, along with transparent Newcomb, are nonsense; they’re intractable. I can simply precommit to using the strategy that contradicts whatever Prometheus/Omega/Azathoth predicted, à la the halting problem.
And because, of the three, Azathoth is the one that most nearly exists, I am actually very likely to have children. An overwhelming majority of men really do highly value sleeping with many women; the only reason this doesn’t result in massive uncontrollable pregnancy is that Azathoth, being the slow thinker he is, hasn’t had time to adjust for birth control. Plus I can’t think of an outcome Azathoth would prefer to us creating AGI and proliferating across the universe.
With Newcomb’s problem one properly one-boxes. The unknown state of the box is entangled with your decision, so by one-boxing you’re acausally affecting the likelihood that the non-transparent box has $1,000,000.
Hence the reference to Transparent Newcomb*, in which the money is visible and yet, by some decision theories, it is still irrational to two-box. (Similar reasoning pertains to certain time-travel scenarios—is it rational to try to avoid driving if you know you will die in a car crash?)
*The reference:
For others, it’s easy because you take both boxes in the variant of Newcomb where the boxes are transparent and you can see the million dollars; just as you would know that you had the million dollars no matter what, in this case you know that you exist no matter what.
EDIT: whoops, ninja’d. By almost two years.
Do you still two-box in this situation?
I’ve since decided that one-boxing in Transparent Newcomb is the correct decision—because being the sort of agent that one-boxes is to be the sort of agent that more frequently gets offered a filled first box (I think I only fully realized this after reading Eliezer’s paper on TDT, which I hadn’t read at the time of this thread).
So the individual “losing” decision is actually part of a decision theory which is winning *overall*, and is therefore the correct decision no matter how counterintuitive.
Mind you, as a practical matter, I think it’s significantly harder for a human to choose to one-box in the case of Transparent Newcomb. I don’t know if I could manage it if I was actually presented with the situation, though I don’t think I’d have a problem with the case of classical Newcomb.
I didn’t get it until I read this line:
Another way to look at it, one that many people find helpful when considering standard Newcomb’s, is that you don’t know whether you are the actual person or the simulated person
So the question is: is Prometheus running this simulation? If so, he will create you only if you one-box.
So it’s not that you were created by Prometheus, it’s that you might currently be being created by Prometheus, in which case you want to get Prometheus to keep on creating you.
Or, less specifically: if I enter into a situation which involves an acausal negotiation with my creator, I want to agree with my creator so as to be created. This type of decision is likely to increase my measure.
Due to my current beliefs about metaverses I would still two-box, but I now understand how different metaverse theories would lead me to one-box; because I assign a nontrivial chance that I will later be convinced of other theories, I’m wondering if a mixed strategy would be best… I don’t really know.
So the question is: is Prometheus running this simulation? If so, he will create you only if you one-box.
Lest my words be a source of confusion, note that I use ‘simulation’ as an example or ‘proof of concept’ for how the superintelligence may be doing the deciding. He may be using some other rule of inference that accurately models my decision making. But that doesn’t matter to me.
I agree with you here I believe. I didn’t mean to imply that Prometheus was literally running the simulation, just that phrasing it in this way made the whole thing “click” for me.
I think my phrasing is the potential source of confusion.
Well, I definitely am confused. What utility are you gaining or losing?
Is this an issue about your belief that you were created by Prometheus? Is this an issue about your belief in Omega’s or Prometheus’ honesty? I’m very unclear on what I can possibly stand to gain or lose by being in a universe where Prometheus is wrong versus one where he is right.
The allocation of money is unspecified in this version, but has nothing to do with anyone’s predictions. You don’t get more money by one-boxing. I’ll edit to make that clearer.
Thanks. Unfortunately, now I’m horrendously confused. What’s the point of choosing either? Unless Prometheus is apt to feel vengeful (or generous), it doesn’t seem like there is any reason to prefer one course of action over another.
My understanding is that you get $200 by two-boxing and $100 by one-boxing, but with the caveat that you were created by Prometheus, God of One-Boxers. The allocation of money doesn’t change based on Prometheus’s predictions, because by Omega’s testimony you already know what set of Newcomblike predictions you belong to: your choice is whether or not to subvert that prediction.
I one-box on standard Newcomb, but I’d choose two boxes here.
I take one box. Normal Newcomblike reasoning.