This shouldn’t be tough. He gives you the box. You flip a coin. You open or don’t. He saw that coming. You get what he gave you.
Fancy talk doesn’t change his ability to know what you are gonna do. You might as well say that your plan is bad because another version of you had a heart attack before they could open any boxes, as say that your plan is good because another version of you tricked Omega.
Consider that downvoted. You’re totally strawmanning. You’re not taking this seriously, and you’re not listening, because you’re not responding to what I actually said. Did you even read the OP? What are you even talking about?
I’m simplifying, but I don’t think it’s really strawmanning.
There exists no procedure that the Chooser can perform after Omega sets down the box and before they open it that will cause Omega to reward a two-boxer or fail to reward a one-boxer. Not X-raying the boxes, not pulling a TRUE RANDOMIZER out of a portable hole. Omega is defined as part of the problem, and fighting the hypothetical doesn’t change anything.
He correctly rewards your actions in exactly the same way that the law in Prisoner’s Dilemma hands you your points. Writing long articles about how you could use a spoon to tunnel through and overhear the other prisoner, and that if anyone doesn’t have spoons in their answers they are doing something wrong...isn’t even wrong, it’s solving the wrong problem.
What you are fighting, Omega’s defined perfection, doesn’t exist. Sinking effort into fighting it is dumb. The idea that people need to ‘take seriously’ your shadow boxing is even more silly.
Like, say we all agree that Omega can’t handle ‘quantum coin flips’, or, heck, dice. You can just re-pose the problem with Omega2, who alters reality such that nothing that interferes with his experiment can work. Or walls that are unspoonable, to drive the point home.
> Writing long articles about how you could use a spoon to tunnel through and overhear the other prisoner, and that if anyone doesn’t have spoons in their answers they are doing something wrong...

Another strawman. Strawman arguments may work on some gullible humans, but don’t expect them to sway a rationalist.
> You can just re-pose the problem with Omega2, who alters reality
You’re not being very clear, but it sounds like you’re assuming a contradiction. You can’t assert that Omega2 both does and does not alter the reality of the boxes after the choice. If you allow a contradiction you can do whatever you want, but it’s not math anymore. We’re not talking about anything useful. Making stuff up with numbers and the constraint of logic is math. Making stuff up with numbers and no logic is just numerology.
> not pulling a TRUE RANDOMIZER out
I think this is the crux of your objection: I think agents based on real-world physics are the default, and an ‘agent - QRNG’ problem (an agent minus a quantum random number generator) is an additional constraint, a special case. You think that classical-only agents are the default, and ‘classical + QRNG’ is the special case.

Recall how an algorithm feels from the inside. Once we know all the relevant details about Pluto, you can still ask, “But is it really a planet?” But at this point, understand that we’re not talking about Pluto. We’re talking about our own language. Thus which one is really the default should be irrelevant. We should be able to taboo “planet”, use alternate names, and talk intelligently about either case. But recall that the OP specifically assumes a QRNG:
> This is regardless of how well Omega can predict your choice. Given quantum dice, Newcomb’s problem is not Newcomblike.
>
> While it’s a useful intuition pump, the above argument doesn’t appear to require the Many Worlds interpretation to work. (Though Many Worlds is probably correct.) The dice may not even have to be quantum. They just have to be unpredictably random.
>
> The qualification “given quantum dice” is not vacuous. A simple computer algorithm isn’t good enough against a sufficiently advanced predictor. Pseudorandom sequences can be reproduced and predicted. The argument requires actual hardware.
Pretending that I didn’t assume that, when I specifically stated that I had, is logically rude.
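To make that last quoted point concrete, here’s a minimal sketch (plain Python; the seed is made up) of why a software randomizer doesn’t help against a sufficiently advanced predictor: a pseudorandom sequence is a deterministic function of the seed and the algorithm, so anything that knows both reproduces it exactly. A hardware QRNG has no seed to steal.

```python
import random

# A pseudorandom generator is just an algorithm: anyone who knows the
# seed and the algorithm can reproduce every "random" draw in advance.
chooser_rng = random.Random(20170401)    # the Chooser's software "randomizer"
predictor_rng = random.Random(20170401)  # a predictor running the same algorithm

chooser_choices = ["one-box" if chooser_rng.random() < 0.5 else "two-box"
                   for _ in range(10)]
predicted_choices = ["one-box" if predictor_rng.random() < 0.5 else "two-box"
                     for _ in range(10)]

assert chooser_choices == predicted_choices  # perfectly predictable
```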
> What you are fighting, Omega’s defined perfection, doesn’t exist. Sinking effort into fighting it is dumb. The idea that people need to ‘take seriously’ your shadow boxing is even more silly.
Why do we care about Newcomblike problems? Because they apply to real-world agents, like AIs. They’re useful to consider.
Omniscience doesn’t exist. Omega is only the limiting case, but Newcomblike reasoning applies even in the face of an imperfect predictor, so it still applies in the real world. QRNGs do exist in the real world, and *if* your decision theory can’t account for them, and use them appropriately, then it’s the wrong decision theory for the real world. The classical + QRNG case is useful to think about. It isn’t silly to ask other rationalists to take it seriously, and I’m starting to suspect you’re trolling me here.
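To put a rough number on “imperfect predictor” (a sketch only, assuming the standard $1,000 / $1,000,000 payoffs, which may not be the exact figures in every formulation): one-boxing already wins in expectation once the predictor is right slightly more than half the time.

```python
# Expected payoffs against a predictor with accuracy q (the probability
# it correctly predicted your actual choice), assuming the standard
# $1,000 transparent box and a $1,000,000 opaque box filled iff the
# predictor expected one-boxing.

def ev_one_box(q: float) -> float:
    # Right (prob q): opaque box full -> $1,000,000; wrong: empty -> $0.
    return q * 1_000_000

def ev_two_box(q: float) -> float:
    # Right (prob q): opaque box empty -> $1,000; wrong: full -> $1,001,000.
    return q * 1_000 + (1 - q) * 1_001_000

for q in (0.5, 0.5005, 0.51, 0.9, 0.999):
    print(f"q={q}: one-box {ev_one_box(q):,.0f}  two-box {ev_two_box(q):,.0f}")
# One-boxing pulls ahead as soon as q > 0.5005, nowhere near omniscience.
```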
But we should be able to talk intelligently about the other case. Are there situations where it’s useful to consider agent - QRNG? Sure, if the rules of the game stipulate that the Chooser promises not to do that. That’s clearly a different game than in the OP, but perhaps closer to the original formulation in the paper that g_pepper pointed out. In that case, you one-box. We could even say that Omega claims to never offer a deal to those he cannot predict accurately. If you know this, you may be motivated to be more predictable. Again, a different game.
But can it look like the game in the OP to the Chooser? Can the Chooser think it’s in classical + QRNG, when, in fact, it is not? Perhaps, but it’s contrived. It is unrealistic to think a real-world superintelligence can’t build a QRNG, given access to real-world actuators. But if you boxed the AI Chooser in a simulated world (denying it real actuators), you could provide it with a “simulated QRNG” that is not, in fact, quantum. Maybe you generate a list of numbers in advance; then you could create a “simulated Omega” that can predict the “simulated QRNG” due to outside-the-box information, but, of course, not a real one.
But this isn’t The Deal either. This is isomorphic to the case where Omega cheats by loading your dice to match his prediction after presenting the choice (or an accomplice does this for him), thus violating The Deal. The Chooser must choose, not Omega, or there’s no point. With enough examples the Chooser may suspect it’s in a simulation. (This would probably make it much less useful as an oracle, or more likely to escape the box.)
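Here’s a toy sketch of that boxed-Chooser setup, with hypothetical names throughout: the “simulated QRNG” just replays a list generated in advance, so a “simulated Omega” with access to that list never misses, which is exactly the sense in which the dice are loaded.

```python
import random

# Toy model of the boxed Chooser: what looks like a QRNG from inside the
# box is really a pre-generated script, and the "simulated Omega" reads
# that same script (outside-the-box information) to predict every flip.

pregenerated_flips = [random.choice(["one-box", "two-box"]) for _ in range(5)]

class SimulatedQRNG:
    """Looks random from inside the simulation; is actually a fixed script."""
    def __init__(self, script):
        self._script = iter(script)

    def flip(self):
        return next(self._script)

def simulated_omega_prediction(script, trial):
    # The prediction is trivially perfect: Omega sees the script itself.
    return script[trial]

chooser_rng = SimulatedQRNG(pregenerated_flips)
for trial in range(5):
    prediction = simulated_omega_prediction(pregenerated_flips, trial)
    choice = chooser_rng.flip()
    assert prediction == choice  # the "simulated Omega" is never wrong
```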
> They just have to be unpredictably random.

*eyeroll*

If Omega can’t predict dice, then he isn’t Omega. How can you spend so many words on this without getting it? If your ‘solution’ to the problem involves Omega being unable to predict whether you one-box or two-box, because of dice or anything else, then you aren’t talking about the same problem as everyone else. I’ve pointed this out like 4 times. Oscar and Mr Mind pointed it out too.

> you aren’t talking about the same problem as everyone else.

More logical rudeness!

*facepalm*

Duh. You’re the one not talking about the games in the OP! I was talking about certain very specific modifications to Newcomb’s problem (starting from the formulation I quoted from the wiki at the top of the OP, not from Nozick’s paper!). I even put “cheating” in the title, as in “breaking the rules” and also in the sense of “cheating on a deal”. I used both of these senses in the OP:
> I just cheated Omega out of an expected $250 over one-boxing.
(Cheating a deal.)
> The argument requires actual hardware. Hardware that some humans possess, but not innately. If you want to call that cheating, see the title
(Breaking the rules.)
I also spoke of several alternative ways the game could be modified, both in the OP and in this thread. Don’t pretend I said they’re all the same game! That’s logically rude.
Taboo “Newcomb’s Problem”. It’s clearly biasing your thinking whenever anyone seems to be disrespecting your sacred cow. We’re talking about The Deal. While we’re at it, Taboo “Omega” too, in case he is also a cow.

### Game A

We’re talking about The Deal in an MWI universe. There’s a near-omniscient Being offering the deal. Wait, Taboo “omniscient”, it’s probably ill-defined, like everything else about God. Said Being knows the wavefunction of the entire universe of discourse (if that’s even consistent), including both the Chooser and the dice, and can predict how it will evolve. This means the Being knows the whole future. The Being CAN predict the outcome of a quantum coin flip. Look! He even drew a picture for you:
```
prediction
    |
set up box
    |
  flip!
    |\
    | \
    H  T
```
The Being correctly predicts both Everett branches. Both a Head and a Tail happen. Remember, I said “MWI”. Both branches are equally real. But the Being must either fill the box or not before the flip, per The Deal. That any given Chooser copy only sees one “random” outcome is a subjective illusion. This is not random. MWI is deterministic.
The optimal strategy for the Chooser in Game A is to use a QRNG to one-box with just over 50% measure, and two-box with just under 50% measure.
The optimal strategy for the Being before the split (predicting this), to maximize the measure of accurately set-up boxes after the split, is to put the million in.
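As an expected-value sketch of why that mixed strategy beats pure one-boxing in Game A (assuming the standard $1,000 / $1,000,000 amounts and a Being whose best response is to fill the box whenever the one-boxing measure exceeds 1/2; the OP’s exact numbers may differ):

```python
# Expected payoff in Game A for a Chooser that one-boxes with measure p.
# Sketch assumptions (not necessarily the OP's exact numbers): $1,000 in
# the transparent box, $1,000,000 in the opaque box, and a Being that
# fills the opaque box whenever the one-boxing measure exceeds 1/2.

def game_a_expected_payoff(p: float) -> float:
    box_filled = p > 0.5                  # the Being's best response
    opaque = 1_000_000 if box_filled else 0
    one_box_payoff = opaque               # take only the opaque box
    two_box_payoff = opaque + 1_000       # take both boxes
    return p * one_box_payoff + (1 - p) * two_box_payoff

print(game_a_expected_payoff(1.0))     # pure one-boxing: 1,000,000
print(game_a_expected_payoff(0.5001))  # mixed strategy: about 1,000,500
print(game_a_expected_payoff(0.4999))  # tip under 1/2 and the box stays empty
```

Under these particular assumptions the edge over pure one-boxing is the transparent-box money collected on the two-boxing branches; the exact figure depends on the payoffs and on how the Being responds to mixed strategies.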
If you want to dispute Game A because I missed some important, logically valid reason, I’d like to know what that is, but it’s certainly nothing you’ve said so far. And call it “Game A” to be clear.
### Game B
We don’t know if we’re in MWI or not. There’s a different Being offering The Deal. This being has a track record of predicting significantly better than chance, so Newcomblike reasoning applies. But said Being certainly isn’t omniscient. You have access to a randomizer the Being can’t predict. (You can always do this in principle, for any physically realizable Being. For a sufficiently weak predictor, ordinary dice will suffice, but even vs a Being with a Jupiter Brain in his back pocket, a QRNG is sufficient.)
It’s logically rude to take the ordinary dice from Game B and the Being that knows the wavefunction from Game A, and pretend that’s the only game.
Game B is a weakened version useful to think about since it would also apply to superintelligent AGIs. We can use the strategies developed in Game A to help us think about Game B. We can use it to help develop and test a good decision theory. Does your decision theory handle Game B properly? If so, great! More evidence it would work in the real world. If not, you should update and rethink your decision theory.
Since “your decision theory” may also be a cow, note that I never defined what your decision theory is, not here, not in the OP. I also never said your decision theory necessarily fails this test. I said,
> Having proven a strategy superior to one-boxing, I can claim that *if* your decision theory just one-boxes without pre-committing to use quantum dice, something is wrong with it. [added emphasis on *if*]
Of course, this claim is for Game A!
Now have we sufficiently established what we both think Pluto is, or is there more you wanted to say? Or do I have to Taboo more cows?
Look, surely in your diagram there is more than just the one fork, right? You could have a heart attack, get struck by a meteor, commit suicide, take a sudden vow of poverty or whatever. Point is, there’s a zero box fork, right? So what does the cow do when you will zero box?
See the trick? The whole one-box or two-box was a false binary all along. How does the Everett fork where you had your heart attack play out? No quantum dice required, the cow has always been a fraud!

Except not, because it’s just a logic puzzle. It doesn’t need to consider the fork where you zero box. You are given as a profit maximizer. The cow is given as able to discern your future actions (including futile efforts at randomization). These are just parts of the question, the same as the jail being inescapable in the Prisoner’s Dilemma.

It feels like you are circling (grazing?) around to being right. Like, earlier when you got to “In that case, you one-box,” you were there. ‘That case’ is the base case, the case that we all mean when we say “Cow’s problem”.
It’s not a cow, it’s a bull :-D