If you really remove the option to cheat, how will the street magician be able to accurately predict whether the philosopher two-boxes?
There are people who, with months of practice, might learn to classify people as one-boxers or two-boxers with a high degree of accuracy, but that’s not a skillset your average street magician possesses.
Finding a suitable person and then training the person to have that skillset is probably a bigger issue than securing the necessary funds for a trial.
So… if I point out a reasonably easy way to verifiably implement this without cheating, would you agree with my original premise?
I still believe that it will be incredibly hard to set up an experiment that makes an atheist philosopher or an average LessWrong participant think that the predictions are genuine predictions.
I don’t know how philosophers would react. I one-box when the question is posed at a theoretical level. If you put me in a practical situation against a person doing genuine prediction, I might try some form of occlumency to hide that I’m two-boxing.
This is a bit like playing Werewolf. In the group of people with NLP training I play Werewolf with, there are a bunch of people who are good enough to read most players when they play Werewolf with “normal” people. On the other hand, in that NLP group nearly everyone has good control of their own state and doesn’t let other people read them. The last time I played, one girl told me afterwards that I came across as very authentic even when playing a fake role.
I’m not exactly sure about the occlumency skills of the average philosophy major, but I would guess that many philosophy majors believe that they themselves have decent occlumency skills.
As a side note, any good attempt at finding out whether someone is one-boxing or two-boxing might itself change whether they one-box or two-box.
Unless I’ve misunderstood this it isn’t an adversarial game.
You’re not trying to trick the predictor if you’re one-boxing. If anything, you want the predictor to know that with as much certainty as possible. Wearing your heart on your sleeve is good for you.
He’s trying to trick the predictor into thinking that he’s going to one-box, but then actually two-box.
I see now.
The first description of this I came across had a huge difference between boxes A and B, on the order of 1,000 vs 1,000,000.
At that level there doesn’t seem to be much point in even intending to two-box; better to let the predictor keep his good record as a predictor while I get the million. An improvement of an extra 1,000 just isn’t convincing.
Though restated with a smaller difference, like 2,000 in one box and 1,000 in the other, the choice of two-boxing for 3,000 vs 2,000 is more appealing.
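To make that comparison concrete, here is a rough expected-value sketch for both payoff scales; the predictor accuracy p is an assumed, illustrative parameter and not part of the original problem statement.

```python
# Rough expected-value sketch for the two payoff scales mentioned above.
# Assumption: the predictor is correct with probability p, and the opaque
# box holds the big amount iff one-boxing was predicted.

def expected_payoffs(small, big, p):
    """Return (EV of one-boxing, EV of two-boxing) under predictor accuracy p."""
    one_box = p * big                # big payoff only if correctly predicted
    two_box = small + (1 - p) * big  # big payoff only if the predictor was wrong
    return one_box, two_box

for small, big in [(1_000, 1_000_000), (1_000, 2_000)]:
    for p in (0.9, 0.99):
        ob, tb = expected_payoffs(small, big, p)
        print(f"small={small:>9,} big={big:>9,} p={p}: "
              f"one-box={ob:>11,.0f} two-box={tb:>11,.0f}")
```

Under these assumed accuracies one-boxing still comes out ahead at both scales, but the gap is far less dramatic with the smaller payoffs, which matches the intuition above.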
The easiest way is to have the result be determined by the decision; the magician arranges the scenario such that the money is under box A IFF you select only box A. That is only cheating if you can catch him.
The details of how that is done are a trade secret, I’m afraid.
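A minimal sketch of the distinction at stake here, with purely hypothetical names: under the trick described above, the contents of box A are a function of the actual choice, whereas a genuine predictor has to commit to the contents before the choice is made.

```python
# Hypothetical sketch only; function names and structure are illustrative.

def cheating_magician(choice):
    # The contents are determined by the decision itself:
    # money is under box A iff the player selects only box A.
    money_in_a = (choice == "one-box")
    return money_in_a

def genuine_predictor(predicted_choice, choice):
    # The contents were fixed by a prediction made before the choice;
    # the actual choice has no influence on them.
    money_in_a = (predicted_choice == "one-box")
    return money_in_a

# Both produce identical observations whenever the prediction is correct,
# which is why the end results alone cannot distinguish the two cases.
assert cheating_magician("one-box") == genuine_predictor("one-box", "one-box")
```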
Whether or not it’s cheating doesn’t depend on whether you catch him. A smart person will think “The magician is cheating” when faced with a street magician, even if he can’t work out the exact trick.
I don’t know exactly how David Copperfield flies around, but I don’t think that he can really levitate.
Why doesn’t a smart person think that Omega is cheating? What’s the difference between the observations one has of Omega and the observations one has of the street magician?
By the way, if I think of Omega as equivalent to the street magician, I change to a consistent one-boxer from a much more complicated position.
Because Omega, by definition, has the ability to predict. Street magicians, on the other hand, are in the deception business. That means that a smart person has different priors about the two classes.
The expected observations are identical in either case, right?
Yes, as long as we can only observe the end result. Priors matter when you have incomplete knowledge and have to guess which principle led to a particular result.
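A toy Bayesian sketch of why priors dominate here (all numbers are assumed for illustration): if both “cheating” and “genuinely predicting” make a prediction/result match almost certain, then observing many matches barely shifts the odds between them, and the prior about street magicians does most of the work.

```python
# Toy Bayes sketch with assumed numbers, for illustration only.

def posterior_odds(prior_cheat, prior_predict, p_match_cheat, p_match_predict, n_matches):
    """Odds of 'cheating' vs 'genuinely predicting' after seeing n matches."""
    prior_odds = prior_cheat / prior_predict
    likelihood_ratio = (p_match_cheat / p_match_predict) ** n_matches
    return prior_odds * likelihood_ratio

# Strong prior that a street magician cheats; both hypotheses explain
# the observed matches almost perfectly.
print(posterior_odds(prior_cheat=0.99, prior_predict=0.01,
                     p_match_cheat=0.999, p_match_predict=0.99, n_matches=20))
# ~119:1 in favour of cheating -- the matches have barely moved the 99:1 prior odds.
```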
Believing that a particular principle led to an observed result helps make future predictions about that result when the principle we believe in is relevant:
If we believe that the street magician is cheating, but he claims to be predicting, is each case in which we see the prediction and result match evidence that he is predicting or evidence that he is cheating? Is it evidence that when our turn comes up, we should one-box, or is it evidence that the players before us are colluding with the magician?
If we believe that Omega is a perfect predictor, does that change the direction in which the evidence points?
Is it just that we have a much higher prior that everybody we see is colluding with the magician (or that the magician is cheating in some other way) than that everybody is colluding with Omega, or that Omega is cheating?
Suppose that the magician is known to be playing with house money, and is getting paid based on how accurately rewards are allocated to contestants (leaving the question open as to whether he is cheating or predicting, but keeping the payoff matrix the same). Is the reasoning for one-boxing for the magician identical to the reasoning for one-boxing for Omega, or is there some key difference that I’m missing?
If a magician is cheating, then there is a direct causal link between the subject choosing to one-box and the money being in the box.
Causality matters for philosophers who analyse Newcomb’s problem.
So the magician can only cheat in worlds where causal links happen?
I don’t know whether one can meaningfully speak about decision theory for a world without causal links. If your actions don’t cause anything, how can one decision be better than another?
So, if the magician is cheating there is a causal link between the decision and the contents of the box, and if he isn’t there is still a causal link.
How is that a difference?
If I’m wet because it rained, there is a causal link between the two. If I kick a ball and the ball moves, there is a causal link between me kicking the ball and the ball moving.
How’s that a difference?
Did you kick the ball because it was raining, or are you wet because you kicked the ball?