So… if I point out a reasonably easy way to verifiably implement this without cheating, would you agree with my original premise?
I still believe that it will be incredibly hard to set up an experiment that makes an atheist philosopher or an average LessWrong participant think that the predictions are genuine predictions.
I don’t know how philosophers would react. When the question is posed at a theoretical level, I one-box. If you put me in a practical situation against a person who is making genuine predictions, I might try some form of occlumency to hide that I’m two-boxing.
This is a bit like playing Werewolf. In the group with NLP training that I play Werewolf with, there are a number of people who are good enough to read others when they play Werewolf with “normal” people. On the other hand, in that NLP group nearly everyone has good control of their own state and doesn’t let other people read them. The last time I played, one girl told me afterwards that I was very authentic even while playing a fake role.
I’m not exactly sure about the occlumency skills of the average philosophy major, but I would guess that many philosophy majors believe they themselves have decent occlumency skills.
As a side note, any good attempt at finding out whether someone is one-boxing or two-boxing might itself change whether he’s one-boxing or two-boxing.
Unless I’ve misunderstood this, it isn’t an adversarial game.
You’re not trying to trick the predictor if you’re one-boxing. If anything, you want the predictor to know that with as much certainty as possible. Wearing your heart on your sleeve is good for you.
He’s trying to trick the predictor into thinking that he’s going to one-box, but then to actually two-box.
I see now.
The first description of this that I came across had a huge difference between boxes A and B, on the order of 1,000 vs. 1,000,000.
At that level there doesn’t seem to be much point in even intending to two-box; better to let the predictor keep his good record as a predictor while I get the million. An improvement of an extra 1,000 just isn’t convincing.
Though restated with a smaller difference, say 2,000 in one box and 1,000 in the other, the choice of two-boxing for 3,000 vs. one-boxing for 2,000 is more appealing.
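For concreteness, here is a minimal sketch of the expected-value arithmetic behind that intuition. This is my own illustration, not part of the original thread; the payoff numbers come from the comments above, while the predictor accuracy of 0.9 is an arbitrary assumption.

```python
# Minimal sketch: expected payoffs for one-boxing vs. two-boxing under
# an assumed predictor accuracy. The 0.9 accuracy is illustrative only.

def expected_payoffs(opaque_box, transparent_box, accuracy):
    """Return (one_box_ev, two_box_ev) for a predictor that is right
    with probability `accuracy`."""
    # One-boxing: you get the opaque box, which is filled only when the
    # predictor correctly foresaw one-boxing.
    one_box_ev = accuracy * opaque_box
    # Two-boxing: you always get the transparent box, plus the opaque
    # box in the cases where the predictor wrongly expected one-boxing.
    two_box_ev = transparent_box + (1 - accuracy) * opaque_box
    return one_box_ev, two_box_ev

# Original stakes: 1,000,000 vs. 1,000.
print(expected_payoffs(1_000_000, 1_000, accuracy=0.9))  # (900000.0, 101000.0)

# Restated stakes: 2,000 vs. 1,000 -- the extra 1,000 now weighs more,
# so two-boxing looks comparatively more appealing.
print(expected_payoffs(2_000, 1_000, accuracy=0.9))      # (1800.0, 1200.0)
```

With the original stakes, one-boxing dominates for any predictor even slightly better than chance; with the smaller stakes, the gap narrows considerably, which matches the point made above.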