So I took the test. (thanks Morendil) And then I said “huh,” and got some friends to take the test.
I got 50%/36%, but my friends got normal numbers, one of them quite a bit below chance, around 30%/42%.
So this generates some hypotheses:
The sensible hypothesis: Due to extra steps and human interaction, the variance of the results is higher than I or the author anticipated, leading to normal fluctuations getting called “statistically significant.”
The fun hypothesis: I’m psychic and my friends aren’t.
The obligatory hypothesis: The computer program is flawed, either accidentally or intentionally.
Testing time!
This is clearly a case where I’d want to see the source code. (ETA: it seems to be in one of the sub-folders, if I can figure out what app to open it with.)
But you can fool around with an interesting question: if you were writing the program with the explicit intent of producing results that seem to clinch the psi hypothesis, by exploiting ambiguities in the verbal description of the experimental setup, how would you do it?
(ETA: one interesting observation, on re-running the program, is that the order of presentation of words the first time through seems not to be randomized.)
When I took the test I got 50⁄50. My first thought was—“how lucky that I should happen to get, by chance, a result that so clearly reinforces my original beliefs”.
How about doing a Bayesian analysis of the experiment?
Out of curiosity I did this for the first experiment (anticipating erotic images). He had 100 people in the experiment, 40 of them did 12 trials with erotic images, and 60 did 18 trials. So there were 1560 trials total.
You can get a likelihood ratio by taking P(observed results | precognitive power is 53%) / P(observed results | precognitive power is 50%). This ends up being (.53^827 * .47^733) / (.5^1560) ≈ 17
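A minimal sketch of that calculation, using the trial counts given above (827 hits out of 1560 trials); the binomial coefficients are identical in numerator and denominator, so they cancel:

```python
import math

# Counts from the experiment described above: 827 hits, 733 misses, 1560 trials.
hits, misses = 827, 733

# Log of the likelihood ratio P(data | p = 0.53) / P(data | p = 0.50).
# Working in log space avoids underflow from terms like 0.5^1560.
log_bf = (hits * math.log(0.53)
          + misses * math.log(0.47)
          - (hits + misses) * math.log(0.5))

bayes_factor = math.exp(log_bf)  # comes out to roughly 17
```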
So if you had prior odds of 1:100 against people having precognitive power of 53%, then after seeing the results of the experiment you should have posterior odds of about 1:6 against. So you can see that this by itself is not earth-shattering evidence, but it is significant.
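The odds update itself is just a multiplication, since posterior odds = prior odds × likelihood ratio. A sketch with the numbers above:

```python
# Prior odds of 1:100 in favor of the 53% hypothesis (i.e., 100:1 against),
# updated by the ~17 likelihood ratio from this experiment.
prior_odds = 1 / 100
bayes_factor = 17

posterior_odds = prior_odds * bayes_factor        # 17:100, i.e. about 1:6 against
posterior_prob = posterior_odds / (1 + posterior_odds)  # odds -> probability, ~0.15
```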
Try doing analyses for the other experiments if you’re interested!
...I don’t think this calculation would be right even if we actually factored in all the psi studies that didn’t achieve any statistically significant result. Shifting your belief in psi from 1% to something like 16% based on one lousy study, while ignoring every single respectable study that didn’t show any result, is madness.
To be more specific: at first you didn’t know whether psi existed or not (50/50), but then, for hopefully good reasons, you corrected your prior odds down to 1:100 (which is still ridiculously high). Now one lousy study comes along and you give this one lousy datapoint the same weight as every datapoint that, up until now, you considered to be evidence against psi, combined. The mistake should be obvious. The effect of this new evidence on your estimated likelihood of the existence of psi should be infinitesimal, and your odds should stay right where they are until these dubious findings can be shown to be readily replicated… which, by virtue of my current prior odds, I confidently predict most surely won’t happen.
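The replication point can be made concrete: independent studies combine by multiplying their likelihood ratios, so a few null results quickly overwhelm one positive one. The Bayes factors assigned to the null replications below are purely hypothetical, chosen for illustration:

```python
# Hypothetical illustration of how null replications erode one positive result.
# A study whose data fit chance (50%) better than the 53% hypothesis has a
# Bayes factor below 1; independent studies combine multiplicatively.
positive_study = 17.0                  # likelihood ratio from the comment above
null_replications = [0.2, 0.2, 0.2]    # hypothetical BFs for three null studies

combined = positive_study
for bf in null_replications:
    combined *= bf
# combined = 17 * 0.2**3 = 0.136 < 1: the net evidence now favors the null
```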