When I took the test I got 50⁄50. My first thought was—“how lucky that I should happen to get, by chance, a result that so clearly reinforces my original beliefs”.
How about doing a Bayesian analysis of the experiment?
Out of curiosity I did this for the first experiment (anticipating erotic images). He had 100 people in the experiment: 40 of them did 12 trials with erotic images, and 60 did 18 trials, so there were 40×12 + 60×18 = 1560 trials total.
You can get a likelihood ratio by taking P(observed results | precognitive power is 53%) / P(observed results | precognitive power is 50%). A 53% hit rate over 1560 trials works out to about 827 hits and 733 misses, so the ratio is (.53^827 * .47^733) / (.5^1560) ≈ 17.
So if you had prior odds of 1:100 that people have precognitive power of 53% (i.e. 100 to 1 against), then after seeing the results of the experiment your posterior odds should be about 1:6 (roughly 6 to 1 against). So this by itself is not earth-shattering evidence, but it is a significant update.
Try doing analyses for the other experiments if you’re interested!
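If you want to redo this arithmetic, or adapt it for the other experiments, here's a minimal Python sketch of the calculation above (the 827/733 split just comes from rounding 53% of the 1560 trials; the 1:100 prior is the one assumed above):

```python
import math

# Trial counts from the experiment described above:
# 40 subjects x 12 trials + 60 subjects x 18 trials = 1560 trials.
trials = 40 * 12 + 60 * 18              # 1560
hits = round(0.53 * trials)             # ~827 hits at a 53% hit rate
misses = trials - hits                  # ~733 misses

# Log-likelihood of the observed results under each hypothesis.
log_like_psi = hits * math.log(0.53) + misses * math.log(0.47)
log_like_chance = trials * math.log(0.5)

# Likelihood ratio (Bayes factor): "53% precognition" vs. "pure chance".
likelihood_ratio = math.exp(log_like_psi - log_like_chance)

# Update prior odds of 1:100 in favor of psi.
prior_odds = 1 / 100
posterior_odds = prior_odds * likelihood_ratio

print(f"likelihood ratio ~ {likelihood_ratio:.1f}")               # ~17
print(f"posterior odds   ~ 1:{1 / posterior_odds:.1f} against")   # ~1:5.9 against
```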
...I don’t think this calculation would be right even if we actually factored in all the Psi studies that didn’t achieve any statistically significant result. Shifting your belief in PSI from 1% to something like 16% based on one lousy study while ignoring every single respectable study that didn’t show any result is madness.
To be more specific: at first you didn't know whether PSI existed or not (50/50), but then, for hopefully good reasons, you corrected your prior odds down to 1/100 (which is still ridiculously high). Now one lousy study comes along, and you give this single lousy datapoint the same weight as all of the datapoints that, up until now, you considered to be evidence against PSI, combined. The mistake should be obvious. The effect of this new evidence on your estimate of the likelihood that PSI exists should be infinitesimal, and your odds should stay right where they are until these dubious findings can be shown to be readily replicated… which, given my current prior odds, I confidently predict won't happen.
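To make the point about combining evidence concrete, here's a minimal sketch under made-up numbers: independent studies combine by multiplying their likelihood ratios before the prior is updated, so even a handful of hypothetical null-result studies (each with a likelihood ratio below 1) drags the posterior back down, rather than letting the single positive study do all the work:

```python
import math

# Purely hypothetical illustration: the likelihood ratios for the
# null-result studies below are made up for the sake of the example.
prior_odds = 1 / 100                # 1:100 in favor of psi, as assumed above
bem_lr = 17                         # the single positive study computed earlier

# A well-powered study that finds no effect has a likelihood ratio below 1,
# i.e. it is evidence against the 53% hypothesis.
null_study_lrs = [0.2, 0.3, 0.25, 0.4]

combined_lr = bem_lr * math.prod(null_study_lrs)
posterior_odds = prior_odds * combined_lr

print(f"combined likelihood ratio ~ {combined_lr:.2f}")                  # ~0.10
print(f"posterior odds            ~ 1:{1 / posterior_odds:.0f} against") # ~1:980
```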