I’m worried that I found the study far more convincing than I should have. If I recall, my reaction was something like “this would be awesome if it replicates. Regression toward the mean suggests the effect size will shrink, but still.” That thought didn’t stop me from updating substantially anyway.
I remember being vaguely annoyed at them just throwing out the timeout losses, but didn’t discard the whole thing after reading that. Perhaps I should have.
I know about confirmation bias and p-hacking and half a dozen other such things, but none of that stopped me from overupdating on evidence I wanted to believe. So, thanks for your comment.
An interesting concept—un-updating. It should happen when you updated on evidence that turned out to be wrong or mistaken, so you need to update back, and I suspect some biases will be involved here too :-/