Ok, my utility is probably low considering this open thread closes in 3 days :(
Anyhow, I had a thought when reading Beautiful Probability in the Sequences. http://lesswrong.com/lw/mt/beautiful_probability/
It is a bit beyond my access and resources, but I’d love to see a graph or chart showing the percentage of scientific studies that become invalid (or that remain valid) as we lower the significance threshold below p < 0.05.
It would start at 100% of journal articles with p < 0.05 (take a sample from the top three journals across various disciplines, then break them down by field: social science, STEM, etc.).
Then we tighten the threshold to p < 0.04, on down to 0.01, and then go logarithmic to show 0.009 and below, or however it makes sense to represent the data.
I’d be very curious to see the totals and the differences between fields as the acceptable p-value went down and down. At what point would we lose more than 50% of human knowledge if we had to be more certain about it? I suspect experimental design is allowed to be laxer than it could be because we aim for the minimum acceptable standard when weighing so many competing priorities. Obviously this doesn’t speak entirely to the validity of the knowledge, given the wide variance in methodology and review processes, but it would at least give us an idea of how much we think we are certain about at various tolerances. Perhaps this work has been done before and someone will point me to a study I am not aware of or do not have access to.
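For concreteness, here is a minimal sketch of the kind of plot I mean, assuming one already had a table of reported p-values tagged by field. The file name and column names below are made up for illustration, not a real dataset:

```python
# Sketch: fraction of currently "significant" results that would survive
# as the significance threshold is tightened, broken down by field.
# Assumes a hypothetical CSV with columns "field" and "p_value".
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sampled_results.csv")   # hypothetical: one row per reported result
df = df[df["p_value"] < 0.05]             # the 100% baseline: everything significant at 0.05

# Thresholds from 0.05 down to 1e-5, evenly spaced in log space.
thresholds = np.logspace(np.log10(0.05), -5, 50)

for field, group in df.groupby("field"):
    surviving = [(group["p_value"] < t).mean() * 100 for t in thresholds]
    plt.plot(thresholds, surviving, label=field)

plt.xscale("log")
plt.gca().invert_xaxis()                  # read left to right as the threshold tightens
plt.xlabel("significance threshold (p)")
plt.ylabel("% of studies still significant")
plt.legend()
plt.show()
```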
Just a passing thought; I’m new to the forums, but I take it the open thread is the place to post wild ideas like this which are not ready for prime time.
Cheers!