If you wanted to know the truth, you would need to start with all the raw data and recalculate your conclusions.
But since the raw data is often old, only “positive” results were published while “negative” ones were not, and the data was usually collected by poorly paid humans... you need to throw it all out and start over.
It would help to have enormous amounts of robotics to make this possible in a short timespan.
I assume that this is what you would need to do to solve “difficult” problems, such as biology.
I think about this periodically in the context of likelihood functions. My bet for the biggest problem is that there isn’t prior work for people to cite or data for them to build on, so it seems to me a good thing to do would be to do things like:
1. Find good public datasets to run likelihood functions against.
2. Run very basic experiments in huge numbers to get strong effect sizes for fundamental findings.
I think the robotic laboratories would be a great fit for 2.
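As a toy illustration of what "running a likelihood function against a dataset" might look like (the dataset and parameter here are invented for the sketch): treat a fundamental finding as a claimed success probability p, then evaluate the Bernoulli log-likelihood of replicate outcomes over a grid of candidate values.

```python
import math

# Hypothetical replicate outcomes from a public dataset
# (1 = experiment confirmed the finding, 0 = it did not).
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # toy data, not real

def log_likelihood(p, data):
    # log L(p) = sum of log P(outcome | p) over independent trials
    return sum(math.log(p if x == 1 else 1 - p) for x in data)

# Scan a grid of candidate success probabilities and keep the best.
grid = [i / 100 for i in range(1, 100)]
best_p = max(grid, key=lambda p: log_likelihood(p, outcomes))
# For Bernoulli trials the maximum-likelihood estimate equals the
# sample success rate, so best_p lands on 8/10 = 0.8 here.
```

With many cheap robotic replicates, the data list grows and the likelihood curve sharpens, which is exactly the "strong effect sizes" point above.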