“It might seem like the diseases listed in the quote (I would add Alzheimer’s disease, Parkinson’s disease, Bipolar disorder and Schizophrenia) are unlikely nominees, but that’s always what it feels like when you are trapped in Hades.”
Regarding the apparent lack of progress in Alzheimer’s specifically, I’m aware of an interesting explanation. To paraphrase: enough trials of experimental treatments have been run that the near-total absence of sufficiently positive results is statistically unusual. Prior to 2023 it had been decades since a new drug was approved for treating Alzheimer’s, despite hundreds of trials. With a target p-value of 0.05, type-1 error alone should have produced roughly one ‘successful study’ per 20 even among ineffective treatments, which led certain researchers to suspect that something about the underlying methodology and/or evaluation process was masking both type-1 errors and the potential for actual success.
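As a back-of-the-envelope illustration of why that pattern looks so odd (my own toy numbers, not from the presentation): if some assumed number of independent trials each tested a truly ineffective drug at a 0.05 significance level, type-1 error alone should produce a handful of apparent successes, and seeing essentially none is very unlikely.

```python
# Toy sketch: how many false-positive "successes" should chance alone produce,
# and how unlikely is it to see none at all? (Illustrative numbers only.)
alpha = 0.05        # nominal per-trial false-positive rate
n_trials = 200      # assumed number of trials, purely for illustration

expected_false_positives = n_trials * alpha       # ~10 expected by chance
p_zero_successes = (1 - alpha) ** n_trials        # chance of zero "successes"

print(f"Expected false positives: {expected_false_positives:.1f}")
print(f"P(zero successes in {n_trials} trials): {p_zero_successes:.1e}")
```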
To attempt a summary: the approval requirements set by the FDA demanded success on two axes of measurement rather than just one, which raised the effect size any treatment needed to demonstrate by a significant degree. (Follow the link for the actual explanation by the statistician who presented on it; it is likely I have explained it poorly.)
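To give a rough sense of the mechanism (this is my own simplified model, assuming two independent co-primary endpoints, each a one-sided z-test at α = 0.05; real Alzheimer’s endpoints are correlated, so take it as a sketch): requiring success on both endpoints pushes up the standardized effect a drug must show to keep the same overall power.

```python
# Sketch: effect size (in standard-error units) needed for 80% power when
# success is required on ONE endpoint vs. TWO independent co-primary endpoints.
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha)   # one-sided critical value, ~1.645

def effect_for_power(target_power, n_endpoints):
    """Standardized effect giving `target_power` joint power when each of
    `n_endpoints` independent endpoints must individually cross z_crit."""
    per_endpoint = target_power ** (1 / n_endpoints)  # joint power = power**n
    return z_crit - norm.ppf(1 - per_endpoint)

print(f"One endpoint:  {effect_for_power(0.80, 1):.2f}")   # ~2.49
print(f"Two endpoints: {effect_for_power(0.80, 2):.2f}")   # ~2.90
# Under the null, passing both endpoints happens at roughly alpha**2 = 0.0025,
# which also depresses the expected rate of chance "successes".
```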
Context: I am an undergrad software developer working for a pharmaceutical-statistics firm, not a statistician. When it comes to advanced statistics I can only repeat what has been explained to me. I’ll leave it to those with a better understanding of the fields involved to judge whether this example is a good parallel to the principle of Epistemic Hell as described in the post.