It also seems to me more and more silly to believe that the blind man sees more and that blinding in general is the key to knowledge gathering. It’s one of those things where a kid in a hundred years will have a hard time understanding history because the idea is just so silly, just like we today have a hard time understanding what people in the Middle Ages used to believe.
This is a straw man. Blinding is used where it can be used. It’s not necessary for doing medical science, and nonblinded trials are definitely accepted by doctors as a weaker form of evidence in cases where blinding isn’t possible. Many surgical procedures can’t be blinded for example. Blinding doesn’t mean not observing patients, it has a much more specific meaning than that. Because of your background in bioinformatics I think you know this, and are stretching the meaning on purpose.
A medical professor usually teaches the “evidence-based method” with teaching methods for which he has no evidence that they work.
You’re making sweeping generalizations with nothing to back them up.
If a doctor gives his patients a drug from a big pharma company, that company invites him to a fancy all-costs-paid luxury vacation conference. It’s not as bad as it used to be, but it was bad over decades, and that made certain memes win the memetic competition.
This is strictly illegal in many (most?) countries.
Blinding doesn’t mean not observing patients, it has a much more specific meaning than that.
Blinding is used where it can be used.
I can cross the street with a blindfold. That doesn’t mean that’s a good idea.
The general idea of blinding in medical science is that on average the human pattern matching ability produces more harm than good.
Good medical treatment in the evidence-based paradigm is supposed to be treatment by the book.
People do things like putting box plots in their scientific papers instead of providing plots of the raw data, to hide the messiness of real-world data from their eyes. That happens in a culture that values blindness.
That culture of blindness leads to many unknown unknowns that mess with your process in complex ways.
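To make the box-plot point concrete, here is a minimal sketch (my own illustration with made-up numbers, not data from any paper I have in mind) of how a box plot summarizes away structure that a plot of the raw points would show:

```python
# Minimal sketch: the same hypothetical sample shown as a box plot and as raw points.
# The two-cluster structure is visible in the raw data but collapsed by the box plot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Made-up, deliberately messy sample: a main cluster plus a smaller second cluster.
data = np.concatenate([rng.normal(5.0, 1.0, 40), rng.normal(9.0, 0.5, 10)])

fig, (ax_box, ax_raw) = plt.subplots(1, 2, figsize=(8, 3))

ax_box.boxplot(data)                         # summary only: median, quartiles, whiskers
ax_box.set_title("Box plot")

jitter = rng.uniform(-0.05, 0.05, data.size)
ax_raw.scatter(1 + jitter, data, alpha=0.6)  # raw points with a little horizontal jitter
ax_raw.set_title("Raw data")

plt.tight_layout()
plt.show()
```

The box plot isn’t wrong; it just removes exactly the kind of messiness I’m talking about.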
There are many assumptions about how learning and knowledge work that are just assumed to be true.
One example is measuring lung function. I have seen papers on asthma medication that use FEV1 as a metric of success. I have measured FEV1 daily for over a year, and one day before I got the flu, I felt restricted breathing. My FEV1 was still at the normal value.
That’s a reference experience that increases my knowledge about the subject. Involved interaction with the subject matter leads to knowledge. You don’t get reference experiences by reading journal papers and textbooks. You usually also don’t learn new phenomenological primitives that way.
Oscar Wilde wrote: “Nothing that is worth knowing can be taught.” “Nothing” might be an exaggeration, but certain knowledge is just really, really hard to transfer. You can, however, set up conditions that are conducive to learning.
You’re making sweeping generalizations with nothing to back them up.
Are you arguing that professors are using teaching methods for which they have published evidence that those teaching methods work?
This is strictly illegal in many (most?) countries.
Today, yes. 20 years ago, no. Today Big Pharma can’t bribe as many doctors anymore, their business model is in crisis, and they have to lay off a lot of workers. Of course, it might just be correlation and not causation between the separate observations.
I have seen papers on asthma medication that use FEV1 as a metric of success. I have measured FEV1 daily for over a year, and one day before I got the flu, I felt restricted breathing. My FEV1 was still at the normal value.
A measure that is wrong in one particular case may still be the best measure available on a statistical level. I highly doubt that doctors would get better ideas of which therapies are good if they discarded this measure and instead used “does the patient claim to feel restricted breathing”.
Furthermore, you haven’t convinced me the measure was wrong even in your case. Measurements are rarely yes-or-no things; most measurements fall within a range, and there is not a sharp cutoff between healthy and unhealthy at the end of the range. You could have been at some point that was far enough within the range to be considered okay, yet still not be 100% okay.
You could have been at some point that was far enough within the range to be considered okay, yet still not be 100% okay.
It’s a measurement I did every day; I know how the value fluctuates, and it was in the middle of the normal range.
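To spell out what I mean by knowing how the value fluctuates: the comparison I have in mind is roughly against a personal baseline, something like the sketch below (the function and the numbers are purely illustrative, not my actual readings):

```python
# Sketch: compare a new FEV1 reading against one's own daily history,
# i.e. a personal baseline rather than a population reference value.
from statistics import mean, stdev

def personal_z_score(history: list[float], reading: float) -> float:
    """Number of personal standard deviations the reading lies from the personal mean."""
    return (reading - mean(history)) / stdev(history)

# Purely illustrative numbers (litres), not my actual measurements.
history = [3.90, 4.00, 4.10, 3.95, 4.05, 4.00, 3.98, 4.02]
today = 4.00

print(f"z = {personal_z_score(history, today):+.2f}")
# A z-score near 0 means the reading sits in the middle of the personal normal
# range, even on a day when breathing subjectively feels restricted.
```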
A measure that is wrong in one particular case may still be the best measure available on a statistical level. I highly doubt that doctors would get better ideas of which therapies are good if they discarded this measure and instead used “does the patient claim to feel restricted breathing”.
I don’t claim that doctors should just replace FEV1 with “does the patient claim to feel restricted breathing”.
That’s the kind of thing that doesn’t need any reference experiences and is easily communicable via text.
I claim that the actual experience of interacting with a measurement in an involved way is important to train your intuition to be able to understand that measurement. If you don’t have that understanding, you are going to make mistakes.
If someone gave me a million dollars, I might also produce a device that measures something better than FEV1, but that’s not the main point of the argument. That would be me wearing a bioinformatics hat, and that’s not the main hat I’m wearing in this discussion.