Historians of science like Thomas Kuhn generally assume that most scientists don’t have a good explicit model of what they are doing when they are doing science. An explicit model of the scientific process isn’t required.
History is indeed descriptive, while my article is prescriptive:
Description. The explicit model isn’t required; science worked without it, up to a point. And even that has to be nuanced: pre-experimental science and pre-Popper science are very different from current science. The foundational crisis wasn’t purely philosophical.
Prescription. Putting more of the epistemology into the explicit part is better. People abuse the implicit part: the replication crisis in the social sciences, epistemic failures in bioinformatics, etc.
As for the second part of your post, I can’t find a clear thesis, but I’ll try to answer nevertheless.
It’s easy to say that someone is not even wrong when you don’t understand their position.
The “not even wrong” section is not the “Bayesianism” section.
That post isn’t simply about applying Bayes’ rule to real-life situations. It doesn’t provide a clear definition of Bayesianism, but it provides information about what it happens to be in the eyes of the author.
I had already read the post you linked, and I don’t see what I wrote that contradicts what you are saying. The author states lessons learnt from Bayesianism and is interested in the lessons others have learnt.
Funny, in the post you linked, you can read stuff like:
Many bits of “common sense” rationality can be precisely stated and easily proved within the austere framework of Bayesian probability.
Not being defined makes it easier to use in various contexts. And to be wrong. Those bits aren’t proved within an “austere framework”, but from strong axioms.
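To make that concrete, here is what one of those “proved bits” looks like once its assumptions are spelled out. This is only a minimal sketch of “absence of evidence is evidence of absence”; it leans on Kolmogorov’s axioms, the ratio definition of conditional probability, and 0 < P(E) < 1.

```latex
% Sketch: "absence of evidence is evidence of absence".
% Assumed: Kolmogorov's axioms, P(H|E) = P(H and E)/P(E), and 0 < P(E) < 1.
\begin{align*}
  P(H) &= P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E)
    \quad \text{(law of total probability)} \\
  P(H \mid E) > P(H) &\;\Longrightarrow\; P(H \mid \neg E) < P(H)
    \quad \text{(both terms cannot sit on the same side of their average)}
\end{align*}
```

Precisely stated, yes, but only once those axioms are granted.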
And even that has to be nuanced: pre-experimental science and pre-Popper science are very different from current science.
Most scientists haven’t read Popper, and the historians of science who analyze what scientists actually do don’t find that scientists follow Popper’s maxims.
I agree that for psychologists and many people in biology there isn’t enough explicit attention paid to epistemology. On the other hand, it’s still important to be aware that you will never get 100% explicit.
Given LessWrong base rates, I’m also not sure whether it makes sense to encourage this crowd to focus more on being explicit about the meta-level.
The “not even wrong” section is not the “Bayesianism” section.
It seems I haven’t fully understood your criticism of what you perceive to be bayesianism.
Most scientists haven’t read Popper, and the historians of science who analyze what scientists actually do don’t find that scientists follow Popper’s maxims.
I agree that for psychologists and many people in biology there isn’t enough explicit attention paid to epistemology. On the other hand, it’s still important to be aware that you will never get 100% explicit.
As far as I know, whether scientists follow Popper’s maxims is still a subject of debate, cf. https://en.wikipedia.org/wiki/Falsifiability. I also don’t see what your criterion is for agreeing with my point in a given field.
My point isn’t about being “100% explicit” either. My point is that if a field of study is interesting enough, defining its epistemology becomes a priority; otherwise, too much time is wasted. Similarly, in some existing fields, an epistemology that is too implicit or too loosely specified leads to noise production at best and to counter-productive efforts at worst.
Considering the opportunity cost of having very smart people working on useless things, this is bad.
It seems I haven’t fully understood your criticism of what you perceive to be bayesianism.
Indeed, I think I was too brief and that it could have been an article in itself. I might write one if you are interested. If you aren’t, here is the short version: Bayesianism as the core of an approach to the world is too loosely specified. It isn’t a complete epistemology, nor even a complete logic.
In the article you sent, the author tried to find uses for Bayesianism. However, substituting religion for Bayesianism leads to the same epistemic problems: “I did that, that and that because of religion. And religion even proves some bits of rationalist common sense!”
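To give a rough idea of what I mean by “too loosely specified”, here is a minimal sketch of what the formal core actually pins down; the hypothesis space, priors and likelihood models in it are placeholders that the rule itself does not tell you how to choose.

```latex
% Bayes' rule over a hypothesis space {H_1, ..., H_n}:
\[
  P(H_i \mid E) \;=\; \frac{P(E \mid H_i)\,P(H_i)}{\sum_{j=1}^{n} P(E \mid H_j)\,P(H_j)}
\]
% The theorem fixes how to go from priors, likelihoods and evidence to posteriors.
% It says nothing about where the hypotheses H_j, the priors P(H_j) or the
% likelihood models P(E|H_j) come from: that part stays implicit.
```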
There’s little attempt to falsify most core positions. Most core positions aren’t falsifiable. Physicists generally don’t reject their belief in string theory because a particular experiment didn’t produce the results they hoped for.
In Evidence-Based Medicine, nobody cares about falsifying the core tenets of Evidence-Based Medicine. The paper that proposed the term Evidence-Based Medicine doesn’t discuss the RAND study.
The project of writing the DSM-V didn’t include running experiments to try to falsify the DSM.
The Wikipedia article you linked to doesn’t point to a single instance where someone tried, and failed, to falsify Popper’s theory.
It isn’t a complete epistemology, nor even a complete logic.
What do you mean by “complete” if you don’t mean “100% explicit”?