Psychologists are not statisticians, though. Generally they are relatively naive users of stats methods (as are a lot of other applied folks, e.g. doctors that publish, cognitive scientists, social scientists, epidemiologists, etc.) Ideally, methods folks and applied folks collaborate, but this does not always happen.
You can fish for positive findings with B methods just fine, the issue isn’t F vs B, the issue is bad publication incentives.
There is also a little bit of “there is a huge replication crisis going on, long story short, we should read this random dude’s blog” (with apologies to the OP).
Pearl is, apparently, only half Bayesian.
I am wrong a lot—I can point you to some errors in my papers if you want.
The replication crisis is decomposable into many pieces, two of which are surely bad incentives and the relative inexperience of the “applied folks”. Another thought is, and that’s the main point, that frequentist methods are a set of ad hoc, poorly explained, poorly understood heuristics. No wonder they are used improperly.
On the other hand, I’ve seen the crisis explained mostly by Bayesian statisticians, so I’m possibly in a bubble. If you can point me to a frequentist explanation, I would be glad to pop it.
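To make “used improperly” concrete, here is a minimal simulation (all numbers illustrative, not drawn from any real study) of one well-documented failure mode, optional stopping: an analyst runs a nominal-5% z-test, peeks after every new observation, and stops as soon as it “rejects”. Even though the null is true, the false-positive rate ends up far above 5%:

```python
import math
import random

def peeking_false_positive_rate(trials=2000, min_n=10, max_n=100, z_crit=1.96, seed=0):
    """Simulate N(0, 1) data (so H0 'mean = 0' is true) where the analyst
    checks the test after every observation from min_n onward and stops
    as soon as the z-statistic crosses the nominal 5% threshold."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(trials):
        total = 0.0
        for n in range(1, max_n + 1):
            total += rng.gauss(0.0, 1.0)
            z = total / math.sqrt(n)  # known unit variance, so this is a z-statistic
            if n >= min_n and abs(z) > z_crit:
                false_positives += 1
                break
    return false_positives / trials

rate = peeking_false_positive_rate()
print(f"false-positive rate with peeking: {rate:.2f} (nominal level: 0.05)")
```

Each individual test is valid; it is the undisclosed stopping rule that silently destroys the error guarantee, which is exactly the kind of thing a naive user never sees coming.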
Apparently though, cousin_it thinks you cannot be criticized or argued against...
“Another thought is, and that’s the main point, that frequentist methods are a set of ad hoc, poorly explained, poorly understood heuristics.”
I don’t think so. This is what LW repeatedly gets wrong, and I am kind of tired of talking about it. How are you so confident re: what frequentist methods really are about, if you aren’t a statistician? This is incredibly bizarre to me.
Rather than argue about it constantly, which I am very very tired of doing (see above “negative externalities”), I can point you to Larry Wasserman’s book “All of Statistics.” It’s a nice frequentist book. Start there, perhaps. Larry is very smart, one of the smartest statisticians alive, I think.
My culture thrives on peer review, as much as we grumble about it. Emphasis on “peer,” of course.
You should probably be a bit more charitable to cousin_it, he’s very smart too.
I was under the impression that it was sufficient to read statistics books. Apparently, though, you also need to be anointed by another statistician to even talk about the subject.
You seem to imply that no statistician has ever criticized frequentist methods. LW is just parroting what other, more expert people have already said.
As long as you’re making an incorrect statement, isn’t it irrelevant how intelligent you are? Jaynes was wrong about quantum mechanics. Einstein was wrong about the unified field.
Everybody can be wrong, no matter how respected or intelligent they are.
“I was under the impression that it was sufficient to read statistics books.”
Ok, what have you read?
I am not the “blogging police,” I am just saying, based on past experience, that when people who aren’t statisticians talk about these issues, the result is very low quality. So low that it would have been better to stay silent. Statistics is a very mathematical field. These types of arguments are akin to “should we think about mathematics topologically or algebraically?”
“You seem to imply that no statistician has ever criticized frequentist methods.”
See “Tom Knight and the LISP machine”:
http://catb.org/jargon/html/koans.html
One of these koans is pretty Bayesian, actually, the one about tic-tac-toe.
“Isn’t it, as long as you’re making an incorrect statement, irrelevant how intelligent you are?”
Sure is, but how certain are you it’s incorrect? If uncertain, intelligence is useful information you should Bayes Theorem in.
And anyways, charity is about interpreting reasonably what people say.
The pretty standard Bayesian curriculum: De Finetti, Jaynes-Bretthorst, Sivia.
I love Lisp koans much more than I love Lisp… Anyway, it’s still a question of knowing a subject, not being part of a cabal.
Well, I prefer evidence to signalling: if the problem is only my tediousness in refusing to accept a settled argument, someone can simply point me to a paper, a blog post, or a book saying “here, this shows clearly that the replication crisis happened for this reason, not because of the opaqueness of frequentist methods”. I am willing to update. I have done it many times in the past; I’m confident I can do it this time too.
Here, all this “He is very intelligent! No, you are very intelligent!” is… sad.
I guess the natural question is: what about the standard frequentist curriculum? Lots of stuff in stats is neither B nor F (for example, the book my group and I are going through now).
“it’s still a question of knowing a subject”
Indeed. That’s exactly the point.
The most common way I see “fishing” manifest with Bayesian methods is changing the prior until you get the signal you want. In fact, the “clarity” of Bayesian machinery is even aiding and abetting this type of practice.
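To make that concrete, a toy sketch (a conjugate normal-normal model with made-up numbers, not anyone’s real analysis): with the same weak data, sliding the prior mean around moves the posterior probability of a positive effect from under 10% to over 95%:

```python
import math

def posterior_prob_positive(data_mean, n, sigma=1.0, prior_mean=0.0, prior_sd=1.0):
    """Conjugate normal-normal update with known sigma.
    Returns P(theta > 0 | data) under the resulting normal posterior."""
    prior_var = prior_sd ** 2
    like_var = sigma ** 2 / n  # sampling variance of the data mean
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * (prior_mean / prior_var + data_mean / like_var)
    z = post_mean / math.sqrt(post_var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF at z

# same weak data (observed mean 0.1, n = 10), three "convenient" priors
for m in (-0.5, 0.0, 0.5):
    p = posterior_prob_positive(0.1, 10, prior_mean=m, prior_sd=0.25)
    print(f"prior mean {m:+.1f}: P(effect > 0) = {p:.2f}")
```

Nothing in the machinery flags that the prior was chosen after looking at the data; unless the prior is preregistered or independently justified, the posterior is just as gameable as a p-value.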
You say you are willing to update—don’t you find it weird that basically the only place people still talk about B vs F is here on LW? Professional statisticians moved on from this argument decades ago.
The charitable view is that LW likes arguing about unsettled philosophy but isn’t up to speed on what the real philosophical arguments in the field are. (In my field, for example, one argument is about testability, and how much causal models should assume.) The uncharitable view is that LW is addicted to online wankery.
Let me retrace the steps of this conversation, so that we have at least a direction to move towards.
The OP argued that we should keep a careful eye out so that we don’t drift from Bayesianism as the only correct mathematical form of inference.
You tried to silence him by saying that if he is not a statistician, he should not talk about it.
I pointed out that those who routinely use frequentist statistics commonly fuck it up (the disaster over the RDA of vitamin D is another easily mockable mistake by frequentist statisticians).
The conversation then degenerated into dick-size measuring, only with IQ or academic credentials.
So, let me recap what I believe to be true, so that specific parts of it can be attacked (but if it’s just “you don’t have the credentials to talk about that” or “other intelligent people think differently”, please refrain).
1. the only correct foundation for inference and probability is Bayesian
2. Bayesian probability has broader applicability than frequentist probability
3. basic frequentist statistics can and should be reformulated from a Bayesian point of view
4. frequentist statistics is taught badly and applied even worse
5. point 4 bears no small responsibility for famous scientific mistakes
6. neither Bayesian nor frequentist statistics constrains dishonest scientists
7. advanced statistics has much more in common with functional analysis and measure theory, so whether it’s expressed in one form or the other matters less
8. LW has the merit of insisting on Bayes because frequentist statistics, being the academic tradition, has higher status, and no amount of mistakes derived from it seems able to make a dent in its reputation
9. Bayes’ theorem is the basis of the first formally defined artificial intelligence
I hope this list can keep the discussion productive.
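As one concrete handle on point 3 (a toy sketch assuming a known-variance normal model and an improper flat prior, both illustrative assumptions): the frequentist 95% confidence interval and the Bayesian 95% credible interval for the mean come out numerically identical, so at least the basic procedure does survive the reformulation:

```python
import math

def freq_ci(xbar, n, sigma=1.0, z=1.96):
    """Frequentist 95% confidence interval for a normal mean with known sigma."""
    half = z * sigma / math.sqrt(n)
    return (xbar - half, xbar + half)

def flat_prior_credible(xbar, n, sigma=1.0, z=1.96):
    """Bayesian 95% credible interval under a flat (improper) prior:
    the posterior is N(xbar, sigma^2 / n), so the interval is built
    from the same center and the same standard error."""
    sd = sigma / math.sqrt(n)
    return (xbar - z * sd, xbar + z * sd)

ci = freq_ci(0.3, 25)
cr = flat_prior_credible(0.3, 25)
print(ci, cr)  # numerically identical intervals
```

The numbers coincide, but the interpretations differ: coverage over repeated samples versus posterior probability given this sample, which is where the two schools actually part ways.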
“The conversation then degenerated into dick-size measuring.”
“I hope this list can keep the discussion productive.”
Alright then, Bayes away!
Generic advice for others: the growth mindset for stats (which is a very hard mathematical subject) is to be more like a grad student, e.g. work very, very hard, read a lot, and maybe even try to publish. Leave arguing about philosophy to undergrads.
This sounds a lot like the Neil Tyson / Bill Nye attitude of “science has made philosophy obsolete!”
I don’t agree with Tyson on this, I just think y’all aren’t qualified to do philosophy of stats.
The Wikipedia page for replication crisis doesn’t mention frequentism or Bayesianism. The main reasons are more like the file drawer effect, publish or perish, etc. Of course an honest Bayesian wouldn’t be vulnerable to those, but neither would an honest frequentist.
Who else has said that science could and should be wholesale replaced by Bayes?
No one?
Also the point.