Sorry, ambiguous wording. 0.05 is too weak a threshold and should be replaced with, say, 0.005. It would be a better scientific investment to do fewer studies with twice as many subjects and have nearly all the reported results be replicable. Unfortunately, this change has to be standardized within a field, because otherwise you’re deliberately handicapping yourself in an arms race.
Ah, yes, I see. I understand and instinctively lean towards agreeing. Certainly I agree about the standardization problem. I think it’s rather difficult to determine the best number, though. 0.005 is just as much pulled out of a hat as Fisher’s 0.05.
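For concreteness, here is a back-of-the-envelope sketch (the effect size and power are my own illustrative assumptions, not anything from this exchange) of how the required sample size grows when the threshold moves from 0.05 to 0.005, using the standard normal-approximation formula for a two-sided two-sample test:

```python
# Hypothetical illustration: per-group sample size for a two-sample test,
# n ~ 2 * ((z_{alpha/2} + z_{power}) / d)^2, with assumed d = 0.5 and 80% power.
from scipy.stats import norm

def n_per_group(alpha, power=0.8, effect_size=0.5):
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

for alpha in (0.05, 0.005):
    print(f"alpha = {alpha}: about {n_per_group(alpha):.0f} subjects per group")
# alpha = 0.05:  about 63 subjects per group
# alpha = 0.005: about 107 subjects per group
```

So for a medium effect at 80% power, tightening the threshold costs roughly 1.7x the subjects, in the same ballpark as the “twice as many” figure above.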
From your “A Technical Explanation of Technical Explanation”:
Similarly, I wonder how many betters on horse races realize that you don’t win by betting on the horse you think will win the race, but by betting on horses whose payoffs exceed what you think are the odds. But then, statistical thinkers that sophisticated would probably not bet on horse races.
Now I know that you aren’t familiar with gambling. The latter is precisely what professional gamblers do, and some of them do bet on horse races, or on sports. Professional gamblers, unlike amateurs, are sophisticated statistical thinkers. (And horse races are acceptable to sophisticated gamblers because only a small vigorish is involved and there’s plenty of room for specialized knowledge to provide an edge.)
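To make the quoted criterion concrete, a toy expected-value calculation (the probabilities and payoffs are invented for illustration):

```python
# Hypothetical numbers: a bet is worth making when the payoff exceeds the odds
# you assign, regardless of whether the horse is the likely winner.
def expected_value(p_win, payoff_to_one, stake=1.0):
    # Win: collect payoff_to_one times the stake; lose: forfeit the stake.
    return p_win * payoff_to_one * stake - (1 - p_win) * stake

print(expected_value(0.25, 5))    # longshot you give 25%, paying 5-to-1: +0.50 per unit
print(expected_value(0.60, 0.5))  # favourite you give 60%, paying 1-to-2: -0.10 per unit
```

The longshot is the better bet even though it probably loses, which is exactly the point of the quoted passage.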
I think you’ve committed a common statistical fallacy. Perhaps it’s true that “someone who bets on horse races is probably not a sophisticated statistical thinker.” But it does not necessarily follow that “someone who is a sophisticated statistical thinker probably does not bet on horse races.” Bayes’s Theorem, my man. :)
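A toy Bayes calculation (the numbers are invented purely to illustrate the asymmetry) shows why the inference doesn’t reverse: a low base rate of sophisticated thinkers keeps P(sophisticated | bets) small even when P(bets | sophisticated) is large.

```python
# Hypothetical base rates chosen to make the asymmetry visible.
p_soph = 0.01             # sophisticated statistical thinkers are rare
p_bet_given_soph = 0.80   # suppose most of them do bet on horses
p_bet_given_unsoph = 0.20 # and a fifth of everyone else bets too

p_bet = p_soph * p_bet_given_soph + (1 - p_soph) * p_bet_given_unsoph
p_soph_given_bet = p_soph * p_bet_given_soph / p_bet   # Bayes's Theorem

print(f"P(sophisticated | bets) = {p_soph_given_bet:.3f}")  # ~0.039: most bettors are unsophisticated
print(f"P(bets | sophisticated) = {p_bet_given_soph:.2f}")  # 0.80: yet most sophisticates bet
```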
I know plenty of math Ph.D.s and grad students who gamble online and look for arbitrage in a variety of ways. Whether they’re representative, I don’t know.