In my experience, people who get excited about Bayesian methods and write about applying them to their own field do a terrible job, no better than those who get excited about any other method. None of the details of this review move me from a prior of this book being scientism, considerably worse than the typical book about historical methods. Surely what a review of a book on methods needs is examples.
I would have liked to see Carrier team up with somebody like Andrew Gelman. That probably would have resulted in a better book on applying Bayes to historical method. But as it stands, Carrier’s book is all we’ve got, and it ain’t bad. Can you give an example of a “typical book about historical methods” that you think is pretty good?
I did a review of a bunch of Peter Turchin’s work a couple years back. I could look it up and post it if people are interested. It isn’t specifically Bayesian, but he does apply mathematical modeling and statistical analysis to social processes. I wasn’t overly convinced by his methodology, but he did come to some interesting conclusions.
He’s got a good amount of work that ISN’T behind a paywall. Here’s a sample.
I am interested in that review of yours.
It’s long, so I put it in Dropbox. This link should take you there. (If not, let me know. My Dropbox skills are probably sub-par.)
Interesting review, but I have to take exception to your last paragraph: I think Turchin is doing the right thing by only investigating a few selected variables (which he has substantial background reason for thinking of interest) as input into his models. Turning a neural network loose on every possible variable is just begging for massive data-mining and multiple-comparison problems, which eliminate any validity you might hope to have for your results! Worse, if you use all your data initially, no one will be able to test your results for overfitting on any other data set...
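To make the multiple-comparisons worry concrete, here is a minimal sketch in Python (my own toy example with made-up numbers, not anything from Turchin's work or the review): when you screen a large pool of candidate variables at a nominal 5% threshold, you are essentially guaranteed spurious "discoveries" even if every variable is pure noise, and none of them would survive on fresh data.

```python
# Toy illustration of the multiple-comparisons / overfitting concern above.
# All counts and thresholds here are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars = 50, 200                        # 50 observations, 200 candidate predictors

target = rng.normal(size=n_obs)                # outcome is pure noise
candidates = rng.normal(size=(n_obs, n_vars))  # every predictor is also pure noise

# Pearson correlation of each candidate variable with the target.
corrs = np.array([np.corrcoef(candidates[:, j], target)[0, 1]
                  for j in range(n_vars)])

# For n = 50, |r| > ~0.28 corresponds roughly to a two-sided p < 0.05.
spurious = int(np.sum(np.abs(corrs) > 0.28))
print(f"{spurious} of {n_vars} noise variables look 'significant' at p < 0.05")
# Expect roughly 0.05 * 200 = 10 false positives, none of which would
# replicate on a held-out data set -- which is the overfitting point.
```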
Thanks for the feedback. I would guess you’re probably right. My knowledge of data mining practices is actually pretty minimal.
The review, however, was written for a class, and so it is academically mandatory (i.e. “If you want an A you better...”) to come up with problems with the original research and ways to improve. The professor seemed to like neural networks, so… (I think I inherited her “Just run everything through a neural network” mentality, but will definitely update my views based on your feedback. Thanks!)
Could you be more concrete? What are the typical failure modes of these people?
Which definition of “scientism” are you using? The Oxford Dictionary of Philosophy notes that the word is a term of abuse. Your comment appears to be a general-purpose collection of snarl words and phrases.