Technology:
“Exponential and non-exponential trends in information technology” (LW)
“The Three Projections of Dr Futamura” (isomorphisms between compilers/interpreters/etc; a toy sketch of the first projection follows this list)
Framing Brian Krebs with heroin
“It’s the Latency, Stupid” (a back-of-the-envelope arithmetic sketch follows this list)
Medieval computer science: “STOC 1500”
“A World Without Randomness”
“Life Inside Brewster’s Magnificent Contraption” (Jason Scott on the Internet Archive)
“Mundane Magic”
Sand as a form of power storage
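On the Futamura item above: the first projection says that partially evaluating an interpreter with respect to a fixed program yields a compiled version of that program (the second specializes the specializer to the interpreter to get a compiler; the third yields a compiler-generator). A minimal Python sketch of the first projection, using a made-up two-op mini-language and a closure standing in for a real partial evaluator:

```python
# Toy first Futamura projection: specializing an interpreter to a fixed
# program yields a "compiled" program. The mini-language and names here
# are illustrative, not from the linked article.

def interpret(program, x):
    """A tiny interpreter: `program` is a list of ('add', n) / ('mul', n) ops."""
    for op, n in program:
        if op == 'add':
            x = x + n
        elif op == 'mul':
            x = x * n
        else:
            raise ValueError(f"unknown op {op!r}")
    return x

def specialize(program):
    """First projection: fix `program`, leaving a residual function of the input.
    Here the 'partial evaluator' is just a closure; a real one would also
    unroll the interpreter loop and fold the constants away."""
    def compiled(x):
        return interpret(program, x)
    return compiled

double_plus_one = [('mul', 2), ('add', 1)]
f = specialize(double_plus_one)  # "compiling" the program
assert f(10) == interpret(double_plus_one, 10) == 21
```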
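And on the latency item above: the essay’s core observation reduces to one line of arithmetic; a request takes roughly one round trip plus size/bandwidth, so for small transfers the round trip dominates and extra bandwidth barely helps, while lower latency helps a lot. A sketch with invented numbers:

```python
# Why latency, not bandwidth, dominates small transfers.
# All numbers below are invented for illustration.

def transfer_time(size_bytes, bandwidth_bps, rtt_s):
    """Time for one request/response: one round trip plus serialization."""
    return rtt_s + size_bytes * 8 / bandwidth_bps

small_page = 10_000  # 10 KB

# Doubling bandwidth barely helps a latency-bound transfer:
print(transfer_time(small_page, 50e6, 0.100))   # ~0.1016 s
print(transfer_time(small_page, 100e6, 0.100))  # ~0.1008 s

# Halving the round-trip time helps far more:
print(transfer_time(small_page, 50e6, 0.050))   # ~0.0516 s
```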
Statistics:
“Search for the Wreckage of Air France Flight AF 447”, Stone et al 2014 (technical report)
“What do null fields tell us about scientific fraud?”
“A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, ‘The Economic Effects of Climate Change’”
“Theory-testing in psychology and physics: a methodological paradox”, Meehl 1967 (excerpts)
“What Bayesianism Taught Me”
“The robust beauty of improper linear models in decision making” (a small synthetic-data illustration follows this list)
“Big Data needs Big Model” (converting non-random Xbox-based polling into accurate election forecasts by modeling the non-randomness & adjusting for it; a toy poststratification sketch follows this list)
What are statistical models?
“Non-industry-Sponsored Preclinical Studies on Statins Yield Greater Efficacy Estimates Than Industry-Sponsored Studies: A Meta-Analysis”, Krauth et al 2014 (Typically, when you look at study results with an industry-funding variable, you find that industry studies are biased upwards; this is the sort of study that comes up in books like Bad Pharma. But here we seem to see the opposite: it’s the non-industry funding (academic/nonprofit/government) which seems to be biased towards finding effects. Interestingly, this is for studies early in the drug pipeline, while IIRC the usual studies examine drugs later in the approval pipeline, ones which have reached human clinical trials. This immediately suggests an economic rationale: early in the process, drug companies have incentives to reach true results in order to avoid investing much in drugs which won’t ultimately work; but later in the process, having gotten a drug close to approval, they have incentives to cook the books in order to force approval regardless. So for preliminary results, you would want to distrust academic work and trust industry findings, but then at some point flip your assessment and start assuming the opposite. Makes me wonder where the midpoint is, at which neither group is the more untrustworthy.)
How to Measure Anything review
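On the improper-linear-models item above: Dawes’s point is that a unit-weighted sum of standardized predictors often predicts nearly as well as regression-fitted weights. A small sketch on synthetic data (the data and coefficients below are invented for illustration):

```python
# Dawes's "improper linear models": unit weights on standardized predictors
# vs. OLS-fitted weights, on made-up data.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
X = rng.normal(size=(n, k))
true_betas = np.array([0.9, 0.7, 0.5, 0.3])         # all positive, varying sizes
y = X @ true_betas + rng.normal(scale=2.0, size=n)  # noisy outcome

z = (X - X.mean(0)) / X.std(0)   # standardize predictors
improper = z.sum(axis=1)         # "improper" model: unit weights, just add them up
proper = X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS-fitted weights

print(np.corrcoef(improper, y)[0, 1])  # unit-weighted model
print(np.corrcoef(proper, y)[0, 1])    # fitted model: usually only slightly better
```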
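And on the Xbox-polling item above: the adjustment there is, as I understand it, multilevel regression and poststratification (MRP); the essence of the poststratification step is to reweight each demographic cell from its share of the sample to its share of the population. A toy two-cell sketch with invented numbers (the real paper fits a multilevel regression over many cells):

```python
# Poststratification in miniature: correct a biased sample by reweighting
# cells to population proportions. All numbers invented for illustration.

# Suppose the (biased) sample is 80% young, 20% old, while the target
# population is 50/50, and candidate support differs by cell:
sample = {
    "young": {"n": 800, "support": 0.40},
    "old":   {"n": 200, "support": 0.60},
}
population_share = {"young": 0.50, "old": 0.50}

# Naive estimate: weight cells by their share of the sample.
total = sum(cell["n"] for cell in sample.values())
naive = sum(cell["n"] / total * cell["support"] for cell in sample.values())

# Poststratified estimate: weight each cell's estimate by its share of the
# *population*, not of the sample.
adjusted = sum(population_share[g] * sample[g]["support"] for g in sample)

print(naive)     # 0.44: dragged toward the over-sampled young cell
print(adjusted)  # 0.50: the population-weighted answer
```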
Science:
“Predictive brains, situated agents, and the future of cognitive science”, Clark 2013
“Detection of Near-Earth Asteroids”
“Cosmic Horror: In which we confront the terrible racism of H. P. Lovecraft”
“How Athletes Get Great: Just train for 10,000 hours, right? Not quite. In his new book, author David Epstein argues that top-shelf athletic performance may be a more complicated formula than we’ve recently come to believe.”