I see this as less of an endorsement of linear models and more of a scathing review of expert performance.
This. Basically, if your job is to make predictions, and the accuracy of those predictions is never measured, then (at least the prediction part of) your job is bullshit.
I think that if you compared simple linear models against expert judgment in domains where people actually care about their predictions, the outcome would be different. For example, if simple models predicted stock performance better than the experts at investment banks, anyone with a spreadsheet could quickly become rich. There are few if any cases of ‘I started with Excel and $1,000, and now I am a billionaire’. Likewise, I would be highly surprised to see a simple linear model outperform Nate Silver or the weather forecast.
Even predicting chess outcomes from mid-game board configurations is a task where I would expect human experts to outperform simple statistical models working on easily quantifiable features (e.g. number of pieces remaining, number of possible moves, whether a player is in check, etc.).
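For concreteness, here is a rough sketch of the kind of simple statistical model I have in mind (assuming Python with the python-chess and scikit-learn libraries; the positions and labels below are made-up placeholders, not a real dataset):

```python
# Sketch: logistic regression over easily quantifiable mid-game features.
# Assumes python-chess and scikit-learn; the two positions and labels are
# placeholders purely for illustration, not real training data.
import chess
import numpy as np
from sklearn.linear_model import LogisticRegression

def board_features(board: chess.Board) -> list[float]:
    """Crude, hand-countable features of a position."""
    piece_values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9}
    pieces = board.piece_map().values()
    white_material = sum(piece_values.get(p.piece_type, 0)
                         for p in pieces if p.color == chess.WHITE)
    black_material = sum(piece_values.get(p.piece_type, 0)
                         for p in pieces if p.color == chess.BLACK)
    return [
        white_material - black_material,   # material balance
        float(board.legal_moves.count()),  # mobility of the side to move
        float(board.is_check()),           # is the side to move in check?
        float(board.turn == chess.WHITE),  # whose move it is
    ]

# Hypothetical labeled data: FEN strings plus whether White went on to win.
positions = [
    "r1bqkbnr/pppp1ppp/2n5/4p3/2B1P3/5N2/PPPP1PPP/RNBQK2R b KQkq - 3 3",
    "rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2",
]
white_won = [1, 0]

X = np.array([board_features(chess.Board(fen)) for fen in positions])
y = np.array(white_won)
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])  # predicted P(White wins) per position
```

The point is not that this particular model would be any good, just that this is the sort of thing the literature pits against expert judgment.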
Neural networks contained in animal brains (which includes human brains) are quite capable of implementing linear models, and as such should perform at least as well when properly trained. A wolf pack deciding whether to chase some prey has direct evolutionary skin in the game of making its prediction of success as accurate as possible, which the average school counselor predicting academic success simply does not have.
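To spell out the claim that a neural network can implement a linear model: a single layer with no nonlinearity is exactly w·x + b, so a network family that contains such a layer can in principle match the linear model once trained. A trivial numpy sketch, with arbitrary illustrative numbers:

```python
# Minimal illustration: one layer with an identity activation *is* a linear
# model, so a trained network can always fall back to it. Weights, bias and
# inputs here are arbitrary numbers chosen only for the example.
import numpy as np

def linear_model(x, w, b):
    return w @ x + b

def one_layer_network(x, w, b):
    identity = lambda z: z          # no nonlinearity
    return identity(w @ x + b)      # identical function to the linear model

x = np.array([2.0, -1.0, 0.5])      # some input features
w = np.array([0.3, 1.2, -0.7])      # arbitrary weights
b = 0.1
assert np.isclose(linear_model(x, w, b), one_layer_network(x, w, b))
```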
--
You touch on this a bit in ‘In defense of explanatory modeling’, but I want to emphasize that uncovering causal relationships and pathways is central to world modelling. Often, we don’t just want predictions, we want predictions conditional on interventions. If you don’t have that, you will end up trying to cure chickenpox with makeup, since ‘visible blisters’ is negatively correlated with good outcomes.
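A toy simulation makes the makeup failure concrete (assuming numpy; all the numbers are made up purely for illustration):

```python
# Toy model of the chickenpox/makeup point: a variable can be strongly
# (negatively) correlated with recovery in observational data, yet intervening
# on it changes nothing, because it is a symptom, not a cause.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

severity = rng.uniform(0, 1, n)                         # latent disease severity (the true cause)
blisters = severity + 0.1 * rng.normal(size=n)          # visible blisters: caused by severity
recovery = 1.0 - severity + 0.1 * rng.normal(size=n)    # recovery: also caused by severity

# Observational: blisters look strongly "bad" for recovery.
print(np.corrcoef(blisters, recovery)[0, 1])            # roughly -0.9

# Intervention do(blisters = 0): cover them with makeup. Severity is untouched,
# so the recovery distribution does not move at all.
blisters_after_makeup = np.zeros(n)
recovery_after_makeup = 1.0 - severity + 0.1 * rng.normal(size=n)
print(recovery.mean(), recovery_after_makeup.mean())    # essentially identical
```

Recovery depends only on severity in this toy world, so setting the blisters to zero by fiat changes nothing; only an intervention on severity itself would help.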
Likewise, if we know the causal pathway, we have a much better basis for judging whether some finding can be applied to out-of-distribution data. No matter how many anvils you have seen falling, without a causal understanding (e.g. Newtonian mechanics) you will not be able to reliably apply your findings to falling apples or pianos.