Adams doesn’t elaborate on this point, but I read him as saying that if you’ve actually measured things and collected data that bears on your point, then your model is more likely to be correct.
For example, suppose a model says that raising the minimum wage reduces employment. That’s a pretty common model in economics, and it can be backed up with a lot of math. However, I would not find that alone convincing. On the other hand, if an economist goes out into the world and looks at what actually happened when the minimum wage was raised, that would be more convincing. If they can figure out a way to run an experiment in which, for example, 5 nearby towns raise their minimum wage, 5 keep it the same, and another 5 lower it, that would be even more convincing.
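To make that concrete, here is a minimal sketch in Python of how the tally from such a 15-town experiment might look. Every number below is invented purely for illustration, not real employment data; the point is only that the group means are the quantity the model is actually making a claim about.

```python
# A minimal sketch of how the hypothetical 15-town comparison might be tallied.
# All numbers here are invented purely for illustration; nothing below is real
# employment data or an endorsement of either conclusion.

from statistics import mean

# Percentage change in employment over the study period, one value per town.
employment_change = {
    "raised":  [-1.2, 0.4, -0.8, 0.1, -0.5],      # 5 towns that raised the minimum wage
    "held":    [0.3, -0.1, 0.2, 0.0, 0.4],        # 5 towns that kept it the same
    "lowered": [0.6, 0.1, 0.9, -0.2, 0.5],        # 5 towns that lowered it
}

for group, changes in employment_change.items():
    print(f"{group:>7}: mean change = {mean(changes):+.2f}%")

# The model's prediction (higher wage -> lower employment) is supported only if
# the "raised" group reliably does worse than the others, beyond what noise and
# pre-existing differences between towns could explain.
```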
Another example: consider a model that says
heart disease kills people
heart disease is correlated with high cholesterol
eggs contain lots of cholesterol
Those three statements are reasonably well established and backed up by data. However, if you throw in a model that says dietary cholesterol causes in-body cholesterol, and in-body cholesterol causes heart disease, and therefore eating eggs reduces life expectancy, you’ve jumped way beyond what the data supports. On the other hand, if you compare the levels of all-cause morbidity among people who eat eggs and people who don’t, or, better yet, do a multiyear controlled experiment in which the only dietary variation between groups is that some people eat eggs and others don’t, the answers you get are far more likely to be correct.
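To see why that jump is risky, here is a toy simulation in Python. The probabilities are made up and the "lifestyle" factor is a hypothetical confounder, not a claim about real nutrition science; in this toy world eggs have no causal effect at all, yet the observational comparison still shows egg eaters doing worse, while randomizing who eats eggs makes the spurious gap disappear.

```python
# A toy simulation, not a claim about real nutrition science: it shows how an
# observational egg/heart-disease correlation can appear even when eating eggs
# has no effect at all, which is the gap the controlled experiment closes.

import random

random.seed(0)

def disease_rate_gap(randomized: bool, n: int = 100_000) -> float:
    """Return the disease-rate difference between egg eaters and non-eaters."""
    eaters, non_eaters = [], []
    for _ in range(n):
        # Hidden confounder: an overall lifestyle factor that independently
        # raises disease risk and also makes egg eating more likely.
        unhealthy_lifestyle = random.random() < 0.4
        if randomized:
            eats_eggs = random.random() < 0.5          # assigned by the experimenter
        else:
            eats_eggs = random.random() < (0.7 if unhealthy_lifestyle else 0.3)
        # In this toy world eggs have NO causal effect on disease.
        disease = random.random() < (0.20 if unhealthy_lifestyle else 0.05)
        (eaters if eats_eggs else non_eaters).append(disease)
    return sum(eaters) / len(eaters) - sum(non_eaters) / len(non_eaters)

print("observational gap:", round(disease_rate_gap(randomized=False), 3))  # clearly > 0
print("randomized gap:   ", round(disease_rate_gap(randomized=True), 3))   # roughly 0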
Here’s another one: you have lots of detailed calculations that say if you smash two protons together at .999999c relative velocity, and you do it a few million times, then you’ll see certain particles show up in the debris with very precise probabilities. Except that when you run the experiment, you discover that the fractions of different particles you see don’t quite match what you expected, because there’s an additional resonance you didn’t know about and didn’t include in the model.
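Schematically, that model-versus-experiment check might look like the sketch below. The particle labels, predicted fractions, and "observed" counts are placeholders invented for illustration, not real collider results; the shape of the test is what matters.

```python
# A schematic sketch of the model-vs-experiment comparison described above.
# The particle names, predicted fractions, and "observed" counts are invented
# placeholders, not real collider physics.

predicted_fractions = {"A": 0.62, "B": 0.30, "C": 0.08}          # from the model
observed_counts = {"A": 601_000, "B": 287_000, "C": 112_000}     # from the (imaginary) run

total = sum(observed_counts.values())
chi2 = 0.0
for particle, p in predicted_fractions.items():
    expected = p * total
    observed = observed_counts[particle]
    chi2 += (observed - expected) ** 2 / expected
    print(f"{particle}: predicted {p:.3f}, observed {observed / total:.3f}")

# A chi-squared value far larger than the number of categories signals that the
# data don't fit the model: for example, an extra resonance feeding channel C.
print("chi-squared:", round(chi2, 1))
```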
In other words, empirical data beats mere models. Models can be self-consistent and plausible, yet still fail to fully reflect the real world. Models that go beyond what the data says run the risk of assuming causal connections that don’t exist (dietary cholesterol to in-body cholesterol), or of missing more important factors outside the model (maybe eggs do increase the risk of heart disease but reduce the risk of cancer).
Of course, all these experiments are really hard to do, and they take years and millions, even billions, of dollars, so we often muddle along with seriously flawed models instead. However, we need to remember that models are just models, not data, and be reasonably skeptical of their recommendations. In particular, if we’re about to do something really expensive and difficult, like changing a nation’s dietary preferences, based on nothing more than a model, maybe we should step back and spend the money and time needed to collect real data before we go full speed ahead.
Fair enough—political conditioning has caused me to assume that any non-specialist saying “don’t trust models, just ‘look at the data’” is the victim of some sort of anti-epistemology.
In context, it’s less likely that that’s the case, but I still think this quote is painting with much too wide a brush.
I would argue that it is this political conditioning itself that is the anti-epistemology.
I don’t suppose you could contribute substance rather than just accusation?
Please, please, kids, stop fighting! Maybe Eugine_Nier & elharo are right about the necessity of looking at the world to decide whether a model’s true, and maybe Manfred & fezziwig have a point about observations and their interpretation not being cleanly separable from the use of models.
Prediction is going beyond the data, so a model that never goes beyond the data isn’t going to be much use.
Climate change models incorporated data, so they are not purely theoretical like the economic model you mentioned.