The reason that testability is not enough is that prediction is not, and cannot be, the purpose of science. Consider an audience watching a conjuring trick. The problem facing them has much the same logic as a scientific problem. Although in nature there is no conjurer trying to deceive us intentionally, we can be mystified in both cases for essentially the same reason: appearances are not self-explanatory. If the explanation of a conjuring trick were evident in its appearance, there would be no trick. If the explanations of physical phenomena were evident in their appearance, empiricism would be true and there would be no need for science as we know it. The problem is not to predict the trick’s appearance. I may, for instance, predict that if a conjurer seems to place various balls under various cups, those cups will later appear to be empty; and I may predict that if the conjurer appears to saw someone in half, that person will later appear on stage unharmed. Those are testable predictions. I may experience many conjuring shows and see my predictions vindicated every time. But that does not even address, let alone solve, the problem of how the trick works. Solving it requires an explanation: a statement of the reality which accounts for the trick’s appearance.
David Deutsch, The Beginning of Infinity
I disagree with Deutsch; I think prediction is much more important to science than he makes it out to be.
The issue is the questions (about the future) you ask. Deutsch says
I may, for instance, predict that if a conjurer seems to place various balls under various cups, those cups will later appear to be empty; and I may predict that if the conjurer appears to saw someone in half, that person will later appear on stage unharmed. Those are testable predictions.
and, of course, that is true, but these are “uninteresting” questions to ask. Let me ask for different predictions: please predict what will happen to the balls if the cups are transparent. Please predict what will happen to the person being sawed in half if we take away three sides of the box he’s in.
Given the proper questions, one will have to understand “how the trick works” to produce correct forecasts.
Science is about predictions, provided you ask to predict the right thing.
I disagree with Deutsch; I think prediction is much more important to science than he makes it out to be.
Deutsch’s point (made at greater length in the book) is that predictions are lower level than the true target of science (explanations), not that they aren’t valuable. One of the main ways to test explanations is to derive predictions from them and then check those predictions; getting too many predictions wrong is fatal for an explanation.
Your example of “interesting” predictions highlights his point: the explanation of how the trick works can readily generate a prediction of what would happen if the cups were transparent, but the prediction that the cups would later be empty does not readily generate a prediction of what would happen if the cups were transparent. By focusing directly on explanations, he makes it obvious which predictions are the interesting ones. Indeed, I’d even speculate that someone who didn’t have and couldn’t acquire the concept of explanations would have trouble grasping the idea that some predictions are more ‘interesting’ than others and that there’s a reliable way to determine which predictions those are.
By focusing directly on explanations, he makes it obvious which predictions are the interesting ones. Indeed, I’d even speculate that someone who didn’t have and couldn’t acquire the concept of explanations would have trouble grasping the idea that some predictions are more ‘interesting’ than others and that there’s a reliable way to determine which predictions those are.
Oh, I don’t think so. If you’re a medieval farmer, a prediction of the optimal time to plant is of extreme interest to you regardless of what kind of explanation is behind it. The Ptolemaic epicycles produced good predictions of much interest for a long time even though the explanation behind them was wrong.
Think about it this way: would you rather have a good prediction without an explanation or would you rather have an explanation that is unable to make successful predictions?
However, I acknowledge that this is a “what’s more important: the chicken or the egg?” discussion :-)
If you’re a medieval farmer, a prediction of the optimal time to plant is of extreme interest to you regardless of what kind of explanation is behind it.
I believe we have switched uses of the word “interesting.”
Think about it this way: would you rather have a good prediction without an explanation or would you rather have an explanation that is unable to make successful predictions?
This comparison, to me, maps onto “Would you rather have bricks that aren’t arranged as a house, or a house made out of nothing?” Well, it’s better to have the bricks than not, but the usefulness of a house depends on what it is made from, and a house made from nothing is useless (and very possibly harmful, if it prevents me from seeking out superior shelter).
That’s what I meant by ‘lower level’: a prediction is related to an explanation like a brick is related to a house. The statement “construction is about houses” does not mean that construction is not about bricks, but it does mean a focus on bricks for bricks’ sake is not construction.
I believe we have switched uses of the word “interesting.”
Not really, but it’s my fault for not specifying better that I used “interesting” in a sense leaning towards “useful” and not towards “fucking awesome”.
a prediction is related to an explanation like a brick is related to a house
Well, that’s not the mapping for me. I view predictions as the useful/consumable/what-you-actually-want end result, and I view explanations as a machine for generating predictions. So the image in my head is a box with a hopper and a lever: you put the inputs into the hopper, pull the lever, and a prediction pops out.
Now sometimes that box is black and you don’t know what’s inside or how it works. This is a big minus, because you trust the predictions less (as you should) and because your ability to manipulate the outcome by twiddling with the inputs is limited. Note, however, that you can still empirically verify just fine whether the (past) predictions are any good.
Sometimes the box is transparent and you see all the pushrods and gears and whatnot inside. You can trace how inputs get converted to outputs and your ability to manipulate the outcome is much greater. You still have to empirically test your predictions, though.
And sometimes the box is semi-transparent, so that you see some outlines and maybe a few parts; the rest is fuzzy and uncertain.
Yeah, it’s not a very good one. The other one I was thinking of was “financial stability” and “money in your pocket”, which better captures that the interactions go both ways: if you’re financially stable, a symptom of that is that you can get money to put into your pocket, but you can have money in your pocket without being financially stable. But the issue here is that it does make sense to think about financial stability when you have no money, whereas it doesn’t make sense to think of a house made out of nothing, and I want an explanation which makes no predictions to not make sense. (Or maybe not: the null explanation of “I know nothing and acknowledge that I know nothing” might be worthwhile to explicitly include.)
Maybe it is better to just look at it as levels of ‘methodological abstraction’: a prediction is a fortune cookie, an explanation is a box that generates fortune cookies, and science is a process that generates boxes that generate fortune cookies.
This might be relevant (on the distinction between prediction and explanation):
http://amturing.acm.org/vp/pearl_2658896.cfm
starting at time point 20:34.
“Testability” is not precisely defined, but most people agree that it can involve RCTs. That is, to “test” something can mean “to give some causal account (explanation).”
Wow, I didn’t realize how far gone Deutsch is.