If you’re a medieval farmer, a prediction of the optimal time to plant is of extreme interest to you regardless of what kind of explanation is behind it.
I believe we have switched uses of the word “interesting.”
Think about it this way: would you rather have a good prediction without an explanation or would you rather have an explanation that is unable to make successful predictions?
This comparison, to me, maps onto “Would you rather have bricks that aren’t arranged as a house, or a house made out of nothing?” Well, it’s better to have the bricks than not, but the usefulness of a house depends on what it is made from, and a house made from nothing is useless (and very possibly harmful, if it prevents me from seeking out superior shelter).
That’s what I meant by ‘lower level’- a prediction is related to an explanation like a brick is related to a house. The statement “construction is about houses” does not mean that construction is not about bricks- but it does mean a focus on bricks for bricks’ sake is not construction.
I believe we have switched uses of the word “interesting.”
Not really, but it’s my fault for not specifying more clearly that I used “interesting” in a sense closer to “useful” than to “fucking awesome”.
a prediction is related to an explanation like a brick is related to a house
Well, that’s not the mapping for me. I view predictions as the useful/consumable/what-you-actually-want end result, and I view explanations as a machine for generating predictions. So the image in my head is a box with a hopper and a lever: you put the inputs into the hopper, pull the lever, and a prediction pops out.
Now sometimes that box is black and you don’t know what’s inside or how it works. This is a big minus, both because you trust the predictions less (as you should) and because your ability to manipulate the outcome by twiddling with the inputs is limited. Note, however, that you can still empirically verify just fine whether the (past) predictions are any good.
Sometimes the box is transparent and you see all the pushrods and gears and whatnot inside. You can trace how inputs get converted to outputs and your ability to manipulate the outcome is much greater. You still have to empirically test your predictions, though.
And sometimes the box is semi-transparent, so that you see some outlines and maybe a few parts, while the rest is fuzzy and uncertain.
Yeah, it’s not a very good one- the other one I was thinking of was “financial stability” and “money in your pocket”, which better captures that the interaction goes both ways: if you’re financially stable, a symptom of that is that you can get money to put into your pocket, but you can have money in your pocket without being financially stable. The issue here is that it does make sense to think about financial stability when you have no money, whereas it doesn’t make sense to think of a house made out of nothing- and I want an explanation which makes no predictions to not make sense. (Or maybe not- the null explanation of “I know nothing and acknowledge that I know nothing” might be worthwhile to explicitly include.)
Maybe it is better to just look at it as levels of ‘methodological abstraction’- a prediction is a fortune cookie, an explanation is a box that generates fortune cookies, and science is a process that generates boxes that generate fortune cookies.