I believe we have switched uses of the word “interesting.”
Not really, but it’s my fault for not specifying that I used “interesting” in a sense stretched towards “useful” rather than towards “fucking awesome”.
a prediction is related to an explanation like a brick is related to a house
Well, that’s not the mapping for me. I view predictions as the useful/consumable/what-you-actually-want end result, and I view explanations as a machine for generating predictions. So the image in my head is a box with a hopper and a lever: you put the inputs into the hopper, pull the lever, and a prediction pops out.
Now sometimes that box is black and you don’t know what’s inside or how it works. This is a big minus, because you trust the predictions less (as you should) and because your ability to manipulate the outcome by twiddling with the inputs is limited. Note, however, that you can still empirically verify just fine whether the (past) predictions are any good.
Sometimes the box is transparent and you see all the pushrods and gears and whatnot inside. You can trace how inputs get converted into outputs, and your ability to manipulate the outcome is much greater. You still have to empirically test your predictions, though.
And sometimes the box is semi-transparent, so that you see some outlines and maybe a few parts; the rest is fuzzy and uncertain.
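To make that mapping concrete, here’s a minimal sketch (Python, with made-up numbers and hypothetical names; a toy, not anyone’s actual model). The point is that both boxes expose the same input-to-prediction interface, and empirical scoring of past predictions works identically on both:

```python
# An "explanation" as a callable box: inputs go in, a prediction comes out.

def black_box(temperature_c):
    # Opaque: pretend this came from a fitted model whose internals
    # you cannot inspect; you only see inputs and outputs.
    return 42.1 + 0.8 * temperature_c

def transparent_box(temperature_c):
    # Transparent: every "gear" is visible, so you can trace how the
    # input gets converted and twiddle the mechanism deliberately.
    baseline_sales = 42.1    # (hypothetical) sales on a zero-degree day
    per_degree_bump = 0.8    # (hypothetical) extra sales per degree
    return baseline_sales + per_degree_bump * temperature_c

# Either way, validation is the same: compare past predictions
# against what actually happened.
observed = [(10, 50.3), (20, 58.0), (30, 66.5)]  # (input, actual outcome)
for box in (black_box, transparent_box):
    error = sum(abs(box(x) - y) for x, y in observed) / len(observed)
    print(box.__name__, "mean absolute error:", round(error, 2))
```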
Yeah, it’s not a very good one. The other analogy I was thinking of was “financial stability” and “money in your pocket”, which better captures that the interaction goes both ways: if you’re financially stable, one symptom is that you can get money to put in your pocket, but you can have money in your pocket without being financially stable. The issue there is that it does make sense to think about financial stability when you have no money, whereas it doesn’t make sense to think of a house made out of nothing, and I want an explanation which makes no predictions to not make sense. (Or maybe not: the null explanation of “I know nothing and acknowledge that I know nothing” might be worth explicitly including.)
Maybe it is better to just look at it as levels of ‘methodological abstraction’: a prediction is a fortune cookie, an explanation is a box that generates fortune cookies, and science is a process that generates boxes that generate fortune cookies.
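Those three levels fall out naturally as higher-order functions; here’s a minimal sketch (Python again, with a trivial made-up “fit” and hypothetical names, just to show the nesting): science returns an explanation, and the explanation returns predictions.

```python
def science(observations):
    """The box-making process: fit an explanation to past (x, y) data."""
    # Toy "fit": average slope of a line through the origin.
    slope = sum(y / x for x, y in observations) / len(observations)

    def explanation(x):
        """The box: feed in an input, get a fortune cookie (prediction) out."""
        return slope * x

    return explanation

model = science([(1, 2.1), (2, 3.9), (3, 6.2)])  # science -> explanation
print(model(4))                                  # explanation -> prediction
```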