Since counterfactuals are in the map, not in the territory, the only way to evaluate their accuracy is by explicitly constructing a probability distribution of possible outcomes in your model, checking the accuracy of the model, and reading off the probability of the given counterfactual within it. This approach also exposes trivial and nonsensical statements, since they fail to map to a probability distribution of possible outcomes. Not everything that sounds meaningful as an English sentence actually is.
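Concretely, one way to read "constructing a probability distribution and reading off the probability" is as rejection sampling over possible worlds. A minimal sketch, with an entirely made-up toy world model (the coin/die predicates are placeholders for illustration, not anything from the post):

```python
import random

def p_counterfactual(sample_world, antecedent, consequent, n=100_000, seed=0):
    """Estimate P(consequent | antecedent) by rejection sampling:
    draw possible worlds from the model, keep the ones where the
    antecedent holds, and count how often the consequent also holds."""
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n):
        world = sample_world(rng)
        if antecedent(world):
            kept += 1
            hits += consequent(world)
    # A nonsensical statement is one whose antecedent never maps onto
    # any possible world in the model; the estimate is then undefined.
    return hits / kept if kept else None

# Toy world model: a fair coin and a die roll.
def sample_world(rng):
    return {"coin": rng.random() < 0.5, "die": rng.randint(1, 6)}

print(p_counterfactual(sample_world,
                       antecedent=lambda w: w["coin"],
                       consequent=lambda w: w["die"] >= 4))  # ~0.5
```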
For example:
“If not for its heat, the sun would be cold”—how do you even unpack this? Is it a tautology of the type “all objects that are not hot are cold”? Maybe we could steelman it a bit: “if we didn’t feel the heat of the sun, it would be cold”—this is less trivial, since there are stellar objects that are hot but don’t emit much heat, like white dwarfs, or ones so hot they emit mostly in the X-ray or gamma spectrum, which isn’t really felt as “heat”. The latter can be modeled: look at the distribution of stars, their luminosities and spectra, see which ones would count as a “sun”, then evaluate the fraction of stellar objects that do not emit heat and are also cold.
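Here is what that last step might look like as a toy Monte Carlo calculation. Every number below is invented for illustration; real stellar population statistics would replace them:

```python
import random

rng = random.Random(0)

def sample_star():
    """Invented toy population: a surface temperature in kelvin plus an
    ad-hoc 'felt heat' score standing in for thermal flux as felt on Earth."""
    kind = rng.choices(["main_sequence", "white_dwarf", "xray_source"],
                       weights=[0.90, 0.08, 0.02])[0]
    if kind == "main_sequence":
        return {"temp_K": rng.uniform(3_000, 10_000),
                "felt_heat": rng.uniform(0.5, 1.0)}
    if kind == "white_dwarf":
        return {"temp_K": rng.uniform(8_000, 40_000),
                "felt_heat": rng.uniform(0.0, 0.1)}
    return {"temp_K": rng.uniform(100_000, 1_000_000),
            "felt_heat": rng.uniform(0.0, 0.05)}

stars = [sample_star() for _ in range(100_000)]
no_heat = [s for s in stars if s["felt_heat"] < 0.1]
cold = [s for s in no_heat if s["temp_K"] < 1_000]
print(len(cold) / len(no_heat))
# 0.0 in this toy model: the stars that emit no felt heat are all hot,
# so the steelmanned counterfactual comes out false here.
```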
“If not for Thomas Schelling, there would be no book The Strategy of Conflict (1960).” What does it mean? How do you steelman it so it is not the trivial claim “Thomas Schelling wrote the book The Strategy of Conflict (1960)”? In a distribution of possible worlds in your model of the relevant parts of reality, what is the probability of a book like that being written, but possibly with a different title, by someone like Schelling, but with a different name?
“If the Athenians had stuck with Pericles’ strategy, they would have won the Peloponnesian War.”—this one is a more straightforward counterfactual: construct a probability distribution of possible outcomes of the war in the case where Pericles’ strategy was followed, given what you know about the war, the strategy, the unknowns, etc.
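As a sketch of that construction (with every probability invented for illustration, not estimated from history):

```python
import random

rng = random.Random(42)

def war_outcome(pericles_strategy):
    """Invented toy model of the war: the win probabilities and the
    chance of exogenous shocks (e.g. the plague) are made-up
    parameters, not historical estimates."""
    plague = rng.random() < 0.7            # plague hits Athens regardless
    p_win = 0.55 if pericles_strategy else 0.35
    if plague:
        p_win -= 0.15                      # the plague hurts either way
    return rng.random() < p_win            # True = Athens wins

wins = sum(war_outcome(pericles_strategy=True) for _ in range(100_000))
print(wins / 100_000)  # roughly 0.45 under these made-up parameters
```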
“If the Federal Reserve doesn’t raise interest rates, there will be endemic inflation.”—this is not a counterfactual at all, but a straightforward prediction, though the rules are the same: construct the probability distribution of possible outcomes first.
Since counterfactuals are in the map, not in the territory, the only way to evaluate their accuracy is by explicitly constructing a probability distribution of possible outcomes in your model...
“The only way”? Clearly not all counterfactuals rely on an explicit probability distribution. The probability distribution is usually non-existent in the mind; we rarely construct one explicitly. Implicitly, they are probably not represented in the mind as probability distributions either. (Rule 4: neuroscience claims are false.) I agree that it is a neat approach in that it may expose trivial or nonsensical counterfactuals. But that approach only works if the consequent is trivial or nonsensical. If the antecedent is trivial or nonsensical, the approach requires a regress.
“If not for Thomas Schelling, there would be no book The Strategy of Conflict (1960).” What does it mean? How do you steelman it so it is not the trivial claim “Thomas Schelling wrote the book The Strategy of Conflict (1960)”?
My point is that higher-level categories are necessary, and yet they have to come from somewhere. I am not taking the author–work relationship as self-evident.
Right, definitely not “the only way”. Still, I think most counterfactuals are implicit, not explicit, probability distributions. Sort of like when you shoot hoops: your mind solves rather complicated differential equations implicitly, not explicitly.
The probability distribution is usually non-existent in the mind.
I don’t know if they are represented in the mind somewhere implicitly, but my guess would be yes: somewhere in your brain there is a collection of experiences that gets converted into “priors”, for example. If 90% of your relevant experiences say “this proposition is true” and 10% say “this proposition is false”, you end up with a prior credence of 90% seemingly pulled out of thin air.
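A toy version of that conversion, assuming the prior is just a (Laplace-smoothed) frequency of past experiences; the 9-to-1 split below mirrors the 90/10 example above:

```python
def prior_from_experiences(supporting, contradicting):
    """Laplace's rule of succession: a smoothed frequency that
    backs off from extremes when the sample is small."""
    return (supporting + 1) / (supporting + contradicting + 2)

print(prior_from_experiences(9, 1))    # ~0.833, close to the raw 0.9
print(prior_from_experiences(90, 10))  # ~0.892, converging on 0.9
```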