I’ve read your linked post, and it doesn’t convince me. The reasoning doesn’t seem rooted in any defensible principles, but rather relies on plausible-sounding heuristics that there is no reason to think will produce consistent results.
The example of the person placed on the unknown-sized grid has a perfectly satisfactory solution using standard Bayesian inference: You have a prior for the number of cells in the row. After observing that you’re in cell n, the likelihood function for there being R cells is zero for R less than n, and 1/R for R greater than or equal to n. You multiply the likelihood by the prior and normalize to get a posterior distribution for R. Observing that you’re in cell 1 does increase the probability of small values for R, but not necessarily in the exact way you might think from a heuristic about needing to be “typical”.
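Here is a minimal sketch of that update in Python, assuming for illustration a uniform prior over grid sizes up to some cap R_max (both the cap and the uniform prior are my own illustrative choices, not part of the original example):

```python
# Bayesian update for the grid example: observe that you are in cell n,
# infer the number of cells R.  A uniform prior over R = 1..R_max is an
# illustrative assumption; any proper prior works the same way.

R_max = 100   # hypothetical cap on the grid size, chosen for illustration
n = 1         # observed cell index (1-based)

prior = {R: 1.0 / R_max for R in range(1, R_max + 1)}

# Likelihood of landing in cell n given R cells: 0 if R < n, else 1/R.
likelihood = {R: (0.0 if R < n else 1.0 / R) for R in prior}

unnormalized = {R: prior[R] * likelihood[R] for R in prior}
Z = sum(unnormalized.values())
posterior = {R: p / Z for R, p in unnormalized.items()}

# Small R gains probability relative to the prior, but by how much
# depends on the prior and on n -- not on any "typicality" rule.
print(posterior[1], posterior[10], posterior[100])
```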
To illustrate the inconsistencies of that heuristic, consider that for as long as humans don’t go extinct, we’ll probably be using controlled fire, the wheel, and lenses. But fire was controlled hundreds of thousands of years ago, the wheel was invented thousands of years ago, and lenses were invented hundreds of years ago. Depending on which invention you focus on, you get completely different predictions of when humans will go extinct, based on wanting us to be “typical” in the time span of the invention. I think none of these predictions have any validity.
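To make the inconsistency concrete, here is a rough sketch of what that “typicality” heuristic (essentially a Gott-style delta-t argument) predicts from each reference class; the ages are loose, order-of-magnitude figures of my own, not claims from the post:

```python
# The typicality heuristic says: if we are at a typical point in an
# invention's lifespan, it should persist for roughly as long again as
# it has already existed.  Ages are rough, illustrative orders of magnitude.

ages_in_years = {
    "controlled fire": 400_000,   # hundreds of thousands of years
    "the wheel": 5_000,           # thousands of years
    "lenses": 700,                # hundreds of years
}

for invention, age in ages_in_years.items():
    # Each reference class yields its own, mutually inconsistent forecast.
    print(f"{invention}: humans persist roughly another {age:,} years")
```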
“End of the reference class” is not extinction; the class could end in different ways. For any question we ask, we simultaneously define the reference class and what we mean by its ending.
In your example of fire, wheels and lenses: imagine that humanity will experience a very long period of civilizational decline. Lenses will disappear first, wheels second, and fire will be the last to go, in millions of years. It is a boring but plausible apocalypse.
Possible, sure. But the implication of inference from these reference classes is that this future with a long period of civilizational decline is the only likely one—that some catastrophic end in the near future is pretty much ruled out. Much as I’d like to believe that, I don’t think one can actually infer that from the history of fire, wheels, and lenses.