The principle of explosion isn’t a problem for all logics.
I think in a way, the problem with Parfit’s Hitchhiker is: how would you know that something is a perfect predictor? Having a probability p of getting every one of n predictions right only requires a predictor to be right on each individual prediction with probability x, where x^n >= p. So they have a better than 50% chance of making 100 consecutive predictions right if they’re right 99.31% of the time. By this metric, to be reasonably sure the chance they’re wrong on any given prediction is less than 1 in 10,000 (i.e. they’re right 99.99% of the time or more), you’d have to see them make 6,932 correct predictions. (This assumes that all these predictions are independent, unrelated events, in addition to a few other counterfactual requirements that are probably satisfied if this is your first time in such a situation.)
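To spell out the arithmetic, here is a minimal sketch (my addition, not part of the original comment) that checks those two figures under the stated independence assumption:

```python
import math

# Per-prediction accuracy x needed for a probability p of n consecutive
# correct predictions, assuming independence: x^n >= p, so x >= p**(1/n).
p, n = 0.5, 100
x = p ** (1 / n)
print(f"accuracy for a 50% chance of {n} in a row: {x:.4%}")  # ~99.3092%

# Streak length a predictor with per-prediction accuracy x has an even
# chance of producing: solve x^n = 0.5 for n.
x = 0.9999
n = math.log(0.5) / math.log(x)
print(f"streak with even odds at {x:.2%} accuracy: {math.ceil(n)}")  # 6932
```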
Sure, in the real world you can’t know a predictor is perfect. But the point is that perfection is often a useful abstraction, and the tools I introduced allow you to work either with real-world problems, as you seem to prefer, or with more abstract problems, which are often easier to work with. Anyway, by representing the input of the problem explicitly I’ve created an abstraction that is closer to the real world than most of these problems are.
I was suggesting that the model you should fall back on when your current one turns out to be incorrect depends on how you arrived at your current model, which is why it sounded like ‘I prefer real-world problems’: model-generation details do seem necessarily specific. (My angle was that in life, few things are impossible, many things are merely improbable, like getting out of the desert and not paying.) I probably should have stated that, and only that, instead of the math.
by representing the input of the problem explicitly I’ve created an abstraction that is closer to the real world than most of these problems are.
Indeed. I found your post well thought out and formal, though I do not yet fully understand the jargon.
Where/how did you learn decision theory?
Thanks, I appreciate the compliment. Even though I have a maths degree, I never formally studied decision theory; I’ve only learned about it by reading posts on Less Wrong. So much of the jargon is my attempt to come up with words that succinctly describe the concepts.