I claim the problem is that our model is insufficient to capture our true beliefs.
There's a difference in how we act between a coin flip (a true 50/50) and "are bloxors greeblic?" (a question we have no information about).
For example, suppose our friend came along and said, "Yes, I know this one, the answer is (heads|yes)." For the coin flip you'd say "Are you out of your mind?", whereas for bloxors you'd say "OK, sure, you know better than me."
I've been idly pondering this since Scott Alexander's post. What would a better model be?
One option would be to have another percentage, a meta-percentage: e.g. "What credence do I give to 'this is an accurate model of the world'?" For coin flips, you're 99.999% confident that 50% is a good model. For bloxors, you're ~0% confident that 50% is a good model.
I don't love it, but it's better than presuming anything at the base level, I think.
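To make this concrete, here is one way the meta-percentage could be operationalized (a rough sketch of my own, not something from Scott's post): treat it as a mixture weight between "the 50/50 model is right, so the friend's claim carries no information" and "the friend actually knows the answer."

```python
def updated_credence(p_model_is_right, p_under_model=0.5, p_if_friend_knows=1.0):
    """Two-hypothesis mixture:
    - with probability p_model_is_right, the 50/50 model is correct and the
      friend's confident claim tells us nothing;
    - otherwise the friend really does know the answer.
    Returns the overall credence in "heads"/"yes" after the friend speaks."""
    return (p_model_is_right * p_under_model
            + (1 - p_model_is_right) * p_if_friend_knows)

# Coin flip: near-certain the 50/50 model is right, so the friend barely moves us.
print(updated_credence(0.99999))  # ~0.50

# Bloxors: the 50% was pure ignorance, so we mostly defer to the friend.
print(updated_credence(0.001))    # ~1.0
```

A fuller treatment would put a whole distribution over models rather than a single meta-percentage, which is roughly where the beta-distribution suggestion further down the thread goes.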
This is a model I always tend to fall back on, but I can never find a name for it, which makes it hard to look into. I had always figured I was misunderstanding Bayesian statistics and that credence was somehow already factored in, but that doesn't really seem to be the case.
Does the Scott Alexander post lay this out? I am having difficulty finding it.
The closest term I have been able to find is the Kelly criterion, which tells you what fraction of your "wealth" you should rationally stake on a probabilistic outcome. Replace "wealth" with credence and maybe it could be useful for decisions, but even this misses the point!
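For reference, here is roughly what the Kelly formula computes for a simple binary bet (the numbers below are just illustrative, not from the thread):

```python
def kelly_fraction(p_win, net_odds=1.0):
    """Kelly criterion for a binary bet: the fraction of wealth to stake when
    you win net_odds per unit staked with probability p_win.
    f* = (b*p - q) / b, clipped at 0 (never bet on a negative edge)."""
    q = 1.0 - p_win
    return max(0.0, (net_odds * p_win - q) / net_odds)

print(kelly_fraction(0.5))   # 0.0  -- a fair even-money coin flip has no edge
print(kelly_fraction(0.99))  # 0.98 -- near-certainty: stake almost everything
```

Note that the formula only ever sees the point probability, so it treats the coin flip and the bloxors question identically; that is the sense in which it misses the point.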
He doesn’t really. Here’s the original article:
https://www.astralcodexten.com/p/mr-tries-the-safe-uncertainty-fallacy
Also, there was a long follow-up where he insists 50% is the right answer, but it's subscriber-only:
https://www.astralcodexten.com/p/but-seriously-are-bloxors-greeblic
It's possible to do this kind of modelling with beta distributions (which is actually similar to meta-probabilities).
Combining B(1, 1) (something like a non-informative prior) with B(a, b) (the information obtained from the friend) gives B(1+a, 1+b), which has moved away from equal probabilities far more than the combination B(1000, 1000) ⋅ B(a, b) = B(1000+a, 1000+b).
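A quick numerical sketch of that arithmetic (the strength of the friend's information, a = 9 and b = 1, is just an illustrative guess):

```python
def combine(prior, evidence):
    """Combine a Beta(alpha, beta) prior with evidence worth (a, b)
    pseudo-observations, as in the parent comment: the counts simply add."""
    return (prior[0] + evidence[0], prior[1] + evidence[1])

def mean(params):
    """Mean of a Beta(alpha, beta) distribution: alpha / (alpha + beta)."""
    alpha, beta = params
    return alpha / (alpha + beta)

friend = (9, 1)  # hypothetical weight of "yes, I know this one, it's yes"

print(mean(combine((1, 1), friend)))        # ~0.83 -- bloxors: weak prior, large shift
print(mean(combine((1000, 1000), friend)))  # ~0.50 -- coin flip: strong prior, barely moves
```

Both beliefs start at a mean of 0.5, but the B(1, 1) version jumps to about 0.83 on the friend's say-so while the B(1000, 1000) version stays at about 0.50, which is exactly the coin-flip-versus-bloxors asymmetry.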