I might be misunderstanding some key concepts, but here’s my perspective:
It takes more Bayesian evidence to promote the subjective credence assigned to a belief from negligible to non-negligible than from non-negligible to pretty likely. See the intuition on log odds and locating the hypothesis.
So, going from 0.01% to 1% requires more Bayesian evidence than going from 10% to 90%. The same thing applies for going from 99% to 99.99%.
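To make that concrete, here’s a minimal sketch (my own illustration, just converting the percentages above into log-odds “decibels” of evidence):

```python
import math

def db_of_evidence(p_from, p_to):
    """Decibels of Bayesian evidence needed to shift a credence
    from p_from to p_to (10 * the change in log10 odds)."""
    def odds(p):
        return p / (1 - p)
    return 10 * math.log10(odds(p_to) / odds(p_from))

print(db_of_evidence(0.0001, 0.01))  # 0.01% -> 1%:      ~20.0 dB
print(db_of_evidence(0.10, 0.90))    # 10%   -> 90%:     ~19.1 dB
print(db_of_evidence(0.99, 0.9999))  # 99%   -> 99.99%:  ~20.0 dB
```

Each of those jumps costs roughly the same ~20 dB of evidence, even though the first one only takes you from “negligible” to “still very unlikely.”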
A person could reasonably be considered super weird for thinking something with a really low prior has even a 10% chance of being true, but thinking it has a 90% chance isn’t much weirder than thinking it has a 10% chance. This all feels wrong in some important way, but mathematically that’s how it pans out if you want to use Bayes’ Rule for tracking your beliefs.
I think it feels wrong because, in practice, reported probabilities are typically used to talk about something semantically different from actual Bayesian beliefs. That’s fine and useful, but it can result in miscommunication.
Especially in fuzzy situations with lots of possible outcomes, even actual Bayesian beliefs have strange properties and are highly sensitive to your priors, your weighing of the evidence, and your choice of hypothesis space. Rigorously comparing reported credences across people is hard and ambiguous unless either everyone already roughly agrees on all of that or the evidence is overwhelming.
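As a toy illustration of that sensitivity (made-up numbers, not anyone’s actual estimates): two people can weigh the same arguments as the same ~20 dB of evidence and still report wildly different numbers if they started from different priors.

```python
def update(prior, db):
    """Apply `db` decibels of evidence to a prior probability via log odds."""
    odds = prior / (1 - prior) * 10 ** (db / 10)
    return odds / (1 + odds)

# Same hypothetical 20 dB of evidence, different starting priors:
print(update(0.0001, 20))  # prior 0.01% -> posterior ~1%
print(update(0.10, 20))    # prior 10%   -> posterior ~92%
```

A reported 1% and a reported 92% look like a huge disagreement, but here the entire gap comes from the priors, not from how the evidence was processed.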
Sometimes the exact probabilities people report are more accurately interpreted as “vibe checks” than as actual Bayesian beliefs. Annoying, but as you say, this is all pre-paradigmatic.
I feel like I am “proving too much” here, but for me all of this bottoms out in the intuition that going from 10% to 90% credence isn’t all that big a shift from a mathematical perspective.
Given the fragile and logarithmic nature of subjective probabilities in fuzzy situations, choosing exact percentages will be hard, and the exercise might be better treated as a multiple-choice question like the one below (a rough bucketing sketch follows the list):
Almost impossible
Very unlikely
Maybe
Very likely
Almost certain
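For illustration only, here is one way those buckets could be pinned to rough probability ranges (the cutoffs are my own guess, not something with a principled derivation):

```python
def bucket(p):
    """Map a credence to a coarse verbal category (illustrative cutoffs only)."""
    if p < 0.01:
        return "Almost impossible"
    if p < 0.10:
        return "Very unlikely"
    if p <= 0.90:
        return "Maybe"
    if p <= 0.99:
        return "Very likely"
    return "Almost certain"

print(bucket(0.0001))  # Almost impossible
print(bucket(0.5))     # Maybe
print(bucket(0.95))    # Very likely
```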
For the specific case of AI x-risk, the massive differences in the expected value of possible outcomes mean you usually only need that level of granularity to evaluate your options/actions. Nailing down the exact numbers is more entertaining than operationally useful.
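A toy calculation of that point (every number below is made up purely for the sketch): when the value at stake dwarfs the cost of acting, the same action wins whether your credence is 10%, 50%, or 90%, so finer precision doesn’t change the decision.

```python
# Made-up stakes: value of a good outcome, value of a catastrophe, and a
# costly action that is assumed to halve the probability of catastrophe.
GOOD = 1e6
BAD = 0.0
COST = 1e3

def expected_value(p_doom, act):
    p = p_doom / 2 if act else p_doom  # assumed effect of the action
    return (1 - p) * GOOD + p * BAD - (COST if act else 0.0)

for p_doom in (0.10, 0.50, 0.90):
    print(p_doom, expected_value(p_doom, act=True) > expected_value(p_doom, act=False))
# Prints True for every value: the decision is identical across the whole "Maybe" band.
```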
I agree that 10-50-90% is not unreasonable in a pre-paradigmatic field. Not sure how it translates into words. Anything more confident than that seems like it would hit the limits of our understanding of the field, which is my main point.
Makes sense. From the post, I thought you’d consider 90% as too high an estimate.
My primary point was that estimates of 10% and 90% (or maybe even >95%) aren’t much different from a Bayesian-evidence perspective. My secondary point was that it’s really hard to meaningfully compare different people’s estimates because of wildly varying implicit background assumptions.