Wanted: Notation for credal resilience
Meta: I’ve spent 30-60 minutes thinking about this, and asking people who I’d expect to know about existing notation. I don’t have scientific training, and I’m not active in the forecasting community.
Problem
I want a clear shorthand notation for communicating credal resilience.
I want to be able to quickly communicate something like:
My 80% confidence interval is 5-20. I think there’s a 10% chance I’d change my upper or lower bound by more than 50% of the current value if I spent another ~day investigating this.
I’m using the term “credal resilience”. Some people call this “robustness of belief”.
Existing notation for confidence intervals
APA style guide suggests the following:
80% CI [5, 20]
This seems like the best and probably most popular option, so let’s build on that.
Proposal
For clarity, I’ll repeat the example I gave above:
My 80% confidence interval is 5-20. I think there’s a 10% chance I’d change my upper or lower bound by more than 50% of the current value if I spent another ~day investigating this.
To communicate this, I propose:
80% CI [5, 20] CR [0.1, 0.5, 1 day]
Low numbers in the first two parameters indicate high credal resilience. The unit of additional investigation tells you the approximate cost of further investigation to “buy” this extra information.
You can specify hour / day / week / month / year for the unit of additional investigation.
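If it helps, here is a minimal sketch of the notation as a data structure (the field names are mine; only the parameter order comes from the proposal above):

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    ci_level: float    # e.g. 0.80 for an 80% confidence interval
    lower: float       # lower bound of the interval
    upper: float       # upper bound of the interval
    p_change: float    # probability that a bound moves by more than min_change
    min_change: float  # size of the move, as a fraction of the current bound
    effort: str        # approximate cost of further investigation, e.g. "1 day"

    def __str__(self) -> str:
        return (f"{self.ci_level:.0%} CI [{self.lower:g}, {self.upper:g}] "
                f"CR [{self.p_change:g}, {self.min_change:g}, {self.effort}]")

print(Estimate(0.80, 5, 20, 0.1, 0.5, "1 day"))
# -> 80% CI [5, 20] CR [0.1, 0.5, 1 day]
```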
Thoughts?
I’d love to hear people’s thoughts on this. Two questions I’ll highlight in particular:
- Do you think it’d be worth developing a good notation for credal resilience, then popularising it?
- What do you think of my particular proposal? What might be better?
The way prediction markets handle this is that there is limited liquidity at each price. So you offer $100 at 80%, $1,000 at 70%, $10k at 65%, etc. If you said “I’m willing to bet at 90%” and someone said “okay, I’ll bet you $1M”, you could say no without being a fraud.
I wonder how that could work for credal resilience: you could state the depth of money at which you will either bet or retract the statement.
Biden will win the presidency $50 [45%] $1000 [20%] $10,000 [10%]
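A minimal sketch of that ladder as data (my reading of the notation: each rung is extra stake that unlocks once the offered implied probability is at least that favourable to you):

```python
# Each rung: (stake in $, implied probability at which that stake becomes available).
ladder = [(50, 0.45), (1_000, 0.20), (10_000, 0.10)]

def stake_available(ladder, offered_prob):
    """Total stake on offer when a counterparty quotes `offered_prob` or better."""
    return sum(stake for stake, prob in ladder if offered_prob <= prob)

print(stake_available(ladder, 0.45))  # 50
print(stake_available(ladder, 0.20))  # 1050
print(stake_available(ladder, 0.05))  # 11050
```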
Yeah this feels good.
I’ll just note how well this tracks with Garrabrant et al.’s solution to logical uncertainty via markets.
A problem with this is that it depends on other factors:
the amount of wealth you currently have
your relative risk aversion
your counterparty’s wealth and their relative risk aversion
how well-informed your counterparty is
You should be willing to offer more liquidity until the marginal dollar offered is zero EV; the first two factors change how much you would offer for a 51% coin flip, and when there are information differences, the other two factors also come into play.
While you are technically right, I think that some of this will even out.
If you have lots of wealth, are stupid, and offer bad bets, then LessWrong has enough money to slowly take your money off you.
I don’t understand the risk aversion point
Your counterparty being well informed is exactly why you should have shallow pools at higher confidence and deeper pools at lower confidences. This is a feature, not a bug.
I don’t understand the coin flip example
Suppose you’re using your notation to communicate credence in a 51% coin flip. The correct amount to wager at various odds depends on your level of risk aversion. If you’re totally risk-neutral, you should bet all of your money even at 50.99% odds. More realistically, you should be using something like the Kelly criterion (being more aggressive than Kelly if your utility diminishes more slowly than log(wealth), and more conservative if it diminishes faster). So we already don’t know what to write for a 51% coin flip.
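As a concrete instance of that last point (a standard Kelly calculation with an illustrative bankroll, not anything from the thread): for an even-money bet, the Kelly fraction is p - (1 - p) = 2p - 1, so a 51% coin only justifies staking 2% of your bankroll, even though a risk-neutral bettor would stake everything.

```python
def kelly_fraction(p: float, b: float = 1.0) -> float:
    """Kelly stake as a fraction of bankroll for a bet that wins with
    probability p and pays b units of profit per unit staked."""
    return p - (1 - p) / b

bankroll = 10_000                      # illustrative bankroll
f = kelly_fraction(0.51)               # even-money 51% coin flip
print(f"Kelly fraction: {f:.2%}")      # 2.00% of bankroll
print(f"stake: ${f * bankroll:,.0f}")  # $200, not the whole bankroll
```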
When you’re trading against a counterparty, they will only take bets they think are +EV. Usually this means that for any bet, your EV conditional on being traded against is lower than your unconditional EV. This is called adverse selection, and it varies based on who your counterparty is.
But actually, even if your counterparty is rational, they’re not trying to maximize their EV of dollars either, but their expected utility. If they have diminishing returns to money, they will need even higher EV before they bet against you, which increases your adverse selection (and without knowing their level of wealth or relative risk aversion, you don’t know how much).
These are all standard considerations in trading.
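A toy simulation of the adverse-selection point (the setup is mine: the spread on the true bias and the 50% chance of facing an informed counterparty are arbitrary). The bet is +EV unconditionally, but conditional on someone actually taking it, the EV drops:

```python
import random

random.seed(0)
N = 100_000
ev_all = ev_traded = 0.0
n_traded = 0

for _ in range(N):
    # True heads probability: noisy around your 51% estimate.
    true_p = min(max(random.gauss(0.51, 0.05), 0.0), 1.0)
    ev = 2 * true_p - 1               # your EV per $1 staked on heads at even money
    ev_all += ev
    informed = random.random() < 0.5  # half the time the counterparty knows true_p
    accepts = (true_p < 0.5) if informed else True
    if accepts:
        ev_traded += ev
        n_traded += 1

print(f"unconditional EV per $1:         {ev_all / N:+.4f}")
print(f"EV per $1 when the bet is taken: {ev_traded / n_traded:+.4f}")
```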
Basically, if I were using your notation, I’d have to give numbers <10x lower if I were:
poorer or had a less stable job
less altruistic (personal utility of money diminishes faster than altruistic utility).
around people with less money or less stable jobs
around a high proportion of professional traders (adverse selection)
around people who are irrationally risk averse
How on earth have you got 8 LW upvotes in this short time?
We are all using this notation on LW in this scenario. So while I agree that if you are poor you have to be more careful, if you are rich you still can’t say whatever you want: it isn’t only one person who can call you out.
Likewise, if these go from being 1-1 bets to being markets, then many of your criticisms become smaller. Individuals have different utility of money, sure, but with enough liquidity I guess this comes out in the wash.
I sort of still think that even with all these criticisms it’s fine and useful. Yeah, some people will have to give lower numbers; shrug. At least we’ll have an imperfect way of denoting something. Feel free to give a better suggestion.
Just a note on the upvote question: the LessWrong upvote system allows strong upvoting, and upvotes from users with more karma move the total more. Seeing eight karma on a post doesn’t mean too much, since it could be from just one or two people.
I think the notation here feels unintuitive. I don’t think I’d guess what it means from reading it.
Perhaps: “1 day 80% [5, 20], lifetime 80% [2.5, 40]”, though as I say in the other comment, that just feels like a different confidence interval.
I like the idea of your proposal—communicating how solidified one’s credences are should be helpful for quickly communicating on new topics (although I could imagine that one has to be quite good at dealing with probabilities for this to actually provide extra information).
Regarding your particular proposal, “CR [<probability of change>, <min size of change>, <time spent on question>]” is unintuitive to me:
In “80% CI [5,20]”, the probability is denoted with %, while it’s “unit”-less in your notation
In “80% CI [5,20]”, the brackets [] indicate an interval, while it is more of a tuple in your notation
A reformulation might be
80% CI [5,20] CR [0.1, 0.5, 1 day] --> 80% CI [5,20] ×10 % CR1d [0.5]
Things I dislike about this proposal:
The “CR1d” complicates notation. Possibly one could agree on a default time interval such as 1 day and only write the time range explicitly if it differs? Alternatively, “1 day CR” or “CR_1d” might be usable
the “[0.5]” still feels unintuitive as notation and is still not an interval. Maybe there is a related theoretically-motivated quantity which could be used? Possibly something like the ‘expected information gain’ can be translated into its influence on the [5,20] intervals (with some reasonable assumptions on the distribution)?
80% CI [5,20] ⋇0.5 @ 10 % CR1d might be an alternative, with the “\divideontimes” symbol being the ± of multiplication and hinting at the possible modification of the interval [5, 20]. For LaTeX-free notation, “[5, 20] x0.5” might be a suitable simplified version.
Overall, I don’t yet have a good intuition for how to think about expected information gain (especially if it is the expectation of someone else).
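For reference, the usual definition from Bayesian experimental design (not something from the post), with \theta the quantity being estimated and y the outcome of the extra day of investigation:

\[ \mathrm{EIG} = \mathbb{E}_{y \sim p(y)}\left[ D_{\mathrm{KL}}\big( p(\theta \mid y) \,\|\, p(\theta) \big) \right] \]

That is, the expected KL divergence between the post-investigation posterior and the current belief; translating it into an expected move of the [5, 20] bounds would still require an assumption about the shape of the distribution.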
Also, it would be nice if there were a theoretical argument that one of the given numbers is redundant. Getting an impression of what all six numbers mean exactly would take me long enough that it is probably better to just have the whole sentence.
But this would of course be less of a problem with experience :)
This % chance of change should fold back into your original forecast, but I like that there is something signalling the depth of your confidence.
Though it’s unclear to me whether confidence intervals already capture this. If you had less chance of moving your interval, then it would already be a smaller interval, right?
Counterexample: if I estimate the size of a tree, I might come up with CI 80 % [5 m, 6 m] by eyeballing it and expect that a friend will make a more careful measurement tomorrow. In that case, CI 80 % [5 m, 6 m] still seems fine even though I expect the estimate to narrow down soon.
If the tree is instead from some medieval painting, my CI 80 % [5 m, 6 m] could still be true while I do not expect any significant changes to this estimate.
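For illustration, in the proposed notation (with CR numbers invented for the example) the two trees might come out as:
80% CI [5 m, 6 m] CR [0.7, 0.1, 1 day] (real tree, friend measures it tomorrow)
80% CI [5 m, 6 m] CR [0.05, 0.1, 1 day] (tree in a medieval painting)
Same interval, very different resilience.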
I think that credal resilience is mostly about the expected ease of gaining (or losing??) additional information, so it does provide additional info beyond the current estimates.
But there is something to your statement: If I expected that someone could convince me that the tree is actually 20 m tall, this should already be included in my intervals.
I’m interested.
In some ways it’s a lot like liquidity in a market. You are saying you’ll buy $100 at 90%, then $200 at 80%, etc. Someone can’t just come in and force you to bet $1M at 90%; you’d think they had more information than you.