I don’t think there’s a very good precise way to do so, but one useful concept is bid-ask spreads, which are a way of protecting yourself from adverse selection of bets. E.g. consider the following two credences, both of which are 0.5.
My credence that a fair coin will land heads.
My credence that the wind tomorrow in my neighborhood will be blowing more northwards than southwards (I know very little about meteorology and have no recollection of which direction previous winds have mostly blown).
Intuitively, however, the former is very difficult to change, whereas the latter might swing wildly given even a little bit of evidence (e.g. someone saying “I remember in high school my teacher mentioned that winds often blow towards the equator.”)
Suppose I have to decide on a policy that I’ll accept bets for or against each of these propositions at X:1 odds (i.e. my opponent puts up $X for every $1 I put up). For the first proposition, I might set X to be 1.05, because as long as I have a small edge I’m confident I won’t be exploited.
By contrast, if I set X=1.05 for the second proposition, then probably what will happen is that people will only decide to bet against me if they have more information than me (e.g. checking weather forecasts), and so they’ll end up winning a lot of money from me. And so I’d actually want X to be something more like 2 or maybe higher, depending on who I expect to be betting against, even though my credence right now is 0.5.
In your case, you might formalize this by talking about your bid-ask spread when trading against people who know about these bottlenecks.
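To put rough numbers on the adverse-selection point, here’s a toy expected-value sketch (the 20% figure for “how often my side wins, conditional on an informed person actually taking the bet” is invented purely for illustration):

```python
def bet_ev(my_stake, their_stake, p_win):
    """My expected profit from a bet where I win `their_stake` with probability
    `p_win` and lose `my_stake` otherwise."""
    return p_win * their_stake - (1 - p_win) * my_stake

# Coin: no counterparty has an edge, so the truth stays 50/50 whoever shows up,
# and a tiny premium (X = 1.05) is already positive EV for me.
print(bet_ev(my_stake=1.0, their_stake=1.05, p_win=0.5))   # +0.025

# Wind: someone who checked a forecast only bets when they know I'm on the wrong
# side, so conditional on the bet being accepted my side might only win ~20% of
# the time (made-up number). Now X = 1.05 loses badly, and I'd need X around 4
# just to break even against that kind of counterparty.
print(bet_ev(my_stake=1.0, their_stake=1.05, p_win=0.2))   # about -0.59
print(bet_ev(my_stake=1.0, their_stake=4.0,  p_win=0.2))   # 0.0 (breakeven)
```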
Surely something like the expected variance of log(p/(1−p)) would be a much simpler way of formalising this, no? The probability over time is just a stochastic process, and OP is expecting the variance of this process to be very high in the near future.
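A toy version of what I mean, with invented values for where the credence might plausibly sit after a bit of evidence arrives:

```python
import numpy as np

def logit(p):
    """Log-odds of a probability p."""
    return np.log(p / (1 - p))

# Hypothetical samples of where my credence might sit after, say, a week
# (all values made up for illustration; both credences start at 0.5).
coin_future = np.array([0.5, 0.5, 0.5, 0.5])    # nothing realistically moves it
wind_future = np.array([0.15, 0.4, 0.6, 0.85])  # a forecast or an offhand remark could move it a lot

print(np.var(logit(coin_future)))  # ~0: resilient credence
print(np.var(logit(wind_future)))  # large: fragile credence
```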
The variance over time depends on how you gather information in the future, making it less general. For example, I may literally never learn enough about meteorology to update my credence about the winds from 0.5. Nevertheless, there’s still an important sense in which this credence is more fragile than my credence about coins, because I could update it.
I guess you could define it as something like “the variance if you investigated it further”. But defining what it means to investigate further seems about as complicated as defining the reference class of people you’re trading against. Also variance doesn’t give you the same directional information—e.g. OP would bet on doom at 2% or bet against it at 16%.
Overall though, as I said above, I don’t know a great way to formalize this, and would be very interested in attempts to do so.
Wait, why doesn’t the entropy of your posterior distribution capture this effect? In the basic example where we get to see samples from a Bernoulli process, the posterior is a Beta distribution that gets ever sharper around the truth. If you compute the entropy of the posterior, you might say something like “I’m unlikely to change my mind about this, my posterior only has 0.2 bits to go until zero entropy”. That’s already a quantity which estimates how much future evidence will influence your beliefs.
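Concretely, with a made-up 60/40 sample and scipy’s differential entropy for the Beta posterior:

```python
from scipy.stats import beta

# Uniform Beta(1, 1) prior over the Bernoulli parameter, updated on hypothetical data.
heads, tails = 60, 40
prior = beta(1, 1)
posterior = beta(1 + heads, 1 + tails)

# Differential entropy (in nats); it shrinks as the posterior sharpens around the truth.
# (Unlike discrete entropy it can go below zero, so zero isn't a hard floor.)
print(prior.entropy())       # 0.0 for the uniform prior
print(posterior.entropy())   # well below 0: much sharper than the prior
```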
The thing that distinguishes the coin case from the wind case is how hard it is to gather additional information, not how much more information could be gathered in principle. In theory you could run all sorts of simulations that would give you informative data about an individual flip of the coin; it’s just that doing so would be really hard, and very few people are able to do it. I don’t think the entropy of the posterior captures this dynamic.
Someone asked basically this question before, and someone gave basically the same answer. It’s a good idea, but there are some problems with it: it depends on your and your counterparties’ risk aversion, wealth, and information levels, which are often extraneous.