I think what’s really needed is a short, single-page introduction: a sort of elevator pitch. Alternatively, a longer non-technical explanation for dummies, similar to Yudkowsky’s posts in the Sequences.
That would get people interested. Nobody is likely to be motivated to dive into a 12,000-word, math-heavy paper without any prior sense of what the theory promises to accomplish.
I can actually sort of write the elevator pitch myself. (If I couldn’t, I probably wouldn’t be interested in the first place.) If anything I say here is wrong, someone please correct me!
Non-realizability is the problem that none of the hypotheses a real-world Bayesian reasoner is considering is a perfect model of the world. (Information-theoretically, none can be, if the reasoner is itself part of the world: a perfect world-model would have to contain a perfect self-model, which would let the reasoner take its own output as an input into its decision process, but then it could decide to do something else and boom, paradox.) One way to see the sense in which real-world reasoners’ models are imperfect: rather than there being a knife-edge between bets they’ll take and bets on which they’ll take the other side, one might, say, be willing to take a bet that pays out 9:1 that it’ll rain tomorrow, and a bet that pays out 1:3 that it won’t, but for anything in between, one wouldn’t be willing to take either side. A lot of important properties of Bayesian reasoning depend on realizability, so this is a serious problem.
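To make the arithmetic behind that interval explicit (a quick check under the natural reading of the odds, win 9 per 1 staked on rain and win 1 per 3 staked on no rain, with p one’s credence in rain): the two bets are worth taking exactly when

$$9p - (1-p) \ge 0 \iff p \ge \tfrac{1}{10}, \qquad \tfrac{1}{3}(1-p) - p \ge 0 \iff p \le \tfrac{1}{4}.$$

So accepting both only pins p down to the interval [1/10, 1/4]. A sharp Bayesian with a single p inside that interval would still take one side of any bet at intermediate odds; being willing to refuse both sides is exactly what a single distribution can’t express.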
Infra-Bayesianism purports to solve this by replacing the single probability distribution maintained by an ideal Bayesian reasoner with a certain kind of set of probability distributions. As I understand it, this is done in a way that’s “compatible with Bayesianism,” in the sense that if there were only one distribution in your set, it would act like regular Bayesianism; in general, though, the thing that corresponds to a probability is the minimum of that probability across all the distributions in your set. This lets one express things like “I’m at least 10% confident it’ll rain tomorrow, and at least 75% confident it won’t, but if you ask me whether rain is 15% or 20% likely, I just don’t know.”
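To gesture at the min-over-a-set idea in code, here’s a toy sketch of the imprecise-probability picture as I’ve described it. (This is not the actual infra-Bayesian machinery, which as I understand it works with convex sets and a worst-case expectation operator; the outcome names and numbers are just my running rain example.)

```python
# Toy "set of distributions" belief state over tomorrow's weather.
# A naive illustration of the min-over-the-set idea only; the real
# infra-Bayesian objects are fancier.

RAIN, DRY = "rain", "dry"

# Two candidate distributions the agent can't decide between.
belief = [
    {RAIN: 0.10, DRY: 0.90},
    {RAIN: 0.25, DRY: 0.75},
]

def lower_prob(belief, outcome):
    """The 'at least this confident' number: the minimum over the set."""
    return min(dist[outcome] for dist in belief)

print(lower_prob(belief, RAIN))  # 0.1  -> "at least 10% confident it'll rain"
print(lower_prob(belief, DRY))   # 0.75 -> "at least 75% confident it won't"

# With a singleton set, the minimum is just the probability itself,
# so ordinary Bayesianism is recovered as the special case.
singleton = [{RAIN: 0.2, DRY: 0.8}]
print(lower_prob(singleton, RAIN))  # 0.2
```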
The case in which this seems most obviously useful to me is the adversarial one. Those offering bets should, if they’re rational, be systematically better informed about the relevant topics. So it seems to me I should have a range of probabilities within which the very fact that you’re offering the bet tells me you’re probably better informed than I am, and therefore I shouldn’t bet. More generally, though, I believe Infra-Bayesianism is intended to let agents simply not have opinions about every possible question they could be asked, only about those on which they actually have some relevant information.
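Here’s how that refuse-both-sides behaviour falls out of a worst-case (maximin) decision rule over the same toy set of distributions. (Again a sketch under my reading; the odds implying a 15% rain probability are my own arbitrary choice of a point inside the interval.)

```python
# Maximin betting over the same two-distribution toy belief:
# accept a bet only if its expected payoff is non-negative under
# EVERY distribution in the set.

RAIN, DRY = "rain", "dry"
belief = [{RAIN: 0.10, DRY: 0.90}, {RAIN: 0.25, DRY: 0.75}]

def worst_case_ev(belief, payoff):
    """payoff maps each outcome to the payoff of accepting the bet."""
    return min(sum(dist[o] * payoff[o] for o in dist) for dist in belief)

# A bookie offers odds implying P(rain) = 0.15, inside the agent's
# [10%, 25%] interval. Both sides of the bet have negative worst-case
# expected value, so the agent takes neither: no knife-edge.
q = 0.15
bet_on_rain = {RAIN: (1 - q) / q, DRY: -1.0}  # stake 1, win (1-q)/q on rain
bet_on_dry  = {RAIN: -1.0, DRY: q / (1 - q)}  # stake 1, win q/(1-q) on dry

print(worst_case_ev(belief, bet_on_rain))  # ~ -0.33 -> refuse
print(worst_case_ev(belief, bet_on_dry))   # ~ -0.12 -> refuse
```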
This seems approximately correct as the motivation, which IMO is expressible/cashable-out in several isomorphic ways. (In that framing, in Demiurgery, in distributions over game-tree branches, in expected utility maximinning...)