That said, if you have example problems where a logically omniscient Bayesian reasoner who incorporates all your implicit knowledge into their prior would get the wrong answers, those I want to see, because those do bear on the philosophical question that I currently see Bayesian probability theory as providing an answer to—and if there’s a chink in that armor, then I want to know :-)
It is well known where there might be chinks in the armor: what happens when two logically omniscient Bayesians sit down to play a game of poker? Bayesian game theory is still at a very early stage of development (in fact, I’m guessing it’s one of the things MIRI is working on), and there could be all kinds of paradoxes lurking in wait to supplement the ones we’ve already encountered (e.g., two-boxing).
Sure! I would like to clarify, though, that by “logically omniscient” I also meant “while being way larger than everything else in the universe.” I’m also readily willing to admit that Bayesian probability theory doesn’t get anywhere near solving decision theory; that’s an entirely different can of worms, where there’s still lots of work to be done. (Bayesian probability theory alone does not prescribe two-boxing, in fact; that requires the addition of some decision theory which tells you how to compute the consequences of actions given a probability distribution, and that computation is way outside the domain of Bayesian inference.)
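To gesture at that division of labor in code: Bayesian inference hands you a probability distribution, and the decision theory is the extra layer that turns it into a choice. Here’s a toy sketch; all the names and numbers are my invention, and note that how `prob(outcome, action)` gets computed is exactly the hard, contested part (causal vs. evidential consequences), which this glosses over.

```python
# Toy sketch: a decision theory layered on top of a probability distribution.
# The umbrella example and all payoffs are invented for illustration.

def best_action(actions, outcomes, prob, utility):
    """Pick the action maximizing sum over outcomes of P(o | a) * U(o, a).

    prob(o, a) -> P(o | a), supplied by your (contested!) theory of how
    actions bear on outcomes; utility(o, a) -> float.
    """
    def expected_utility(action):
        return sum(prob(outcome, action) * utility(outcome, action)
                   for outcome in outcomes)
    return max(actions, key=expected_utility)

actions = ["umbrella", "no umbrella"]
outcomes = ["rain", "no rain"]
prob = lambda o, a: 0.3 if o == "rain" else 0.7  # weather ignores our choice

def utility(o, a):
    wet = -10 if (o == "rain" and a == "no umbrella") else 0
    hassle = -1 if a == "umbrella" else 0
    return wet + hassle

print(best_action(actions, outcomes, prob, utility))  # -> "umbrella"
```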
Bayesian reasoning is an idealized method for building accurate world-models when you’re the biggest thing in the room; two large open problems are (a) modeling the world when you’re smaller than the universe and (b) computing the counterfactual consequences of actions from your world model. Bayesian probability theory sheds little light on either; nor is it intended to.
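As a concrete (toy) illustration of the “building accurate world-models” part, here’s a minimal sketch of a Bayesian update over a discrete hypothesis space; the coin-bias hypotheses and observations are my invention, just to show the mechanics:

```python
# Minimal sketch of Bayesian updating over a discrete hypothesis space.
# The hypotheses (coin biases) and observations are illustrative only.

def bayes_update(prior, likelihood, observation):
    """Return the posterior P(h | observation) for each hypothesis h."""
    unnormalized = {h: p * likelihood(h, observation) for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypotheses: the coin's bias toward heads is 0.3, 0.5, or 0.8.
prior = {0.3: 1 / 3, 0.5: 1 / 3, 0.8: 1 / 3}

# Likelihood of an observation ("H" or "T") given a bias hypothesis.
likelihood = lambda bias, obs: bias if obs == "H" else 1 - bias

posterior = prior
for obs in "HHTH":  # observe heads, heads, tails, heads
    posterior = bayes_update(posterior, likelihood, obs)

print(posterior)  # mass shifts toward the higher-bias hypotheses
```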
I personally don’t think it’s that useful to consider cases like “but what if there are two logically omniscient reasoners in the same room?” and then demand a coherent probability distribution. Nevertheless, you can do that, and we’ve recently solved that problem (in fact, Benya and Jessica Taylor will be presenting it at LORI V next week); the answer, assuming the usual decision-theoretic assumptions, is “they play Nash equilibria”, as you’d expect :-)
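To make “they play Nash equilibria” concrete for anyone who hasn’t seen it, here’s a toy sketch that finds the mixed equilibrium of a 2x2 zero-sum game via the usual indifference argument, with matching pennies as the example game (this is just an illustration, not the construction from the paper):

```python
# Sketch: mixed-strategy Nash equilibrium of a 2x2 zero-sum game, found by
# making the opponent indifferent between their two pure strategies.
# Matching pennies is the example; purely illustrative.

def equilibrium_2x2_zero_sum(A):
    """A[i][j] is the row player's payoff; the column player gets -A[i][j].
    Returns (p, q): the probability each player puts on their first strategy.
    Assumes an interior mixed equilibrium exists (no dominant strategy)."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom  # row mix that makes the column player indifferent
    q = (d - b) / denom  # column mix that makes the row player indifferent
    return p, q

# Matching pennies: the row player wins on a match, loses on a mismatch.
A = [[1, -1],
     [-1, 1]]
print(equilibrium_2x2_zero_sum(A))  # (0.5, 0.5): both randomize evenly
```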
Great comment, mind if I quote you later on? :)
Cool, I will take a look at the paper!