This comment isn’t directly related to the OP, but lately my faith in Bayesian probability theory as an ideal for reasoning (under logical omniscience) has been dropping a bit, due to lack of progress on the problems of understanding what one’s ideal ultimate prior represents and how it ought to be constructed or derived. It seems like one way that Bayesian probability theory could ultimately fail to be a suitable ideal for reasoning is if those problems turn out to be unsolvable.

(See http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/ and http://lesswrong.com/lw/mln/aixi_can_be_arbitrarily_bad/ for more details about the kind of problems I’m talking about.)
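(To make concrete which object is at issue, here is a minimal sketch in standard notation, with H standing for a hypothesis and E for evidence; nothing here goes beyond the ordinary update rule:

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}$$

The update step is fully specified, but everything is anchored to the prior P(H), which the formalism itself leaves unconstrained; that unconstrained term is what the questions about what the ultimate prior represents, and how it ought to be constructed, are pointing at.)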
I’m not sure how this would be failing, except in the sense that we knew from the beginning that it would fail.
Any mathematical formalization is an imperfect expression of real life. And any formalization of anything, mathematical or not, is imperfect, since all words (including mathematical terms) are vague, without a fully precise meaning. (Either you define a word in terms of other words, which are themselves imprecise, or you define it by pointing at things or giving examples, which is not a precise way to define anything.)
Any mathematical formalization is an imperfect expression of real life.
I think there may have been a misunderstanding here. When So8res and I used the word “ideal” we meant “normative ideal”: something we should try to approximate in order to be more rational, or at least something that represents progress towards figuring out how a more rational version of ourselves would reason, not just a simplified mathematical formalization of something in real life. So Bayesian probability theory might qualify as a reasonable formalization of real-world reasoning, but still fail to be a normative ideal if it doesn’t represent progress towards figuring out how people ideally ought to reason.
It could represent progress towards figuring out how people ought to reason, in the sense of leaving us better off than we were before, without giving a perfect answer that resolves, completely and forever, how people ought to reason. And it seems to me that it does leave us better off, in the way So8res was talking about, by at least giving us an analogy to compare our reasoning to.
Yeah, I also have nontrivial odds on “something UDTish is more fundamental than Bayesian inference” / “there are no probabilities, only values” these days :-)
Sorry, I meant to imply that my faith in UDT has been dropping a bit too, due to lack of progress on the question of whether the UDT-equivalent of the Bayesian prior just represents subjective values or should be based on something objective, like whether some universes have more existence than others (i.e., the “reality fluid” view), and also due to lack of progress on creating a normative ideal for such a “prior”. (There seems to have been essentially no progress on these questions since “What Are Probabilities, Anyway?” was written about six years ago.)
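(As a rough sketch of where this prior-like object sits, with the notation here assumed for illustration rather than drawn from any particular UDT write-up: the agent chooses a policy to maximize a weighted sum of utilities over possible worlds,

$$\pi^* = \arg\max_{\pi} \sum_i w_i\, U_i(\pi),$$

and the question above is whether the weights w_i encode how much we value or care about each world, or how much each world objectively “exists”.)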
I mostly agree here, though I’m probably less perturbed by the six year time gap. It seems to me like most of the effort in this space has been going towards figuring out how to handle logical uncertainty and logical counterfactuals (with some reason to believe that answers will bear on the question of how to generate priors), with comparatively little work going into things like naturalized induction that attack the problem of priors more directly.
Can you say any more about alternatives you’ve been considering? I can easily imagine a case where we look back and say “actually the entire problem was about generating a prior-like-thingy”, but I have a harder time visualizing different tacks altogether (ones that don’t eventually have some step that reads “then treat observations like Bayesian evidence”).
Can you say any more about alternatives you’ve been considering?
Not much to say, unfortunately. I tried looking at some frequentist ideas for inspiration, but didn’t find anything that seemed to have much bearing on the kind of philosophical problems we’re trying to solve here.