(To sidestep the issue of how to make sense of probabilities over observations given anthropic-reasoning problems, I’ll assume UDT and just talk about priors on universes instead.)
I’m still torn between the view that the universe must follow a simplicity prior in some objective sense (and that there might be a way to find out what that objective prior is), and the view that a prior is just a way of specifying how much we care about one universe versus another. (See the third and fourth bullet points in What Are Probabilities, Anyway?)
If it’s the former, then any notion of simplicity that we currently have might be wrong in the sense of not matching up with the objective “reality fluid”. If the latter, a notion of simplicity might be wrong in the sense of not reflecting how much we actually care about one universe versus another. In either case, it seems unlikely that any specific notion of simplicity, based on a model of computation (e.g., universal Turing machine) that was chosen because it happens to be useful for reasoning about practical computers, would turn out to be right.
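(For concreteness, the standard UTM-based formalization of a simplicity prior is a Solomonoff-style universal prior, roughly

$$P(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-\ell(p)},$$

where $U$ is a universal prefix Turing machine, the sum ranges over programs $p$ whose output is $x$, and $\ell(p)$ is the length of $p$ in bits. The choice of the reference machine $U$ is exactly the pragmatically motivated ingredient being questioned here.)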
Does that answer your question?
It doesn’t seem to me that these are necessarily at odds—this ties in with the “is value/morality simple?” question, no?
(Optional conceptual soup: And obviously some mixture of the two is possible, and perhaps a mixture that changes over time/causality as things (decision procedures) get more timeless over time (evolution being timeful, humans less so, AIs even less so). One of my models of morality represents this as deals between algorithms with increasingly timeless discount rates, from hyperbolic to exponential to none, over time, where each algorithm gets satisfied but the more timeless algorithms win out over time by their nature (and are progressively replaced by ever more timeless algorithms until you have a completely acausal-trade-like situation). This highlights interesting parallels between discounting and cooperation—which can be thought of as symmetries between time and space, or future selves and present compatriots—and is generally a pretty useful perspective on the moral universe. That’s the conclusion I have cached anyway. Ainslie’s book “Breakdown of Will” provides some relevant background concepts.) (ETA: /Insert some sheer nonsense about the ergodic hypothesis and generally making analogies to statistical mechanics / probability theory / quantum information theory, simply because, well, at this point why not? I suspect that reading tons of academic paper abstracts without taking time to really understand any of them is a rather “attractive” form of pure Platonic wireheading.)
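A minimal sketch of the progression from hyperbolic to exponential to no discounting, assuming only the textbook discount functions (every amount, delay, and rate constant below is made up for illustration): the hyperbolic agent reverses its preference between a smaller-sooner and a larger-later reward as the sooner one draws near, while the exponential and undiscounted agents choose consistently, which is one way to cash out “more timeless”.

```python
import math

# Textbook discount functions; k and r are arbitrary illustrative constants.
def hyperbolic(amount, delay, k=1.0):
    return amount / (1.0 + k * delay)

def exponential(amount, delay, r=0.1):
    return amount * math.exp(-r * delay)

def timeless(amount, delay):
    return amount  # no discounting: only the amount matters

SOONER = (10.0, 6.0)    # (reward size, time at which it becomes available)
LATER = (30.0, 12.0)    # larger reward, available later; values are made up

for now in (0.0, 5.0):  # evaluate the same choice at two points in time
    for name, value in (("hyperbolic", hyperbolic),
                        ("exponential", exponential),
                        ("timeless", timeless)):
        v_soon = value(SOONER[0], SOONER[1] - now)
        v_late = value(LATER[0], LATER[1] - now)
        choice = "sooner" if v_soon > v_late else "later"
        print(f"t={now}: {name} prefers {choice}")
```

Only the hyperbolic agent switches from “later” at t=0 to “sooner” at t=5; the exponential and undiscounted agents pick “later” both times, i.e. their choices don’t depend on when they are asked.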