This seems like a good overview of UDASSA and its problems. One consideration you didn’t touch on is that the universal distribution is, in some sense, a good approximation of any computable distribution. (Apparently that’s what the “universal” in UD means, as opposed to meaning that it’s based on a universal Turing machine.) So an alternative way to look at the UD is that we can use it as a temporary stand-in until we figure out what the actually correct prior is, or what the real distribution of “reality-fluid” is, or how we should really distribute our “care” over the infinite number of “individuals” in the multiverse. This is how I’m mostly viewing UDASSA now (though I haven’t really talked about it except in scattered comments).
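For what it’s worth, the dominance property behind that sense of “universal” can be illustrated with a toy sketch. This is a hypothetical Python illustration, not anything from the original discussion: a tiny two-element hypothesis class stands in for the class of all computable semimeasures, and hand-picked weights stand in for the 2^-K(μ) prior weights. The point it shows is that a weighted mixture dominates every component up to a multiplicative constant, which is why the mixture’s predictions can’t do much worse than the best component’s.

```python
from itertools import product

def bernoulli(p):
    """Distribution over binary strings: i.i.d. coin flips with bias p."""
    def prob(s):
        result = 1.0
        for bit in s:
            result *= p if bit == "1" else 1.0 - p
        return result
    return prob

# Toy "hypothesis class" standing in for the class of all computable
# distributions (names and weights are illustrative choices, not canonical).
components = {"fair": bernoulli(0.5), "biased": bernoulli(0.9)}
weights = {"fair": 0.5, "biased": 0.5}  # stand-ins for 2^-K(mu)

def mixture(s):
    """Weighted mixture over the class -- the analogue of the UD here."""
    return sum(weights[k] * components[k](s) for k in components)

# Dominance: mixture(s) >= weight_k * components[k](s) for every string s,
# because dropping the other nonnegative terms can only decrease the sum.
for s in ("".join(bits) for bits in product("01", repeat=6)):
    for k in components:
        assert mixture(s) >= weights[k] * components[k](s)
```

In the real theorem the class is countably infinite and the mixture is the universal semimeasure, but the dominance argument is the same shape: the universal distribution assigns each computable μ at least a constant fraction 2^-K(μ) of μ’s own probability to every string.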
I think the fact that UDASSA probably isn’t the final right answer to anthropics, together with the opportunity cost of investigating any object-level philosophical problem (cf. https://www.lesswrong.com/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy) and the slow progress of such investigation (applying effort at any current margin seems only to cause a net increase in open problems), explains a lot of why there isn’t much research or writing about UDASSA.
…the universal distribution is in some sense a good approximation of any computable distribution. (Apparently that’s what the “universal” in UD means, as opposed to meaning that it’s based on a universal Turing machine.)
This is a very interesting claim. The way you say it here suggests it’s a proven result, but neither of the links explains (to me) why this is true, or exactly what it means. Could you elaborate?
Something of this nature might well be implied by the Algorithmic Complexity article, but I don’t understand how, other than that the UD assigns P > 0 to every computable hypothesis consistent with the data, which seems weaker than what I think you’re claiming.