A model for your observations consists (informally) of a model of the universe plus coordinates within that universe that pinpoint your observations, at least in the semantics of Solomonoff induction. So in an infinite universe, most observations must be very complicated, since their coordinates are already quite complicated. Solomonoff induction thus naturally defines a roughly uniform measure over the observers in each possible universe, one that very slightly discounts observers as they get farther from distinguished landmarks. That slight discounting is what makes large universes unproblematic.
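To make the decomposition concrete, here is a rough sketch in my own notation (not anything from the linked post): the prior weight Solomonoff induction assigns to an observation sequence $x$ is roughly

$$P(x) \;\approx\; \sum_{U} 2^{-K(U)} \sum_{\ell} 2^{-K(\ell \mid U)} \,\big[\text{running } U \text{ and reading off location } \ell \text{ yields } x\big],$$

where $U$ ranges over universe programs and $\ell$ over the “coordinate” programs that pick an observer out of $U$. By the Kraft inequality for prefix-free codes, $\sum_{\ell} 2^{-K(\ell \mid U)} \le 1$, so each universe spreads at most its own prior weight $2^{-K(U)}$ across all the observers it contains; observers far from any distinguished landmark need longer $\ell$ and so get slightly less of it. That is the roughly uniform measure with slight discounting, and it is why an infinite universe does not make the total blow up.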
I wrote about these things at some point, here, though that was when I was just getting into the subject, and it now looks silly even to current me. But it’s still the only framework I know for reasoning about big universes, splitting brains, and the Born probabilities.
I get by with none...
Are you sure?
Consequentialist decision-making on “small” mathematical structures seems relatively less perplexing (though still far from entirely clear), but I’m very much confused about what happens when there are too “many” instances of the decision’s structure, or in the presence of observations. I can’t point to any specific “framework” that explains what’s going on, apart from the general hunch that understanding math better clarifies these things (and so far it does).
If X has a significant probability of existing, but you don’t know at all how to reason about X, how confident can you be that your inability to reason about X isn’t doing tremendous harm? (In this case, X = big universes, splitting brains, etc.)