Boxes proofed against all direct and indirect observation, the mere potential for observation mixed up with its concrete practicality, strictly-worse choices, morality… one would be hard-pressed to muddle your thought experiment more than that.
Let’s try to make it a little more straightforward: assume that there exists a certain amount of physical space which falls outside our past light cone. Do you think it is equally likely that it contains galaxies and that it contains unicorns? More importantly, do you think the preceding question means anything?
In my example, as in many others, morality/utility is necessary for reducing questions about “beliefs” to questions about decisions. (Similar to how adding payoffs to the Sleeping Beauty problem clarifies matters a lot, and how naively talking about probabilities in the Absent-Minded Driver introduces a time loop.) In your formulation I may legitimately withhold judgment about unicorns—say the question is as meaningless as asking whether integers “are” a subset of reals, or a distinct set—because it doesn’t affect my future utility either way. In my formulation you can’t wiggle out as easily.
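To make the Sleeping Beauty point concrete, here's a minimal simulation sketch (the $1-ticket framing and all numbers are mine, purely illustrative): offer Beauty a bet on tails at every awakening, and the break-even ticket price falls out of the payoffs with no need to argue about what her "credence" really is.

```python
# Toy Sleeping Beauty with payoffs: tails means two awakenings (Monday and
# Tuesday), heads means one. Beauty is offered a $1-if-tails ticket at every
# awakening; we estimate the fraction of awakenings on which the ticket pays.
import random

def simulate(trials=100_000, seed=0):
    rng = random.Random(seed)
    awakenings = 0        # total times Beauty is woken and offered the bet
    tails_awakenings = 0  # awakenings on which the coin actually came up tails
    for _ in range(trials):
        tails = rng.random() < 0.5
        wakes = 2 if tails else 1
        awakenings += wakes
        if tails:
            tails_awakenings += wakes
    return tails_awakenings / awakenings

print(simulate())  # ~0.667: the per-awakening bet breaks even at 2/3, not 1/2
```

The payoff structure settles the question that bare talk of "probability" leaves ambiguous: per awakening, the fair price is 2/3, however you feel about the halfer/thirder dispute.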
[Edited out—I need to think this over a little longer]
I thought about your questions some more, and stumbled upon a perspective that makes them all meaningful—yes, even the one about defining the real numbers. You have to imagine yourself living in a sort of “Solomonoff multiverse” that runs a weighted mix of all possible programs, and act so as to maximize your expected utility over that whole multiverse. Never mind “truth” or “degrees of belief” at all! If Omega comes to you and asks whether an inaccessible region of space contains galaxies or unicorns, bravely answer “galaxies” because it wins you more cookies weighted by universe-weight—simpler programs have more of it. This seems to be the coherent position that many commenters are groping toward…
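Here's a toy sketch of what I mean (the description lengths and the cookie payoff are made-up numbers, purely for illustration): weight each hypothesis by 2^-(program length), as in the Solomonoff prior, and answer with whatever maximizes weight-averaged cookies.

```python
# Toy "Solomonoff multiverse" decision: each hypothesis about the unobservable
# region gets prior weight proportional to 2^-(description length in bits),
# and we answer Omega with whatever maximizes expected cookies under that prior.

# Hypothetical complexities: "physics continues as usual" is a shorter program
# than "physics, plus unicorns out there", so it gets exponentially more weight.
hypotheses = {
    "galaxies": 10,  # assumed description length in bits
    "unicorns": 50,  # assumed description length in bits
}

weights = {h: 2.0 ** -k for h, k in hypotheses.items()}
total = sum(weights.values())

def expected_cookies(answer):
    # One cookie in every universe where the answer turns out right,
    # averaged over normalized universe-weight.
    return sum(w for h, w in weights.items() if h == answer) / total

for answer in hypotheses:
    print(answer, expected_cookies(answer))
# "galaxies" wins: nearly all the prior weight sits on the simpler program.
```

Note there is no "degree of belief" anywhere in the sketch—just weights over programs and a payoff rule—which is the whole point.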