If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box: do you step into the box then?
This would depend on my level of trust in Omega (why would I believe it? Because Omega said so. Why believe Omega? That depends on how convincingly Omega has demonstrated near-omniscience and honesty). In the absence of Omega telling me so, I’m rather skeptical of the idea.
For my part, it’s difficult for me to imagine a set of observations I could make that would provide sufficient evidence to justify belief in many of the kinds of statements that get tossed around in these sorts of discussions. I generally just assume Omega adjusts my priors directly.