I’ve always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it’s about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge.
Sounds right, if you unpack “implied by its state of knowledge” to not mean “only consider possible worlds consistent with observations”. Basically, anthropic reasoning is about logical (even agent-provable) uncertainty, and for the same reason it is very sensitive to the problem statement and hard to get right, given that we have no theory anywhere near adequate for understanding decision-making under logical uncertainty.
(This is also a way of explaining away the whole anthropic reasoning question, by pointing out that nothing will be left to understand once you can make the logically correlated decisions correctly.)
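To make “locating correlated decision algorithms” concrete, here is a minimal sketch (a hypothetical toy setup, not anything from the discussion above): an agent in a twin Prisoner's Dilemma notices that its copy runs the same algorithm, so the two decisions are logically correlated and it only needs to evaluate the outcomes where both copies act identically.

```python
# Toy twin Prisoner's Dilemma (illustrative payoffs, assumed for this sketch).
# (my action, twin's action) -> my payoff
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def correlated_decision():
    # Because the twin runs the same algorithm, the agent reasons as if
    # it were choosing for both copies at once: only the diagonal
    # outcomes (C, C) and (D, D) are reachable consequences of its choice.
    return max(["C", "D"], key=lambda action: PAYOFF[(action, action)])

print(correlated_decision())  # -> "C"
```

The whole anthropic flavor lives in the step where the agent decides which other computations count as correlated copies of itself; the payoff comparison afterward is trivial.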