Consciousness is really just a name for having a model of yourself that you can reflect on and act on, plus a grab-bag of other confused interpretations that don't add much.
To do anthropic reasoning you need a simple model of yourself that you can reason about.
Machines can do this too, of course, without much difficulty. On the definition above, though, that typically makes them conscious. Perhaps we could imagine a machine performing anthropic reasoning while dreaming, i.e. while most of its actuators are disabled and it would not normally be regarded as conscious. But then how would we know about its conclusions?
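To make "machines can do this" concrete, here is a minimal sketch of an anthropic update, with the setup, names, and numbers all my own illustrative assumptions rather than anything from the text. The machine's "simple model of itself" reduces to the statement "I am one of the observers that exist under whichever hypothesis is true," and the update shown is the self-indication assumption (SIA), which weights each hypothesis by how many such observers it contains.

```python
# Toy anthropic reasoning via the self-indication assumption (SIA).
# Everything here (hypothesis names, priors, observer counts) is an
# illustrative assumption, not taken from the surrounding text.

def anthropic_update(priors: dict[str, float],
                     observer_counts: dict[str, int]) -> dict[str, float]:
    """SIA update: P(h | "I exist") is proportional to
    P(h) * (number of observers like me under h)."""
    weights = {h: priors[h] * observer_counts[h] for h in priors}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Two hypotheses, equally likely a priori; the "big" world contains
# 1000 observers matching the agent's self-model, the "small" world 10.
posterior = anthropic_update(
    priors={"big": 0.5, "small": 0.5},
    observer_counts={"big": 1000, "small": 10},
)
print(posterior)  # {'big': 0.990..., 'small': 0.009...}
```

The point of the sketch is only that the computation itself is trivial; whether a machine that runs it thereby counts as conscious is exactly the question at issue.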