I read this and told myself that it only takes five minutes to have an insight. Five minutes later, here’s what I’m thinking:
Anthropic reasoning is confusing because it treats consciousness as a primitive. By doing so, we’re committing LW’s ultimate no-no: assuming an ontologically fundamental mental state. We need to find a way to reformulate anthropic reasoning in terms of Solomonoff induction. If we can successfully do so, the paradox will dissolve.
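Roughly, such a reformulation would start from the universal prior in its standard form (nothing here is specific to anthropics yet): the prior probability of an observation string x is the total weight of the programs that produce it on a universal prefix machine U,

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

where U(p) = x* means the output of program p begins with x, and |p| is the program’s length in bits. The hope would be that, once an agent’s anthropic situation is encoded as part of the data being predicted, conditioning M on that data handles the “which observer am I?” update automatically, with no ontologically basic mental state anywhere in the formalism.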
Anthropic reasoning is confusing—probably because we did not have to do much of it in our ancestral environment.
I don’t think you can argue it treats consciousness as a primitive, though. Anthropic reasoning is challenging—but not so tricky that machines can’t do it.
It involves calculating a ‘correct measure’ of how many partial duplicates of a computation exist:
www.nickbostrom.com/papers/experience.pdf
Anthropics does involve magical categories.
Right—but that’s “Arthur C. Clarke-style magic”—stuff that is complicated and difficult—not the type of magic associated with mystical mumbo-jumbo.
We can live with some of the former type of magic—and it might even spice things up a bit.
I fail to see how Solomonoff induction can reduce ontologically basic mental states.