The trouble is, anthropic evidence works. I wish it didn’t, because I wish the nuclear arms race hadn’t come so close to killing us (and may well have killed others), and had instead been prevented by some sort of hard-to-observe cooperation.
But it works. Witness the Sleeping Beauty Problem, for example. Or the Sailor’s Child, a modified version of Sleeping Beauty that I could go outside and play right now if I wished.
The winning solution, the one that gives the right answer, is to use “anthropic” evidence.
If this confuses you, then I (seriously) suggest you re-examine your understanding of how to perform anthropic calculations.
In fact, what you are describing is not “anthropic” evidence, but just ordinary evidence.
I (think I) know that George VI had five siblings (because you told me so). That observation is more likely in a world where he did have five siblings (because I guessed your line of argument pretty early in the post, so I know you have no reason to trick me). Therefore, updating on this observation, it is probable that George VI had five siblings.
Is this an explanation? Sort of.
There might be some special reason why George VI had only five siblings—maybe his parents decided to stop after five, say.
More likely, the true “explanation” is that he just happened to have five siblings, randomly. It wasn’t unusually probable; it just happened by chance to be that number.
And if that is the true explanation, then that is what I desire to believe.
I don’t understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn’t feel like what I’d call ‘using anthropic evidence’. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)
Can you give a concrete example of a case where you see anthropic reasoning winning (or where it would win if we performed a simple experiment)? If anything, experiments seem like they would highlight ambiguities that naïve anthropic reasoning misses; if I try to write ‘halfer’ and ‘thirder’ computer programs for Sleeping Beauty to see which wins more, I run into the problem of defining the payoffs and thereby rederive the dissolution ata gave in the linked post.
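For instance, a toy simulation along these lines (just one possible way to set it up; the two payoff conventions below are my choice of illustration, not anything forced by the problem) makes that dependence on the payoff definition explicit:

```python
import math
import random

# A minimal sketch (my framing, not anything from the linked post): simulate
# Sleeping Beauty and score a fixed credence-in-heads p under two payoff
# conventions, to show that which answer "wins" is fixed by how the payoffs
# are defined rather than by the experiment alone.

def average_score(p, n_trials=10_000, per_awakening=True, seed=0):
    """Average log score of always answering credence `p` for heads.

    per_awakening=True  -> every interview is scored and the scores summed
                           (the 'thirder' payoff structure).
    per_awakening=False -> each experiment is scored exactly once, however
                           many interviews it contains (the 'halfer' payoff).
    """
    rng = random.Random(seed)  # fixed seed: same coin flips for every p
    total = 0.0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        # Heads: one awakening (Monday). Tails: two awakenings (Mon and Tue).
        n_awakenings = 1 if heads else 2
        score = math.log(p if heads else 1 - p)  # log score of the answer given
        total += score * (n_awakenings if per_awakening else 1)
    return total / n_trials

if __name__ == "__main__":
    candidates = [i / 100 for i in range(1, 100)]
    for label, per_awakening in [("per-awakening ('thirder')", True),
                                 ("per-experiment ('halfer')", False)]:
        best_p = max(candidates,
                     key=lambda p: average_score(p, per_awakening=per_awakening))
        print(f"{label}: best credence in heads ~ {best_p:.2f}")
```

Run as written, the per-awakening scoring rewards answering about 1/3 and the per-experiment scoring rewards about 1/2, which is exactly the point: the experiment alone doesn’t pick a winner until you fix the payoffs.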
OK, well by analogy, what’s the “payoff structure” for nuclear anthropics?
Obviously, we can’t prevent it after the fact. The payoff we get for being right is in the form of information: a better model of the world.
It isn’t perfectly analogous, but it seems to me that “be right” is most analogous to the Thirder payoff matrix for Sleeping-Beauty-like problems.
I’m not sure if it’s because I’m Confused, but I’m struggling to understand if you are disagreeing, or if so, where your disagreement lies and how the parent comment in particular relates to that disagreement/the great-grandparent. I have a hunch that being more concrete and giving specific, minimally-abstract examples would help in this case.
I’m saying that if Sleeping Beauty’s goal is to better understand the world, by performing a Bayesian update on evidence, then I think this is a form of “payoff” that gives Thirder results.
From If a tree falls on Sleeping Beauty...:
Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized after the experiment.
In this case it is optimal to bet 1⁄3 that the coin came up heads, 2⁄3 that it came up tails: [snip table]
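For concreteness, here is the standard expected-score calculation behind that result (my own reconstruction, not part of the quoted post): Beauty answers credence p for heads at every interview, heads yields one scored interview, tails yields two, and the log scores are summed.

```latex
% Expected total log score when every interview is scored and summed:
\[
  S(p) = \tfrac{1}{2}\,\ln p + \tfrac{1}{2}\cdot 2\ln(1-p)
       = \tfrac{1}{2}\,\ln p + \ln(1-p)
\]
\[
  S'(p) = \frac{1}{2p} - \frac{1}{1-p} = 0
  \;\Longrightarrow\; 1 - p = 2p
  \;\Longrightarrow\; p = \tfrac{1}{3}
\]
% If instead each experiment were scored only once (a "halfer" payoff),
% S(p) = (1/2)\ln p + (1/2)\ln(1-p), which is maximized at p = 1/2.
```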