I don’t understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn’t feel like what I’d call ‘using anthropic evidence’. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)
Can you give a concrete example of a case where anthropic reasoning wins (or would win if we performed a simple experiment)? If anything, experiments seem like they would highlight ambiguities that naïve anthropic reasoning misses; if I try to write ‘halfer’ and ‘thirder’ computer programs for Sleeping Beauty to see which wins more, I run into the problem of defining the payoffs and thereby rederive the dissolution ata gave in the linked post.
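To make that concrete, here is a minimal sketch of where the payoff ambiguity bites (a toy setup of my own, with made-up names like `average_score`, not anything from the linked post): the very same log-scoring ‘experiment’ rewards the thirder credence if Beauty is scored at every awakening, and the halfer credence if she is scored once per experiment.

```python
import math
import random

def average_score(credence_heads, per_awakening=True, n=100_000):
    """Average log score for always reporting the same credence in heads.

    A fair coin is flipped; Beauty is interviewed once on heads and twice
    on tails. `per_awakening` decides whether every interview is scored
    (thirder-friendly) or the experiment is scored once (halfer-friendly).
    """
    total = 0.0
    for _ in range(n):
        heads = random.random() < 0.5
        score = math.log(credence_heads if heads else 1.0 - credence_heads)
        interviews = 1 if heads else 2
        total += score * (interviews if per_awakening else 1)
    return total / n

# Scored per awakening: a credence of 1/3 beats 1/2.
print(average_score(1/3, per_awakening=True), average_score(1/2, per_awakening=True))
# Scored per experiment: a credence of 1/2 beats 1/3.
print(average_score(1/3, per_awakening=False), average_score(1/2, per_awakening=False))
```

Neither program is ‘the’ right one; choosing between them is exactly the payoff question the dissolution turns on.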
OK, well by analogy, what’s the “payoff structure” for nuclear anthropics?
Obviously, we can’t prevent it after the fact. The payoff we get for being right is in the form of information: a better model of the world.
It isn’t a perfect analogy, but it seems to me that ‘being right’ corresponds most closely to the Thirder payoff matrix for Sleeping-Beauty-like problems.
I’m not sure whether it’s because I’m Confused, but I’m struggling to tell whether you’re disagreeing and, if so, where your disagreement lies and how the parent comment in particular relates to that disagreement and to the great-grandparent. I have a hunch that being more concrete and giving specific, minimally abstract examples would help in this case.
I’m saying that if Sleeping Beauty’s goal is to better understand the world by performing a Bayesian update on the evidence, then that goal is itself a form of “payoff”, and it is one that gives Thirder results.
From If a tree falls on Sleeping Beauty...:

Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized after the experiment.
In this case it is optimal to bet 1⁄3 that the coin came up heads, 2⁄3 that it came up tails: [snip table]
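As a sanity check on that claim (my own sketch, not from the post): with a fair coin, one scored interview on heads and two on tails, and the same credence p reported at every interview, the expected log score per experiment is 0.5·log(p) + log(1−p), which is maximized at p = 1/3.

```python
import math

# Expected log score per experiment for a fixed reported credence p in heads:
# heads (prob 1/2) -> one scored interview; tails (prob 1/2) -> two.
def expected_log_score(p):
    return 0.5 * math.log(p) + 0.5 * 2 * math.log(1.0 - p)

# Analytically: d/dp [0.5*ln(p) + ln(1 - p)] = 0.5/p - 1/(1 - p) = 0  =>  p = 1/3.
candidates = [i / 1000 for i in range(1, 1000)]
print(max(candidates, key=expected_log_score))  # 0.333
```

This is the per-awakening (Thirder) payoff structure made explicit; score the experiment only once and the optimum moves back to 1/2.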