Being new to this, I have no problem asking naïve questions, so: why does Sleeping Beauty need to have any “credence” at all? She’s armed with the facts of the experiment and can make decisions based solely on those; why does anyone suppose that she forms some “credence” as a proxy for the facts?
I'd just like to add that I found your website very clear and, in parts, quite compelling. Thank you.
I think your question is not naive at all. Stuart_Armstrong argued for a similar point here: that anthropic questions should be about decision-making rather than probability assignment. However, I do remember a LessWrong post in which he said he had modified his position, but I couldn't find that post just now.
That said, I think ignoring probability in anthropics is not the way to go. We generally regard probability as the logical basis of rational decision-making, and there is no good reason why anthropic problems should be different. In my opinion, focusing on decision-making does mitigate one problem: it forces us to state the decision objective as a premise. Halfers and Thirders can then each come up with objectives reflecting their own answers, which sidesteps the question of which objective reflects the correct probability. By perspective-based reasoning, I think the correct objective should be a simple selfish goal, where the self is primitively identified by its immediacy to subjective experience (i.e. the perspective center). And for some paradoxes, such as the doomsday argument and the presumptuous philosopher, converting them into decision-making problems seems rather strange: I just want to know whether their conclusions are right or wrong, so why ask whether I care about the welfare of all potentially existing humans?
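To make the point about objectives concrete, here is a minimal sketch of my own (not from the thread) that scores the same fair coin under two different rules: once per awakening and once per experiment. The ~1/3 and ~1/2 frequencies that fall out are just the Thirder and Halfer answers emerging from two different scoring objectives; the function names and counts are purely illustrative.

```python
# Sketch: the "correct" heads frequency in Sleeping Beauty depends on the
# scoring rule. Heads -> one awakening, Tails -> two awakenings.
import random

def simulate(n_experiments=100_000, seed=0):
    rng = random.Random(seed)
    per_awakening_heads = 0   # heads counted once per awakening
    per_awakening_total = 0   # total number of awakenings
    per_experiment_heads = 0  # heads counted once per experiment
    for _ in range(n_experiments):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        per_awakening_total += awakenings
        per_awakening_heads += awakenings if heads else 0
        per_experiment_heads += 1 if heads else 0
    return (per_awakening_heads / per_awakening_total,   # "Thirder" objective
            per_experiment_heads / n_experiments)        # "Halfer" objective

thirder_rate, halfer_rate = simulate()
print(f"Heads frequency per awakening:  {thirder_rate:.3f}")   # ~0.333
print(f"Heads frequency per experiment: {halfer_rate:.3f}")    # ~0.500
```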
Thank you for the quick reply.
I would agree that having a clear decision objective is important. I would go further: without an objective, why should anyone care how the agent (who, by definition, takes action) feels about their circumstances? I note from your final sentence that you see things differently, but I don't have a killer argument to the contrary.
I can see the need for subjective probability, but only in model selection. Thereafter, you’re working to find a strategy maximising the expected value of an objective function. I recorded my thoughts here.