An obvious way to do so is to put a hazard sign on “probability” and just not use it, without putting resources into figuring out what “probability” SB should name, isn’t it?
It’s an obvious thing to do when dealing with similarity clusters poorly defined in natural language. Not so much when we are talking about a logically pinpointed mathematical concept which we know is crucial for epistemology.
(And now I realize a possible reason why you’re arguing to keep the term “probability” well-defined for such scenarios: so that people in ~anthropic settings can tell you their probability estimates and you, as an observer, could update on that information.)
It’s not just about anthropic scenarios, and not just about me being able to understand other people. It’s about the general truth-preserving mechanism of logical and mathematical reasoning. When people just use different definitions, this is annoying but fine. But when they use different definitions without realizing that these definitions are different and, moreover, insist that it’s you who is making a mistake, then we have an actual disagreement about math, which will produce more confusion along the way. Anthropic scenarios are just the ones where this confusion is noticeable.
As for why I believe probability theory to be useful in life despite the fact that sometimes different tools need to be used
What exactly do you mean by “different tools need to be used”? Can you give me an example?
I mean that Beauty should maintain a full model of the experiment, and use decision theory as well as probability theory (if the latter is even useful, which it admittedly seems to be). If she didn’t keep track of the full setup but only of “a fair coin was flipped, so the odds are 1:1”, she would predictably lose when betting on the coin outcome.
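For illustration, here’s a minimal simulation sketch (the 1:1 per-awakening bet and the unit stakes are my assumptions, just to make “predictably lose” concrete): a Beauty who only remembers “fair coin, so 1:1 is fine” and accepts an even-odds bet on Heads at every awakening loses on average, because Tails yields two awakenings and hence two losing bets.

    import random

    def average_per_experiment(n_trials=100_000, seed=0):
        # Hypothetical stakes: +1 on Heads, -1 on Tails,
        # settled once per awakening (two awakenings on Tails).
        rng = random.Random(seed)
        total = 0
        for _ in range(n_trials):
            heads = rng.random() < 0.5        # fair coin
            awakenings = 1 if heads else 2    # one awakening on Heads, two on Tails
            total += awakenings * (1 if heads else -1)
        return total / n_trials

    print(average_per_experiment())  # about -0.5 per experiment: 0.5*(+1) + 0.5*(-2)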
Also, I’ve minted another “paradox” version. I can predict you’ll take issue with one of the formulations in it, but what do you think about it?
A fair coin is flipped, hidden from you.
On Heads, you’re woken up on Monday and asked “what credence do you have that the coin landed Heads?”; on Tuesday, you’re let go.
If the coin landed Tails, you’re woken up on Monday and likewise asked “what credence do you have that the coin landed Heads?”; then, with no memory erasure, you’re woken up on Tuesday, and the experimenter says to you: “Name your credence that the coin landed Heads, but you must name the exact same number as yesterday”. Afterwards, you’re let go.
If you don’t follow the experimental protocol, you lose, or lose out on, some reward.
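(A minimal sketch of the protocol, with day/outcome labels that are mine, just to enumerate who is asked when; note that being asked on Tuesday at all is only possible under Tails:)

    from fractions import Fraction

    # Outcome-level model of the protocol above; labels are illustrative.
    questions = {
        "Heads": ["Monday"],             # asked once; released on Tuesday
        "Tails": ["Monday", "Tuesday"],  # asked twice; Tuesday must repeat Monday's number
    }

    # Outcomes under which a Tuesday question happens at all:
    tuesday = [coin for coin, days in questions.items() if "Tuesday" in days]
    print(Fraction(tuesday.count("Heads"), len(tuesday)))  # prints 0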
I suppose the participant is just supposed to lie about their credence here in order to “win”.

On Tuesday your credence in Heads is supposed to be 0, but saying the true value would go against the experimental protocol, unless you also said that your credence is 0 on Monday, which would also be a lie.

I don’t understand this formulation. If Beauty always says that the probability of Heads is 1⁄7, does she win? Whatever “win” means...
She certainly gets a reward for following the experimental protocol, but beyond that… I concur that there’s a problem, and I have the same issue with the standard formulation asking for a probability.
In particular, pushing the problem out to morality, asking “what should Sleeping Beauty answer so that she doesn’t feel as if she’s lying”, doesn’t solve anything either. Rather, it feels like asking “is the continuum hypothesis true?” while providing only the options ‘true’ and ‘false’, when it’s actually independent of the ZFC axioms (adding either it or its negation produces a different model, neither of which is proven to be self-contradictory).
P.S. One more analogue: there’s a field, and some people (experimenters) are asking whether it rained recently, with the clear intent to walk through the field if it didn’t; you know it didn’t rain, but there are mines all over the field. I argue you should mention the mines first (“that probability, which by the way will be 1⁄2, can be found out and conforms to epistemology, but isn’t directly usable anywhere”) before saying whether there was rain.