You know, it’s actually possible for a rational person to be unable to give consistent answers to de Finetti’s choice under certain circumstances. When the person offering the bet is a semi-rational person who wants to win money and who might have unknown-to-me information, that’s evidence in favor of the position they’re offering to take. Because I should update in the direction of their implied beliefs no matter which side of the bet they offered me, there will be a range around my own subjective probability in which I won’t want to take any bet.
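To make the no-bet interval concrete, here's a minimal sketch. The fixed `shift` of 0.1 is an illustrative assumption (a real update would depend on how informed you think the offerer is); the point is only that updating toward the offerer's side, whichever side they take, leaves a band around your own probability where you refuse both directions of an even-odds bet.

```python
def accept_bet(my_prob, side_offered_to_me, shift=0.1):
    """Should I accept the offered side of an even-odds bet on an event?

    Illustrative assumption: the offerer's willingness to hold the
    opposite side is worth a fixed `shift` of probability toward
    their side, so I evaluate the bet at my *posterior*, not my prior.
    """
    if side_offered_to_me == "yes":
        posterior = my_prob - shift  # offerer holds "no": evidence against
        return posterior > 0.5
    else:
        posterior = my_prob + shift  # offerer holds "yes": evidence for
        return posterior < 0.5

# With a subjective probability of 0.55, I decline BOTH sides:
print(accept_bet(0.55, "yes"))  # False: posterior 0.45 is below even odds
print(accept_bet(0.55, "no"))   # False: posterior 0.65 is above even odds
```

Only when my prior is outside the interval (0.4, 0.6) does one side survive the update, so within that band no consistent answer exists.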
Sure, when you’re 100% sure that the person offering the bet is a nerd who’s solely trying to honestly elicit some Bayesian subjective probability estimate, then you’re safe taking either side of the same probability bet. But I’ll bet your estimate of that likelihood is less than 100%.
I don’t see how this applies to ciphergoth’s example. In the example under consideration, the person offering you the bet cannot make money, and the person offered the bet cannot lose money. The question is, “For which of two events would you like to be paid some set amount of money, say $5, in case it occurs?” One of the events is that a fair coin flip comes up heads. The other is an ordinary one-off occurrence, like the election of Obama in 2012 or the sun exploding tomorrow.
The goal is to elicit the degree of belief that the person has in the one-off event. If the person takes the one-off event when given a choice like this, then we want to say (or de Finetti wanted to say, anyway) that the person’s prior is greater than 1⁄2. If the person says, “I don’t care, let me flip a coin,” like ciphergoth’s interlocutor did, then we want to say that the person has a prior equal to 1⁄2. There are still lots of problems, since (among other things) in the usual personalist story, degrees of belief have to be infinitely precise—corresponding to a single real number—and it is not clear that when a person says, “Oh, just flip a coin,” the person has a degree of belief equal to 1⁄2, as opposed to an interval-valued degree of belief centered on 1⁄2 or something like that.
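The elicitation logic above can be sketched in a few lines (numbers are illustrative, not from the discussion): since the payout is identical either way, a rational chooser's preference between the two tickets reveals whether their prior on the one-off event is above, below, or exactly at 1⁄2.

```python
def preferred_ticket(p_event, payout=5.0):
    """de Finetti-style choice: '$5 if the event occurs' vs.
    '$5 if a fair coin lands heads'. Comparing expected values
    shows which ticket a rational agent with prior p_event prefers."""
    ev_event = p_event * payout
    ev_coin = 0.5 * payout
    if ev_event > ev_coin:
        return "event"       # prior above 1/2
    if ev_event < ev_coin:
        return "coin"        # prior below 1/2
    return "indifferent"     # prior exactly 1/2: "just flip a coin"

print(preferred_ticket(0.8))  # "event"
print(preferred_ticket(0.5))  # "indifferent"
```

The interval-valued worry in the paragraph above is exactly that "indifferent" here is a single point, while a real person's "oh, just flip a coin" may cover a whole range of priors.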
But anyway, I don’t see how your point makes contact with ciphergoth’s.
For a rational person with infinite processing power, my point doesn’t apply. You can also neglect air resistance when determining the trajectory of a perfectly spherical cow in a vacuum.
For a person of limited intelligence (i.e. all of us), it’s typically necessary to pick easily-evaluated heuristics that can be used in place of detailed analysis of every decision. I last used my “people offering me free stuff out of nowhere are probably trying to scam me somehow” heuristic while opening mail a few days ago. If ciphergoth’s interlocutor had been subconsciously applying the same heuristic, then this time they missed a valuable opportunity for introspection. But it’s not immediately obvious that such false positives are worse than the increased risk of false negatives they would take on if they instead tried to outthink every “cannot lose” bet that came their way.
The person offering the bet still (presumably) wants to minimize their loss, so they would be more likely to offer it if the unknown occurrence was impossible than if it was certain.