Why can’t they admit they’re approximations to something else, rather than come up with this totally new, counter-intuitive epistemology where it’s not allowed to assign probabilities to fixed but unknown parameters?
Because they don’t accept the premises of Cox’s theorem—in particular, the one that says that the plausibility of a claim shall be represented by a single real number. I’m thinking of Deborah Mayo here (referenced upthread).
Mayo sees the process of science as one of probing a claim for errors by subjecting it to “severe” tests. Here the severity of a test (vis-a-vis a hypothesis) is the sampling probability that the hypothesis fails to pass the test given that the hypothesis does not, in fact, hold true. (Severity is calculated holding the data fixed and varying hypotheses.) This is a process-centred view of science: it sees good science as founded on methodologies that rarely permit false hypotheses to pass tests.
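To make that definition concrete, here’s a minimal numerical sketch (my own toy rendering, not Mayo’s notation) of the standard textbook case: a one-sided test about a normal mean with known sigma. The numbers are hypothetical, and `scipy` is assumed:

```python
# Severity for the claim "mu > mu1", holding the observed data fixed:
# the probability the test would have produced results according no
# better with the claim than what we actually saw, were the claim
# false (mu at the boundary value mu1).
from scipy.stats import norm

def severity(x_bar, mu1, sigma, n):
    """P(X_bar <= x_bar ; mu = mu1) for n draws from N(mu, sigma^2)."""
    se = sigma / n ** 0.5
    return norm.cdf((x_bar - mu1) / se)

# Hypothetical data: n = 100 observations, sigma = 2, observed mean 0.4.
print(severity(x_bar=0.4, mu1=0.0, sigma=2.0, n=100))  # ~0.977: "mu > 0" was severely probed
print(severity(x_bar=0.4, mu1=0.3, sigma=2.0, n=100))  # ~0.691: "mu > 0.3" much less so
```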
Her pithy slogan for the contrast between her view and Bayesian epistemology is “well-probed versus highly probable”. I expect that even if she were willing to offer betting odds on the truth of a given claim, she would still deny that her betting odds have any relevance to the process of providing a warrant for asserting the claim.
You know, it’s actually possible for a rational person to be unable to give consistent answers to de Finetti’s choice under certain circumstances. When the person offering the bet is a semi-rational person who wants to win money and who might have unknown-to-me information, that’s evidence in favor of the position they’re offering to take. Because I should update in the direction of their implied beliefs no matter which side of the bet they offered me, there will be a range around my own subjective probability in which I won’t want to take any bet.
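Here’s a toy model of that no-bet range (my own construction, purely illustrative): suppose that with probability r the offerer is informed and backs the side they know will win, and otherwise is a harmless elicitor who backs either side at random. Updating on the offer alone can push my posterior across 1⁄2 in whichever direction they take:

```python
def posterior_given_offer(p, r, offer_backs_event):
    """My updated P(event) after seeing which side the offerer backs:
    informed with probability r (backs the winning side), otherwise
    backs either side at random."""
    if offer_backs_event:
        like_true, like_false = r + (1 - r) / 2, (1 - r) / 2
    else:
        like_true, like_false = (1 - r) / 2, r + (1 - r) / 2
    return p * like_true / (p * like_true + (1 - p) * like_false)

p, r = 0.55, 0.3  # hypothetical: my prior for the event, P(offerer is informed)
print(posterior_given_offer(p, r, True))   # ~0.69: too high to bet against the event
print(posterior_given_offer(p, r, False))  # ~0.40: too low to bet on it
```

With a prior of 0.55 I refuse both sides of an even-money bet, so my observable betting behaviour looks “inconsistent” even though each refusal is the correct update.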
Sure, when you’re 100% sure that the person offering the bet is a nerd who’s solely trying to honestly elicit some Bayesian subjective probability estimate, then you’re safe taking either side of the same probability bet. But I’ll bet your estimate of that likelihood is less than 100%.
I don’t see how this applies to ciphergoth’s example. In the example under consideration, the person offering you the bet cannot make money, and the person offered the bet cannot lose money. The question is, “For which of two events would you like to be paid some set amount of money, say $5, in case it occurs?” One of the events is that a fair coin flip comes up heads. The other is an ordinary one-off occurrence, like the election of Obama in 2012 or the sun exploding tomorrow.
The goal is to elicit the degree of belief that the person has in the one-off event. If the person takes the one-off event when given a choice like this, then we want to say (or de Finetti wanted to say, anyway) that the person’s prior is greater than 1⁄2. If the person says, “I don’t care, let me flip a coin,” like ciphergoth’s interlocutor did, then we want to say that the person has a prior equal to 1⁄2. There are still lots of problems, since (among other things) in the usual personalist story, degrees of belief have to be infinitely precise—corresponding to a single real number—and it is not clear that when a person says, “Oh, just flip a coin,” the person has a degree of belief equal to 1⁄2, as opposed to an interval-valued degree of belief centered on 1⁄2 or something like that.
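For what it’s worth, the coin-flip question is just the first step of a bisection: replace the fair coin with reference lotteries of other known probabilities q and you can narrow the elicited degree of belief as far as the subject’s answers stay sharp. A sketch of that logic, with a hypothetical ask() stub standing in for the human subject:

```python
def elicit(ask, lo=0.0, hi=1.0, rounds=10):
    """Bisect on the reference-lottery probability q; ask(q) is True
    when the subject prefers "$5 if the event" to "$5 with chance q"."""
    for _ in range(rounds):
        q = (lo + hi) / 2      # round 1 is exactly the fair-coin question
        if ask(q):
            lo = q             # belief apparently above q
        else:
            hi = q             # belief apparently at or below q
    return (lo + hi) / 2

# Hypothetical subject with a sharp degree of belief of 0.7:
print(elicit(lambda q: 0.7 > q))   # converges to approximately 0.7
```

(If the subject answers “don’t care” for every q across some range, the procedure only pins down an interval, which is exactly the worry above.)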
But anyway, I don’t see how your point makes contact with ciphergoth’s.
For a rational person with infinite processing power, my point doesn’t apply. You can also neglect air resistance when determining the trajectory of a perfectly spherical cow in a vacuum.
For a person of limited intelligence (i.e. all of us), it’s typically necessary to pick easily-evaluated heuristics that can be used in place of detailed analysis of every decision. I last used my “people offering me free stuff out of nowhere are probably trying to scam me somehow” heuristic while opening mail a few days ago. If ciphergoth’s interlocutor had been subconsciously thinking the same way, then this time they missed a valuable opportunity for introspection, but it’s not immediately obvious that such false-positive mistakes are worse than the increased possibility of false negatives that would be created if they instead tried to outthink every “cannot lose” bet that comes their way.
The person offering the bet still (presumably) wants to minimize their loss, so they would be more likely to offer it if the unknown occurrence were impossible than if it were certain: their expected payout is $5 times the chance that whichever event gets chosen actually occurs, and that grows with the probability of the one-off event.
I’d expect Mayo to say something along the lines of (translating on the fly into LW-ese): preferences ought not to enter into the question of how best to establish map-territory correspondence.
We know, infer, accept, and detach from evidence, all kinds of claims without any inclination to add an additional quantity such as a degree of probability or belief arrived at via, and obeying, the formal probability calculus.
As far as I know, she is not familiar with Cox’s theorem at all, nor does she explicitly address the premise in question. I’ve been following her blog from the start, and I tried to get her to read about Cox’s theorem two or three times. I stopped after I read a post which made it clear to me that she thinks that encoding the plausibility of a claim with a single real number is not necessary—not useful, even—to construct an account of how science uses data to provide a warrant for a scientific claim. Unfortunately I don’t remember when I read the post…
Have you tried offering de Finetti’s choice to them? I had a go at one probability-resister here and basically they squirmed like a fish on a hook.
So, as to Savage’s theorem...?
I poked around, but couldn’t find anything where Mayo talked about Cox’s theorem and its premises. Did you have something particular in mind?
Ah, found it:
Thanks!