Hofstadter just gained a bunch of points with me.
Paul_Gowder
Ben: what kind of duties might there be other than moral ones?
Leo, hmm… I see the point, but it’s gotta be an error. It’s a straightforward instance of the genetic fallacy to reason from “our moral intuitions have biological origins” to “therefore, it makes no sense to speak of ‘moral duties.’” It might make no sense to speak of religious moral duties—but surely that’s because there’s no god, and not because the source of our moral intuitions is otherwise. The quoted sentence seems to equivocate between religious claims of moral duty—which was the topic of the rest of the surrounding paragraphs—and [deontological?] claims about moral duty generally.
Also, what is Harris’s quote supposed to mean? (About the moral duty to save children, that is. Not the god one, which is wholly unobjectionable.) I want to interpret it as some kind of skepticism about normative statements, but if that’s what he means, it’s very oddly expressed. Perhaps it’s supposed to be some conceptual analysis about “duty?”
I mean, one ought to understand a syllogism, just as one ought to save the drowning child… no?
Memo to Jaynes: please don’t generalize beyond statistics. Cough… mixed strategy equilibria in game theory.
Cyan, I’ve been mulling this over for the last 23 hours or so—and I think you’ve convinced me that the frequentist approach has worrisome elements of subjectivity too. Huh. Which doesn’t mean I’m comfortable with the whole priors business either. I’ll think about this some more. Thanks.
Cyan, that source is slightly more convincing.
Although I’m a little concerned that it, too, is attacking another strawman. At the beginning of chapter 37, it seems that the author just doesn’t understand what good researchers do. In the medical example given at the start of the chapter (458-462ish), many good researchers would use a one-sided test rather than a two-sided test (I would), which would better catch the weak relationship. One can also avoid false negatives by measuring the power of one’s test. MacKay also claims that “this answer does not say how much more effective A is than B.” But that’s just false: one can get an idea of the size of the effect either with sharper techniques (like doing a linear regression, getting beta values and calculating r-squared) or just by modifying one’s null hypothesis (i.e. demanding that a datum improve on control by X amount before it counts in favor of the alternative hypothesis).
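For the record, the one-sided point is easy to check. A minimal sketch (stdlib Python; the z value of 1.8 is an invented illustration, not a number from the book): with the same observed test statistic, the one-sided p-value is half the two-sided one, so a weak effect can clear the 0.05 bar one-sided while failing it two-sided.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_p_values(z: float) -> tuple[float, float]:
    """Return (one_sided, two_sided) p-values for an observed z statistic,
    where the one-sided alternative is 'treatment improves on control'."""
    one_sided = 1.0 - normal_cdf(z)
    two_sided = 2.0 * (1.0 - normal_cdf(abs(z)))
    return one_sided, two_sided

# A weak effect: z = 1.8 is significant one-sided but not two-sided.
one, two = z_test_p_values(1.8)
```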
Given all that, I’m going to withhold judgment. MacKay’s argument on the coin flip example is convincing on the surface. But given his history from the prior pages of understating the counterarguments, I’m not going to give it credence until I find a better statistician than I am to give me the response, if any, from a “sampling theory” perspective.
Uh, strike the “how would the math change?” question—I just read the relevant portion of Jaynes’s paper, which gives a plausible answer to that. Still, I deny that an actual practicing frequentist would follow his logic and treat n as the random variable.
(ALSO: another dose of unreality in the scenario: what experimenter who decided to play it like that would ever reveal the quirky methodology?)
I have to say, the reason the example is convincing is because of its artificiality. I don’t know many old-school frequentists (though I suppose I’m a frequentist myself, at least so far as I’m still really nervous about the whole priors business—but not quite so hard as all that), but I doubt that, presented with a stark case like the one above, they’d say the results would come out differently. For one thing, how would the math change?
But the case would never come up—that’s the thing. It’s empty counterfactual analysis. Nobody who is following a stopping rule as ridiculous as the one offered would be able to otherwise conduct the research properly. I mean, seriously. I think Benquo nailed it: the second researcher’s stopping rule ought to rather severely change our subjective probability in his/her having used a random sample, or for that matter not committed any number of other research sins, perhaps unconsciously. And that in turn should make us less confident about the results.
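For what it’s worth, the arithmetic behind the scenario is easy to reproduce. A toy sketch (assuming the stock “9 heads and 3 tails in 12 flips” setup from this literature; function names are mine): the identical data yield different one-sided p-values depending on whether the experimenter fixed the number of flips in advance or flipped until the third tail.

```python
from fractions import Fraction
from math import comb

def p_fixed_n(heads: int, n: int) -> Fraction:
    """One-sided p-value under 'flip exactly n times': P(at least `heads` heads)."""
    return Fraction(sum(comb(n, k) for k in range(heads, n + 1)), 2 ** n)

def p_fixed_tails(tails: int, n: int) -> Fraction:
    """One-sided p-value under 'flip until the tails-th tail': P(at least n flips
    needed), i.e. at most tails-1 tails in the first n-1 flips."""
    return Fraction(sum(comb(n - 1, k) for k in range(tails)), 2 ** (n - 1))

# Same data -- 9 heads, 3 tails in 12 flips -- two stopping rules:
p1 = p_fixed_n(9, 12)      # ~0.073: not significant at 0.05
p2 = p_fixed_tails(3, 12)  # ~0.033: significant at 0.05
```

So the significance verdict flips with the stopping rule, which is exactly the intuition the example trades on; my objection above is about whether any real researcher would ever land in this situation.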
Haha, very nice CGD. Shows how much those philosophers of language know about golf. :-)
Although… hmm… interesting. I think that gives us a way to think about another probability 1 statement: statements that occupy the entire logical space. Example: “either there are probability 1 statements, or there are not probability 1 statements.” That statement seems to be true with probability 1...
Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero. Probabilities in continuous space are measured on intervals. Basic calculus...
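To spell that out with a sketch (stdlib Python, standard normal distribution chosen arbitrarily): the probability attached to an interval comes from the CDF, and as the interval around a point shrinks, the probability goes to zero.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_interval(a: float, b: float) -> float:
    """P(a < X <= b) for a standard normal X."""
    return normal_cdf(b) - normal_cdf(a)

# The 'probability' of the single point x = 0.5 is the limit of
# ever-shrinking intervals around it -- and that limit is zero:
widths = [10 ** -k for k in range(1, 7)]
probs = [prob_interval(0.5 - w / 2, 0.5 + w / 2) for w in widths]
```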
Poke: let’s attack the problem a different way. You seem to want to cast doubt on the difference along the dimension of certainty between induction and deduction. (“the difference you cite between deductive and inductive arguments (that the former is certain and the latter not), is the conclusion of the problem of induction; you can’t use it to argue for the problem of induction”)
Either deduction and induction are different along the dimension of certainty, or they’re not. So there are four possibilities: induction certain, deduction certain (IC, DC); induction not certain, deduction not certain (InC, DnC); IC, DnC; and InC, DC.
Surely, you don’t agree that induction gives us certain knowledge. The “imagination-based” story: the fact that the coin came up heads the last three million times gives us a very high probability for the proposition that the coin is loaded, but not certainty. But you’ve rejected the “imagination-based” story. I’m fine with that. Because there are real stories. Countless real stories. Every time one scientist repeats another scientist’s experiment and gets a different result, it’s a demonstration of the fact that inductive knowledge isn’t certain: the first scientist validly drew a conclusion from induction as a result of his/her experiments (do you disagree with that??), and the second scientist showed that the conclusion was wrong or at least incomplete. Ergo, induction doesn’t give us certain knowledge.
That eliminates two possibilities, leaving us with InC, DnC and InC, DC. The following is a deductive argument. “1. A. 2. A-->B. 3. B.” Assume 1 and 2 are true. Do you think we thereby have certain knowledge that B? If so, you seem to be committed to DC, and thereby to a difference between induction and deduction on the domain of certainty.
(Heavens… the things I do rather than sleep.)
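The modus ponens half of that can even be checked mechanically. A brute-force truth table (a toy sketch, obviously) confirms there is no assignment of truth values where A and A-->B both hold and B fails:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: A --> B is false only when A is true and B is false."""
    return (not a) or b

# Enumerate every truth assignment to (A, B) and collect any world where
# both premises hold but the conclusion B does not.
counterexamples = [
    (a, b)
    for a, b in product([False, True], repeat=2)
    if a and implies(a, b) and not b
]
```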
j.edwards, I think your last sentence convinced me to withdraw the objection—I can’t very well assign a probability of 1 to ~”the green is either” can I? Good point, thanks.
hmm… I feel even more confident about the existence of probability-zero statements than I feel about the existence of probability-1 statements. Because not only do we have logical contradictions, but we also have incoherent statements (like Husserl’s “the green is either”).
Can one form subjective probabilities over the truth of “the green is either” at all? I don’t think so, but I remember a some-months-ago suggestion of Robin’s about “impossible possible worlds,” which might also imply the ability to form probability estimates over incoherencies. (Why not incoherent worlds? One might ask.) So the idea is at least potentially on the table.
And then it seems obvious that we will forever, across all space and time, have no evidence to support an incoherent proposition. That’s as good an approximation of infinite lack of evidence as I can come up with. P(“the green is either”)=0?
Oh, on the ratios of probabilities thing, whether we call them probabilities or schmobabilities, it still seems like they can equal 1. But if we accept that there are schmobabilities that equal 1, and that we are warranted in giving them the same level of confidence that we’d give probabilities of 1, isn’t that good enough?
Put a different way, P(A|A)=1 (or perhaps I should call it S(A|A)=1) is just equivalent to yet another one of those logical tautologies, A-->A. Which again seems pretty hard to live without. (I’d like to see someone prove NCC to me without binding me to accept NCC!)
(Waking up.) Sure, if I thought I had evidence (how) of P&~P, that would be pretty good reason to believe a paraconsistent logic was true (except what does true mean in this context? not just about logics, but about paraconsistent ones!!)
But if that ever happened, if we went there, the rules for being rational would be so radically changed that there wouldn’t necessarily be good reason to believe that one has to update one’s probabilities in that way. (Perhaps one could say the probability of the law of non-contradiction being true is both 1 and 0? Who knows?)
I think the problem with taking a high probability that logic is paraconsistent is that all other beliefs stop working. I don’t know how to think in a paraconsistent logic. And I doubt anyone else does either. (Can you get Bayes Rule out of a paraconsistent logic? I doubt it. I mean, maybe… who knows?)
Wait a second, conditional probabilities aren’t probabilities? Huhhh? Isn’t Bayesianism all conditional probabilities?
Hah, I’ll let Descartes go (or condition him on a workable concept of existence—but that’s more of a spitball than the hardball I was going for).
But in answer to your non-contradiction question… I think I’d be epistemically entitled to just sneer and walk away. For one reason, again, if we’re in any conventional (i.e. not paraconsistent) logic, admitting any contradiction entails that I can prove any proposition to be true. And, giggle giggle, that includes the proposition “the law of non-contradiction is true.” (Isn’t logic a beautiful thing?) So if this mathematician thinks s/he can argue me into accepting the negation of the law of non-contradiction, and takes the further step of asserting any statement whatsoever to which it purportedly applies (i.e. some P, for which P&~P, such as the whiteness of snow), then lo and behold, I get the law of non-contradiction right back.
I suppose if we wanted to split hairs, we could say that one can deny the law of non-contradiction without further asserting an actual statement to which that denial applies—i.e. ~(~(P&~P)) doesn’t have to entail the existence of a statement P which is both true and false ((∃p)Np, where N stands for “is true and not true?” Abusing notation? Never!) But then what would be the point of denying the law?
(That being said, what I’d actually do is stop long enough to listen to the argument—but I don’t think that commits me to changing my zero probability. I’d listen to the argument solely in order to refute it.)
As for the very tiny credence in the negation of the law of non-contradiction (let’s just call it NNC), I wonder what the point would be, if it wouldn’t have any effect on any reasoning process EXCEPT that it would create weird glitches that you’d have to discard? It’s as if you deliberately loosened one of the spark plugs in your engine.
Also (and sorry for the rapid-fire commenting), do you accept that we can have conditional probabilities of one? For example, P(A|A)=1? And, for that matter, P(B|(A-->B, A))=1? If so, I believe I can force you to accept at least probabilities of 1 in sound deductive arguments. And perhaps (I’ll have to think about it some more) in the logical laws that get you to the sound deductive arguments. I’m just trying to get the camel’s nose in the tent here...
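To make the P(A|A)=1 point concrete: in any finite possible-worlds model (the particular prior below is arbitrary and invented for illustration), conditionalizing on A, or on A together with A-->B, is forced to 1 no matter what the prior says.

```python
from fractions import Fraction
from itertools import product

# Worlds are truth assignments to (A, B); assign an arbitrary strictly
# positive prior over them -- the conclusions don't depend on the numbers.
worlds = list(product([False, True], repeat=2))
prior = dict(zip(worlds, [Fraction(1, 10), Fraction(2, 10),
                          Fraction(3, 10), Fraction(4, 10)]))

def prob(event) -> Fraction:
    return sum((prior[w] for w in worlds if event(w)), Fraction(0))

def cond(event, given) -> Fraction:
    """Conditional probability P(event | given) by the ratio definition."""
    return prob(lambda w: event(w) and given(w)) / prob(given)

A = lambda w: w[0]
B = lambda w: w[1]
A_implies_B = lambda w: (not w[0]) or w[1]

p_a_given_a = cond(A, A)                                   # P(A | A)
p_b_given_mp = cond(B, lambda w: A(w) and A_implies_B(w))  # P(B | A, A-->B)
```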
I confess, the money pump thing sometimes strikes me as … well… contrived. Yes, in theory, if one’s preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes in highly unrealistic situations. Big deal.
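To be fair to the money pump, here is what the textbook scenario looks like in toy form (goods, preferences, and fee all invented for illustration): an agent with cyclic preferences A > B > C > A pays a small fee for each trade up, and after any full cycle holds exactly what it started with, minus the fees.

```python
from fractions import Fraction

# (x, y) means the agent strictly prefers x to y -- note the cycle.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}
FEE = Fraction(1, 100)  # hypothetical per-trade fee

def pump(holding: str, rounds: int) -> Fraction:
    """Repeatedly offer the item the agent prefers to its current holding,
    collecting the fee each time; return the total extracted."""
    offer_for = {"B": "A", "C": "B", "A": "C"}  # the preferred swap at each step
    paid = Fraction(0)
    for _ in range(rounds):
        offer = offer_for[holding]
        assert (offer, holding) in prefers  # agent accepts: strictly prefers offer
        holding = offer
        paid += FEE
    return paid

# After 300 trades the agent is back where it began, 3 units poorer.
loss = pump("A", 300)
```

Which is exactly my complaint: the pump works only if someone actually shows up offering the cycle, and nobody does.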