Looking for mental information in individual neuronal firing patterns is looking at the wrong level of scale and at the wrong kind of physical manifestation. As in other statistical dynamical regularities, there are a vast number of microstates (i.e., network activity patterns) that can constitute the same global attractor, and a vast number of trajectories of microstate-to-microstate changes that will tend to converge to a common attractor. But it is the final quasi-regular network-level dynamic, like a melody played by a million-instrument orchestra, that is the medium of mental information. - Terrence W. Deacon, Incomplete Nature: How Mind Emerged from Matter, pp. 516–517.
common_law
Elaborateness and detail are characteristics neither necessary nor sufficient for rigor. The first two describe the theory; rigor describes the argument for the theory. To say a theory is rigorous is to say neither more nor less than that it is well argued (with particular emphasis on the argument’s tightness).
Whether Freud and Marx argued well may be hard to agree on when we examine their arguments. [Agreement or disagreement on conclusions has a way of grossly interfering with evaluation of argument, with the added complication that evaluation must be relative to a historical state of play.] And we ignore what could be called holes in Einstein and Darwin because the theories are the consensus—holes like the absence of the Mendelian mechanism in Darwin or the (still-unresolved, at least philosophically) problem of infinities in general relativity. [I’m sure that’s controversial, however.]
But I would suggest that theories that have sustained the agreement of even a large minority of serious intellectuals and academics for more than a century should be presumed rigorous. Rigor is what establishes lasting intellectual success. It is what primarily defines whether a work is “impressive” (to use Robin Hanson’s as-always useful term).
On the other hand, I agree that third-rate minds use formulaic methods to generate a huge number of publications, and by their nature, such works will never be rigorous (or lastingly impressive).
You’ve drawn a significant distinction, but I don’t think degree of rigor defines it. I’m not sufficiently familiar with many of these thinkers to assess their rigor, but I am familiar with several, the ones who would often be deemed most important: Einstein and Darwin on the side you describe as rigorous; Freud and Marx on the side you describe as less rigorous. I can’t agree that Freud and Marx are less rigorous. Marx makes an argument for his theory of capitalism in three tightly reasoned volumes of Capital, none of the arguments formulaic. Freud develops the basics of his psychology in “The Interpretation of Dreams,” a rigorous study of numerous dreams, his own and his patients’, extracting principles of dream interpretation.
Let me offer an alternative hypothesis. The distinction doesn’t regard rigor but rather elegance. Einstein and Darwin developed elegant explanations; Freud and Marx developed systems of insights, supported by argument and evidence, but less reducible to a central, crisp insight. I haven’t considered a term for the latter, but for the moment, I’ll call them systematic theories.
An elegant theory must be accepted as a whole or not at all. A systematic theory contains numerous insights that despite their integration can often be separated from one another, one idea accepted and another rejected.
With that distinction, it can readily be explained why systematic theorists produce a greater total bulk of work. It takes more words, and more working through, to explain a system than an elegant principle.
Aesthetic ability as such hasn’t been extracted as a cognitive ability factor. My guess would be that it’s mainly explained by g and the temperamental factor of openness to experience. (I don’t know what the empirical data is on this subject, but I think some immersion in the factor-analytic data would prove rewarding.)
[Added.] On aesthetic sense: the late R.B. Cattell (psychologist) devised an IQ test based on which jokes were preferred.
[Added.2] I’m wondering if you’re not misinterpreting your personal experience. You say your IQ is only LW-average. You also say you have a nonverbal learning disability; but that would render any score you obtained on an IQ test a substantial underestimate. I’m inclined to think what you’re calling aesthetic ability (in your case, at least) is just intelligence beyond what the uninterpreted scores say.
What’s your basis for concluding that verbal-reasoning ability is an important component of mathematical ability—particularly important in more theoretical areas of math?
The research that I recall showed little influence of verbal reasoning on high-level math ability, verbal ability certainly being correlated with math ability but the correlation almost entirely accounted for by g (or R). There’s some evidence that spatio-visual ability, rather unimportant for mathematical literacy (as measured by SAT-M, GRE-Q), becomes significant at higher levels of achievement. But from what I’ve seen, the factor that emerged most distinctive for excellent mathematicians (distinguishing them from other fields also demanding high g) isn’t g itself, but rather cognitive speed. Talented mathematicians are mentally quick.
You should question your unstated but fundamental premise: one should avoid arguments with “hostile arguers.”
A person who argues to convince rather than to understand harms himself, but from his interlocutor’s standpoint, dealing with his arguments can be just as challenging and enlightening as arguing with someone more “intellectually honest.”
Whether an argument is worthwhile depends primarily on the competence of the arguments presented, which isn’t strongly related to the sincerity of the arguer.
Actually, I think you’re wrong in thinking that LW doctrine doesn’t dictate heightened scrutiny of the deployment of self-deception. At the same time, I think you’re wrong to think false beliefs can seldom be quarantined, compartmentalization being a widely employed defense mechanism. (Cf., any liberal theist.)
Everyone feels a tug toward the pure truth, away from pure instrumental rationalism. Your mistake (and LW’s), I think, is to incorporate truth into instrumental rationality (without really having a cogent rationale, given the reality of compartmentalization). The real defect in instrumental rationalism is that no person of integrity can take it to heart. “Values” are of two kinds: biological givens and acquired tendencies that restrict the operation of those givens (instinct and restraint). The drive for instrumental rationality is a biological given; epistemic rationality is a restraint intellectuals apply to their instrumental rationality. It is ethical in character, whereas instrumental rationality is not; and it is a seductive confusion to moralize it.
For intellectuals, the businessman’s “winner” ethos—the evaluative subordination of epistemic rationality to instrumentality—is an invitation to functional psychopathy.
I don’t see how your argument gains from attributing the hard-work bias to stories. (For one thing, you still have to explain why stories express this bias—unless you think it’s culturally adventitious.)
The bias seems to me to be a particular case of the fair-world bias and perhaps also the “more is better” heuristic. It seems like you are positing a new bias unnecessarily. (That doesn’t detract from the value of describing this particular variant.)
Philosophically, I want to know how you calculate the rational degree of belief in every proposition.
If you automatically assign the axioms an actually unobtainable certainty, you don’t get the rational degree of belief in every proposition, as the set of “propositions” includes those not conditioned on the axioms.
What about the problem that if you admit that logical propositions are only probable, you must admit that the foundations of decision theory and Bayesian inference are only probable (and treat them accordingly)? Doesn’t this leave you unable to complete a deduction because of a vicious regress?
A critical mistake in the lead analysis is the false assumption that where there is a causal relation between two variables, they will be correlated. This ignores that causes often cancel out. (Of course, not perfectly, but enough to make raw correlation a generally poor guide to causality.)
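A toy simulation (my own illustration, not anything from the thread) makes the canceling-causes point concrete: below, X causes Y through two nearly opposite paths, so the raw correlation is tiny even though the causal link is real. The path coefficients are arbitrary choices for the demonstration.

```python
import random

random.seed(0)
n = 50_000
xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    m = -0.9 * x                    # mediator that nearly cancels the direct path
    y = x + m + random.gauss(0, 1)  # net causal effect of x on y is only 0.1
    xs.append(x)
    ys.append(y)

# Pearson correlation, computed by hand to keep the sketch dependency-free
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
sx = (sum((a - mx) ** 2 for a in xs) / n) ** 0.5
sy = (sum((b - my) ** 2 for b in ys) / n) ** 0.5
r = cov / (sx * sy)
print(f"correlation: {r:.3f}")  # small, though x genuinely causes y
```

With exact cancellation (m = -x) the correlation would be indistinguishable from zero; the 0.9 coefficient models the "not perfectly" caveat.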
I think you have a fundamentally mistaken epistemology, gwern: you don’t see that correlations only support causality when they are predicted by a causal theory.
“how else could this correlation happen if there’s no causal connection between A & B‽”
The main way to correct for this bias toward seeing causation where there is only correlation follows from this introspection: be more imaginative about how it could happen (other than by direct causation).
[The causation bias (does it have a name?) seems to express the availability bias. So, the corrective is to increase the availability of the other possibilities.]
You link ego depletion (willpower depletion and decision fatigue are alternative terms) to working memory, based on neuroscientists’ refusal to reify the former. But you neglect that neuroscientists would equally deny that working memory is “a thing.”
The psychological findings are robust. That the first-proposed physiological explanation is dubious doesn’t disqualify the phenomena.
The opponent-process theory mentioned in the Wikipedia article is promising.
Who says willpower limitations are a function of limited capacity? This is something of an engineer’s favored explanation. A better explanation is probably rooted in evolutionary psychology rather than inherent capacity limitations. We evolved with opponent processes governing control and gratification.
Attributing willpower depletion to “distraction” isn’t an explanation. Distraction probably has causal relevance, but it isn’t a magic wand to wave away psychological findings.
The OP is amateurish.
I intended nothing more than to solve the literal interpretation. This isn’t my beaten path. I don’t intend to do more on the subject besides speculate about why an essentially trivial problem of “literal interpretation” has resisted articulation.
I think you’ll find the argument is clear without any formalization if you recognize that it is NOT the usual claim that confidence goes down. Rather, it’s that the confidence falls below its contrary.
In philH’s terms, you’re engaging in pattern matching rather than taking the argument on its own terms.
What you’re ignoring is the comparison probability. See philH’s comment.
It’s accurate. But it’s crucial, of course, to see why P(C) comes to dominate P(B), and I think this is what most commenters have missed. (But maybe I’m wrong about that; maybe it’s because of pattern matching.) As the threat increases, P(C) comes to dominate P(B) because the threat, when large enough, is evidence against the threatened event occurring.
That is, [it is assumed that] the only plausible reason to state a meganumber-class high utility is to beat someone else’s number.
It’s the only reason that doesn’t cancel out, because it’s the only one about which we have any knowledge. The higher the number, the more likely it is that the mugger is playing the “pick the highest number” game. You can imagine scenarios in which picking the highest number has some unknown significance, but they cancel out, in the same way that Pascal’s God is canceled by the possibility of contrary gods.
Also, why should what the mugger says have anything to do with how big a threat the conversation is?
Same question (formally) as why should failure to confirm a theory be evidence against it.
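The formal point is just Bayes’ theorem: if observing E would raise P(H), then failing to observe E must lower it. A minimal check with made-up numbers (the priors and likelihoods below are purely illustrative):

```python
# Hypothetical numbers: H predicts evidence E strongly, not-H only weakly.
p_h = 0.5            # prior on hypothesis H
p_e_given_h = 0.8    # P(E | H)
p_e_given_not_h = 0.3  # P(E | not-H)

# Total probability of seeing E, then Bayes' rule both ways
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
post_given_e = p_e_given_h * p_h / p_e
post_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

print(post_given_e > p_h)      # True: observing E confirms H
print(post_given_not_e < p_h)  # True: failure to observe E disconfirms H
```

The two inequalities can’t come apart: the posterior averaged over E and not-E must equal the prior, so if one branch raises it, the other lowers it.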
What you present is the basic fallacy of Pascal’s Mugging: treating the probabilities of B and of C as independent of the fact that a threat of given magnitude is made.
Your formalism, in other words, doesn’t model the argument. The basic point is that Pascal’s Mugging can be solved by the same logic that succeeds against Pascal’s Wager. Pascal concluded that believing in god A was instrumentally rational only by ignoring that there might, with equal consequences, be a god B instead who hated people who worshiped god A.
Pascal’s Mugging ignores that the threatened calamity might be more likely if you accede to the mugger than if you don’t. The inflection point is where the mugger’s making the claim becomes evidence against it rather than for it.
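A toy model shows the kind of inflection point meant here (the quadratic-decay credibility function and its scale are my assumptions, not anything argued in the thread): if the mugger’s credibility falls off faster than linearly in the stated stake, expected harm first rises with the threat and then falls.

```python
def p_real(n, scale=1000.0, c=1e-6):
    # Assumed credibility curve: the probability the threat is real decays
    # quadratically once the stated stake n is far beyond `scale`, modeling
    # "a big enough number marks the pick-the-highest-number game."
    return c / (1.0 + (n / scale) ** 2)

def expected_harm(n):
    # Expected harm of refusing = stake times probability the threat is real
    return n * p_real(n)

stakes = [10 ** k for k in range(7)]          # threats from 1 up to 1,000,000
evs = [expected_harm(n) for n in stakes]
peak = stakes[evs.index(max(evs))]
print(peak)  # 1000: below this, bigger threats raise expected harm; above it, they lower it
```

Any credibility curve decaying faster than 1/n gives the same qualitative picture; the inflection point just moves with the assumed scale.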
No commenters have engaged the argument!
But it’s also obvious that Thomas is no legal genius. (Unlike, say, Scalia, whom I actually abhor more, probably for that reason.) Why no black legal geniuses, theoretical physicists, abstract mathematicians, or analytic philosophers? This is more telling than fishing about in the superior range, which, even on the assumptions, is only as rare as falling in the general population’s very-superior range.