Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields
(This post is an expanded version of a LW comment I left a while ago. I have found myself referring to it so much in the meantime that I think it’s worth reworking into a proper post. Some related posts are “The Correct Contrarian Cluster” and “What is Bunk?”)
When looking for information about some area outside of one’s expertise, it is usually a good idea to first ask what academic scholarship has to say on the subject. In many areas, there is no need to look elsewhere for answers: respectable academic authors are the richest and most reliable source of information, and people claiming things completely outside the academic mainstream are almost certain to be crackpots.
The trouble is, this is not always the case. Even those whose view of modern academia is much rosier than mine should agree that it would be astonishing if there didn't exist at least some areas where the academic mainstream is detached from reality on important issues, while much more accurate views are scorned as kooky (or would be if they were heard at all). Therefore, depending on the area, the fact that a view is way out of the academic mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.
I will discuss some heuristics that, in my experience, provide a realistic first estimate of how sound the academic mainstream in a given field is likely to be, and how justified one would be to dismiss contrarians out of hand. These conclusions have come from my own observations of research literature in various fields and some personal experience with the way modern academia operates, and I would be interested in reading others’ opinions.
Low-hanging fruit heuristic
As the first heuristic, we should ask if there is a lot of low-hanging fruit available in the given area, in the sense of research goals that are both interesting and doable. If yes, this means that there are clear paths to quality work open for reasonably smart people with an adequate level of knowledge and resources, which makes it unnecessary to invent clever-looking nonsense instead. In this situation, smart and capable people can just state a sound and honest plan of work on their grant applications and proceed with it.
In contrast, if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that will have all the superficial trappings of science and sound scholarship, but will in fact be increasingly pointless and detached from reality.
Arguably, some areas of theoretical physics have reached this state, if we are to trust the critics like Lee Smolin. I am not a physicist, and I cannot judge directly if Smolin and the other similar critics are right, but some powerful evidence for this came several years ago in the form of the Bogdanoff affair, which demonstrated that highly credentialed physicists in some areas can find it difficult, perhaps even impossible, to distinguish sound work from a well-contrived nonsensical imitation. [1]
Somewhat surprisingly, another example is presented by some subfields of computer science. With all the new computer gadgets everywhere, one would think that no other field could be further from a stale dead end. In some of its subfields this is definitely true, but in others, much of what is studied is based on decades-old major breakthroughs, and the known viable directions from there have long since been explored up to the point where they hit against some fundamentally intractable problem. (Or alternatively, further progress is a matter of hands-on engineering practice that doesn't lend itself to the way academia operates.) This has led to a situation where a lot of the published CS research is increasingly distant from reality, because to keep up the illusion of progress, it must pretend to solve problems that are basically known to be impossible. [2]
Ideological/venal interest heuristic
Bad as they might be, the problems that occur when clear research directions are lacking pale in comparison with what happens when things under discussion are ideologically charged or a matter in which powerful interest groups have a stake. As Hobbes remarked, people agree about theorems of geometry not because their proofs are solid, but because “men care not in that subject what be truth, as a thing that crosses no man’s ambition, profit, or lust.” [3]
One example is the cluster of research areas encompassing intelligence research, sociobiology, and behavioral genetics, which touches on a lot of highly ideologically charged questions. These pass the low-hanging fruit heuristic easily: the existing literature is full of proposals for interesting studies waiting to be done. Yet, because of their striking ideological implications, these areas are full of work clearly aimed at advancing the authors’ non-scientific agenda, and even after a lot of reading one is left in confusion over whom to believe, if anyone. It doesn’t even matter whose side one supports in these controversies: whichever side is right (if any one is), it’s simply impossible that there isn’t a whole lot of nonsense published in prestigious academic venues and under august academic titles.
Yet another academic area that suffers from the same problems is the history of the modern era. On many significant events from the last two centuries, there is a great deal of documentary evidence lying around still waiting to be assessed properly, so there is certainly no lack of low-hanging fruit for a smart and diligent historian. Yet due to the clear ideological implications of many historical topics, ideological nonsense cleverly masquerading as scholarship abounds. I don't think anything resembling an accurate world history of the last two centuries could be written without making a great many contrarian claims. [4] In contrast, on topics that don't arouse ideological passions, modern histories are often amazingly well researched and free of speculation and distortion. (In particular, if you are from a small nation that has never really been a player in world history, your local historians are likely to be full of parochial bias motivated by the local political quarrels and grievances, but you may be able to find very accurate information on your local history in the works of foreign historians from the elite academia.)
On the whole, it seems to me that failing the ideological interest test suggests a much worse situation than failing the low-hanging fruit test. The areas affected by just the latter are still fundamentally sound, and tend to produce work whose contribution is way overblown, but which is still built on a sound basis and internally coherent. Even if outright nonsense is produced, it’s still clearly distinguishable with some effort and usually restricted to less prestigious authors. Areas affected by ideological biases, however, tend to drift much further into outright delusion, possibly lacking a sound core body of scholarship altogether.
[Paragraphs below added in response to comments:]
What about the problem of purely venal influences, i.e. the cases where researchers are under the patronage of parties that have stakes in the results of their research? On the whole, the modern Western academic system is very good at discovering and stamping out clear and obvious corruption and fraud. It’s clearly not possible for researchers to openly sell their services to the highest bidder; even if there are no formal sanctions, their reputation would be ruined. However, venal influences are nevertheless far from nonexistent, and a fascinating question is under what exact conditions researchers are likely to fall under them and get away with it.
Sometimes venal influences are masked by scams such as setting up phony front organizations for funding, but even that tends to be discovered eventually and tarnish the reputations of the researchers involved. What seems to be the real problem is when the beneficiaries of biased research enjoy such status in the eyes of the public and such legal and customary position in society that they don’t even need to hide anything when establishing a perverse symbiosis that results in biased research. Such relationships, while fundamentally representing venal interest, are in fact often boasted about as beneficial and productive cooperation. Pharmaceutical research is an often cited example, but I think the phenomenon is in fact far more widespread, and reaches the height of perverse perfection in those research communities whose structure effectively blends into various government agencies.
The really bad cases: failing both tests
So far, I’ve discussed examples where one of the mentioned heuristics returns a negative answer, but not the other. What happens when a field fails both of them, having no clear research directions and at the same time being highly relevant to ideologues and interest groups? Unsurprisingly, it tends to be really bad.
The clearest example of such a field is probably economics, particularly macroeconomics. (Microeconomics covers an extremely broad range of issues deeply intertwined with many other fields, and its soundness, in my opinion, varies greatly depending on the subject, so I’ll avoid a lengthy digression into it.) Macroeconomists lack any clearly sound and fruitful approach to the problems they wish to study, and any conclusion they might draw will have immediately obvious ideological implications, often expressible in stark “who-whom?” terms.
And indeed, even a casual inspection of the standards in this field shows clear symptoms of cargo-cult science: weaving complex and abstruse theories that can be made to predict everything and nothing, manipulating essentially meaningless numbers as if they were objectively measurable properties of the real world [5], experts with the most prestigious credentials dismissing each other as crackpots (in more or less diplomatic terms) when their favored ideologies clash, etc., etc. Fringe contrarians in this area (most notably extreme Austrians) typically have silly enough ideas of their own, but their criticism of the academic mainstream is nevertheless often spot-on, in my opinion.
Other examples
So, what are some other interesting case studies for these heuristics?
An example of great interest is climate science. Clearly, the ideological interest heuristic raises a big red flag here, and indeed, there is little doubt that a lot of the research coming out in recent years that supposedly links “climate change” with all kinds of bad things is just fashionable nonsense [6]. (Another sanity check it fails is that only a tiny proportion of these authors ever hypothesize that the predicted/observed climate change might actually improve something, as if there existed some law of physics prohibiting it.) Thus, I’d say that contrarians on this issue should definitely not be dismissed out of hand; the really hard question is how much sound insight (if any) remains after one eliminates all the nonsense that’s infiltrated the mainstream. When it comes to the low-hanging fruit heuristic, I find the situation less clear. How difficult is it to achieve progress in accurately reconstructing long-term climate trends and forecasting the influences of increasing greenhouse gases? Is it hard enough that we’d expect, even absent an ideological motivation, that people would try to substitute cleverly contrived bunk for unreachable sound insight? My conclusion is that I’ll have to read much more on the technical background of these subjects before I can form any reliable opinion on these questions.
Another example of practical interest is nutrition. Here ideological influences aren't very strong (though not altogether absent either). However, the low-hanging fruit heuristic raises a huge red flag: it's almost impossible to study these things in a sound way, controlling for all the incredibly complex and counterintuitive confounding variables. At the same time, it's easy to produce endless amounts of plausible-looking junk studies. Thus, I'd expect that the mainstream research in this area is on average pure nonsense, with a few possible gems of solid insight hopelessly buried under it, and even when it comes to very extreme contrarians, I wouldn't be tremendously surprised to see any one of them proven right in the end. My conclusion is similar when it comes to exercise and numerous other lifestyle issues.
Exceptions
Finally, what are the evident exceptions to these trends?
I can think of some exceptions to the low-hanging fruit heuristic. One is historical linguistics, whose standard well-substantiated methods have had great success in identifying the structure of the world's language family trees, but give no answer at all to the fascinating question of how far back into the past the nodes of these trees reach (except of course when we have written evidence). Nobody has any good idea how to make progress there, and the questions are tantalizing. Now, there are all sorts of plausible-looking but fundamentally unsound methods that purport to answer these questions, and papers using them occasionally get published in prestigious non-linguistic journals, but the actual historical linguists firmly dismiss them as unsound, even though they have no answers of their own to offer instead. [7] It's an example of a commendable stand against seductive nonsense.
It’s much harder to think of examples where the ideological interest heuristic fails. What field can one point out where mainstream scholarship is reliably sound and objective despite its topic being ideologically charged? Honestly, I can’t think of one.
What about the other direction—fields that pass both heuristics but are nevertheless nonsense? I can think of e.g. artsy areas that don't make much of a pretense to objectivity in the first place, but otherwise, it seems to me that absent ideological and venal perverse incentives, and given clear paths to progress that don't require extraordinary genius, the modern academic system is great at producing solid and reliable insight. The trouble is that these conditions often don't hold in practice.
I'd be curious to see additional examples that either confirm or disprove the heuristics I've proposed.
Footnotes
[1] Commenter gwern has argued that the Bogdanoff affair is not a good example, claiming that the brothers were decisively shown to be frauds after they came under intense public scrutiny. However, even if this is true, the fact still remains that they initially managed to publish their work in reputable peer-reviewed venues and obtain doctorates at a reputable (though not top-ranking) university, which strongly suggests that there is much more work in the field that is equally bad but doesn't elicit equal public interest and thus never gets really scrutinized. Moreover, from my own reading about the affair, it was clear that in its initial phases several credentialed physicists were unable to make a clear judgment about their work. On the whole, I don't think the affair can be dismissed as an insignificant accident.
[2] Moldbug’s “What’s wrong with CS research” is a witty and essentially accurate overview of this situation. He mostly limits himself to the discussion of programming language research, but a similar scenario can be seen in some other related fields too.
[3] Thomas Hobbes, Leviathan, Chapter XI.
[4] I have the impression that LW readers would mostly not be interested in a detailed discussion of the topics where I think one should read contrarian history, so I’m skipping it. In case I’m wrong, please feel free to open the issue in the comments.
[5] Oskar Morgenstern’s On the Accuracy of Economic Observations is a tour de force on the subject, demonstrating the essential meaninglessness of many sorts of numbers that economists use routinely. (Many thanks to the commenter realitygrill for directing me to this amazing book.) Morgenstern is of course far too prestigious a name to dismiss as a crackpot, so economists appear to have chosen to simply ignore the questions he raised, and his book has been languishing in obscurity and out of print for decades. It is available for download though (warning: ~31MB PDF).
[6] Some amusing lists of examples have been posted by the Heritage Foundation and the Number Watch (not intended to endorse the rest of the stuff on these websites). Admittedly, a lot of the stuff listed there is not real published research, but rather just people's media statements. Still, there's no shortage of similar things in published research either, as a search of e.g. Google Scholar will show.
[7] Here is, for example, the linguist Bill Poser dismissing one such paper published in Nature a few years ago.
If you are going to suggest that academic climate research is not up to scratch, you need to do more than post links to pages that link to non-academic articles. Saying "you can find lots on google scholar" is not the same as actually pointing to the alleged sub-standard research.
For a long time I too was somewhat skeptical about global warming. I recognized the risk that researchers would exaggerate the problem in order to obtain more funding.
What I chose to do to resolve the matter was to deep dive into a few often-raised skeptic arguments using my knowledge of physics as a starting point, and learning whatever I needed to learn along the way (it took a while). The result was that the academic researchers won 6-0 6-0 6-0 in three sets (to use a tennis score analogy). Most striking to me was the dishonesty and lack of substance on the “skeptic” side. There was just no “there” there.
The topics I looked into were: accuracy of the climate temperature record, alleged natural causes explaining the recent heating, the alleged saturation of the atmospheric CO2 infra-red wavelengths, and the claim that the CO2 that is emitted by man is absorbed very quickly.
In retrospect I became aware that my 'skepticism' was fueled in large part by deliberate misinformation campaigns in the grand tradition of tobacco, asbestos, HFCs, DDT etc. The same techniques, and even many of the same PR firms, are involved. As one tobacco executive said, "Our product is doubt".
An article about assessing the soundness of the academic mainstream would benefit from also discussing the ways in which the message from, and even the research done in, academia is corrupted and distorted by commercial interests. Economics is a case in point, but it is a big issue also in drug research and other aspects of medicine.
Another thing I have noticed in looking into various areas of academic research is just how much research in every field I looked at is inconclusive, inconsequential, flawed or subtly biased (look up “desk drawer bias” for example).
Edit: fixed a few typos.
Edit: good article by the way, very well reasoned.
waveman:
I agree that I should have argued and referenced that part better. What I wanted to point out is that there is a whole cottage industry of research purporting to show that climate change is supposedly influencing one thing or another, a very large part of which appears to advance hypotheses so far-fetched and weakly substantiated that they seem like obvious products of the tendency to work this super-fashionable topic into one's research whenever possible, for reasons of both status- and career-advancement.
Even if one accepts that the standard view on climate change has been decisively proven and the issue shown to be a pressing problem, I still don't see how one could escape this conclusion.
Yes.
My daughter works in molecular biology, and she has noted that every paper / grant application is full of hope and promise of a cure to cancer or some other dread disease. Sometimes this hope and promise is significantly exaggerated.
It is very depressing, even in fields where the science is absolutely rock-solid, to read the nonsense that comes from the periphery. Read Deepak Chopra on Quantum Mechanics for example.
You wrote “what I chose to do to resolve the matter was to deep dive into three often-raised skeptic arguments using my knowledge of physics as a starting point” and “deliberate misinformation campaigns in the grand tradition of tobacco [etc.]”.
Less Wrong is not the place for a comprehensive argument about catastrophic AGW, but I’d like to make a general Less-Wrong-ish point about your analysis here. It is perceptive to notice that millions of dollars are spent on a shoddy PR effort by the other side. It is also perceptive to notice that many of the other side’s most popular arguments aren’t technically very strong. It’s even actively helpful to debunk unreasonable popular arguments even if you only do it for those which are popular on the other side. However, remember that it’s sadly common that regardless of their technical merits, big politicized controversies tend to grow big shoddy PR efforts associated with all factions. And even medium-sized controversies tend to attract some loud clueless supporters on both sides. Thus, it’s not a very useful heuristic to consider significant PR spending, or the popularity of flaky arguments, as particularly useful evidence against the underlying factual position.
It may be “too much information [about AGW]” for Less Wrong, but I feel I should support my point in this particular controversy at least a little, so… E.g., look at the behavior of Pachauri himself in the “Glaciergate” glaciers-melting-by-2035 case. I can’t read the guy’s mind, and indeed find some of his behavior quite odd, so for all I know it is not “deliberate.” But accidental or not, it looks rather like an episode in a misinformation campaign in the sorry tradition of big-money innumerate scare-environmentalism. Also, Judith Curry just wrote a blog post which mentions, among other things, the amount of money sloshing around in various AGW-PR-related organizations associated with anti-IPCC positions. For comparison, a rather angry critic I don’t know much about (but one who should, at a minimum, be constrained by British libel law) ties the Glaciergate factoid to grants of $500K and $3M, and Greenpeace USA seems to have an annual budget of around $30M.
I should have said that I tried to find the best arguments I could, and then deep dive into those. More from someone else in the link below. If someone can point me to some actual credible sceptical arguments I would be interested.
http://lesswrong.com/lw/4ba/some_heuristics_for_evaluating_the_soundness_of/3k1e
I certainly agree AGW is a highly politicized issue and there are plenty of people trying to profit from it. Any time money is involved that will be the case. One should not assume that all the money spent on anti-AGW efforts goes through the think tanks mentioned.
The whole Glaciergate thing was indeed a disgrace.
I don’t get too worked up about AGW because I think it is just one of the many things that are likely to sink us.
Also, to comment on this:
That would fall under the “venal” part of considering the ideological/venal factors involved. I agree that I should have cited the example of drug research; the main reason I didn’t do so is that I’m not confident that my impressions about this area are accurate enough.
One fascinating question about the problem of venal influences, about which I might write more in the future, is when and under what exact conditions researchers are likely to fall under them and get away with it, considering that the present system is overall very good at discovering and punishing crude and obvious corruption and fraud. As I wrote in another comment, sometimes such influences are masked by scams such as setting up phony front organizations for funding, but even that tends to be discovered eventually and tarnish the reputations of the researchers involved. What seems to be the worst problem is when the beneficiaries of biased research enjoy such status in the eyes of the public and such legal and customary position in society that they don’t even need to hide anything when establishing a perverse symbiosis that results in biased research.
RW has a three-way chart (tobacco, creationism, climate change) so you can learn to spot this sort of argument:
http://rationalwiki.org/wiki/A_comparative_guide_to_science_denial
Work in progress, please feel free to extend.
Hmm. So if someday I find that some scientists make conclusions that don’t follow and these conclusions are used to make harmful policy decisions, I must not point out that certain scientific problems are unsolved or gather other scientists to write petitions, because that would make me match the RW pattern of “denialist”. Also apparently I must not say that correlation isn’t causation, because that’s “minimizing the relevance of statistical data”.
You failed to read the bit with the smoking gun of us knowing who’s paying for the pseudoscience in all three cases.
If that’s the only bit that actually matters for identifying “denialists”, then you can delete everything else from the article. Or put many other things in, e.g. “denialists often have two eyes and a nose”.
The question is: What else fits that pattern? Are there legitimate scientific movements that your filter catches?
Speaking of which, smoking, asbestos, and pesticides are good examples of the venal interest heuristic where the most respected people on the academic side are pretty damn correct.
Manfred:
I don't think this is an accurate analysis. Venal interests are relevant when they have ways of influencing researchers that won't look like immediately obvious fraud and crude malfeasance, which the modern academic system is indeed very good at stamping out.
If a researcher benefits from affiliation with some individuals or institutions and in turn produces research benefiting these parties, thus forming a suspiciously convenient symbiotic relationship, it will work in practice only if this relationship is somehow obscured. Sometimes it is obscured by channeling funding through neutral-looking third parties and similar swindles, but again, this is difficult to pull off in a way that won’t raise all sorts of red flags in the present system. A far more serious and common problem, in my opinion, is when the relationship is completely in the open—often even boasted about—because the institutions involved have such high status and exalted image that they’re normally perceived as worthy of highest trust and confidence in their objectivity and benevolence.
I mostly agree, but I think there’s a continuous scale here, not a general rule. The situation of pesticide companies and pharmaceutical companies is very similar, and both have used similar tactics to try and corrupt the science around them, but pharmaceutical companies have been much more effective—probably by spending much more money.
I don’t think the amount of money is relevant in this particular comparison. Far more important is the ability of the corrupting special interest to assume the forms and establish the social and legal status enabling it to present itself as a legitimate patron of scholarship, association with which won’t be detrimental to the researchers’ reputation. Money clearly doesn’t hurt in this endeavor, but I think that it’s far from being the most important factor.
Can you spell this out some more, focusing on this example? I’m looking for criteria which can be applied in advance to predict the degree of success of special interest propaganda.
Does the social and legal status and legitimacy of pharmaceuticals, as against pesticides, simply reflect the greater prestige of doctors over farmers?
torekp:
In this case, I think that’s a correct hypothesis. The medical profession—and by extension all the related professions in its orbit, to varying extents—certainly enjoys such a high-status public perception that people will be biased towards interpreting its official claims and acts as coming from benevolent and objective expertise, even when a completely analogous situation in some ordinary industry or profession would be met with suspicion. Thus, it seems eminently plausible that in medical and related research people can let themselves be influenced by much more venal interest than usual, thinly disguised and rationalized as neutral and objective expertise and beneficial cooperation.
In my opinion, however, this is not where the worst problem lies. As long as the beneficiaries of biased research are easy to identify, one at least has a straightforward way to start making sense of the situation. A much worse problem is when the perverse incentives have a more complex and impersonal bureaucratic structure, in which ostensibly there are no private profits and venal interests, merely people working according to strict standards of professional ethics and expertise, but in reality this impeccable bureaucratic facade hides an awful hierarchy of patronage and the output is horrible nonsense with the effective purpose of rationalizing and excusing actions out of touch with reality. In these situations, venal interests effectively blend with ideological ones, and with all their elaborate and impressive bureaucratic facade, they are very difficult to recognize and analyze correctly.
You are right. However it is worth noting the powerful forces that are arrayed against any researcher who threatens powerful economic interests.
Given how much research is funded by the government, it is very possible for those with the right connections to punish those who do not sing the right song.
The story of trans fats is a good case in point, well documented in Gary Taubes's book, Good Calories, Bad Calories.
One marker to watch out for is a kind of selection effect.
In some fields, only ‘true believers’ have any motivation to spend their entire careers studying the subject in the first place, and so the ‘mainstream’ in that field is absolutely nutty.
Case examples include philosophy of religion, New Testament studies, Historical Jesus studies, and Quranic studies. These fields differ from, say, cryptozoology in that the biggest names in the field, and the biggest papers, are published by very smart people in leading journals and look all very normal and impressive, but those entire fields are so incredibly screwed by the selection effect that it's only "radicals" who say things like, "Um, you realize that the 'gospel of Mark' is written in the genre of fiction, right?"
I agree about the historical Jesus studies. At one point, I got intensely interested in this topic and read a dozen or so books about it by various authors (mostly on the skeptical end). My conclusion is that this is possibly the ultimate example of an area where the questions are tantalizingly interesting, but making any reliable conclusions from the available evidence is basically impossible. At the end, as you say, we get a lot of well written and impressively researched books whose content is however just a rationalization for the authors’ opinions held for altogether different reasons.
On the other hand, I’m not sure if you’re expressing support for the radical mythicist position, but if you do, I disagree. As much as Christian apologists tend to stretch the evidence in their favor, it seems to me like radical mythicists are biased in the other direction. (It’s telling that the doyen of contemporary mythicism, G.A. Wells, who certainly has no inclination towards Christian apologetics, has moderated his position significantly in recent years.)
No, I have yet to hear a great case for mythicism, though Richard Carrier may be in the process of writing the first. But I do think that we know almost nothing about Jesus with any confidence. Basically, there was probably some Jewish prophet who was baptized by John the Baptist and killed by the Romans, and that’s about all we know with any confidence.
I’d appreciate it if you’d let me know if you get around to assessing this. My belief (i.e., Jesus probably existed) is currently the same as yours and Vladimir_M’s but I believe muflax finds Carrier persuasive.
Well, it would certainly go too far to give mythicism an overwhelming probability. It may go too far to say that Occam’s Razor unambiguously favors mythicism.
But the second claim, if we agree that Paul had an unusual experience of some kind which changed his behavior, would require only that people in Paul’s time spoke of a nameless Essene Teacher of Righteousness dying on a cross. (And of course other, less likely discoveries would make the case just as well.)
I have to ask, how much do you know of 'Quranic studies'? As far as I know, the New Testament and Quran are structured quite differently, hence research (which I'm not aware of) would be different as well?
Structured differently? Sure, but the fields are extremely similar in that they’re both studying ancient religious texts about which we have very little evidence as to their actual course of development (as is the case with all ancient texts). But I didn’t mean to assume any general similarity between Quranic studies and New Testament studies, anyway. The textual evidence for the Quran is much more recent, obviously, but the textual evidence for the NT is actually the best we have from the entire ancient world, by far. There are lots of other differences...
When I wrote “What is Bunk?” I thought I had a pretty good idea of the distinction between science and pseudoscience, except for some edge cases. Astrology is pseudoscience, astronomy is science. At the time, I was trying to work out a rubric for the edge cases (things like macroeconomics.)
Now, though, knowing a bit more about the natural sciences, it seems that even perfectly honest “science” is much shakier and likelier to be false than I supposed. There’s apparently a high probability that the conclusions of a molecular biology paper will be false—even if the journal is prestigious and the researchers are all at a world-class university. There’s simply a lot of pressure to make results look more conclusive than they are.
In the field of machine learning, which I sometimes read the literature in, there are foundational debates about the best methods. Ideas which very smart and highly credentialed people tout often turn out to be ineffective, years down the road. Apparently smart and accomplished researchers will often claim that some other apparently smart and accomplished researcher is doing it all wrong.
If you don’t actually know a field, you might think, “Oh. Tenured professor. Elite school. Dozens of publications and conferences. Huge erudition. That means I can probably believe his claims.” Whereas actually, he’s extremely fallible. Not just theoretically fallible, but actually has a serious probability of being dead wrong.
I guess the moral is “Don’t trust anyone but a mathematician”?
Safety in numbers? ;)
Perhaps it’s useful to distinguish between the frontier of science vs. established science. One should expect the frontier to be rather shaky and full of disagreements, before the winning theories have had time to be thoroughly tested and become part of our scientific bedrock. There was a time after all when it was rational for a layperson to remain rather neutral with respect to Einstein’s views on space and time. The heuristic of “is this science established / uncontroversial amongst experts?” is perhaps so boring we forget it, but it’s one of the most useful ones we have.
Theorems get published all the time that turn out to have incorrect proofs or to be not even theorems. There was a roughly decade-long period in the late 19th century when there was a proof of the four color theorem that everyone thought was valid. And in the middle of the 20th century there were serious issues with calculating homology groups and cohomology groups of spaces where people kept getting different answers. And then there are a handful of examples where theorems simply got more and more conditions tacked on to them as more counterexamples to the theorems became apparent. The Euler formula for polyhedra is possibly the most blatant such example.
So even the mathematicians aren’t always trustworthy.
Huh? There are no counterexamples to the Euler characteristic of a polyhedron being 2, and the theorem has generalized beautifully. If anything conditions have been loosened as new versions of the theorem have been used in more places.
Well, what do you mean by polyhedron? Consider for example a cubic nut. Does this fit your intuition of a polyhedron? Well, since its genus is 1 rather than 0, it doesn't have Euler characteristic 2. And the original proof that V+F-E=2 didn't handle this sort of case. (That's one reason why people often add convex as a condition, to deal with just this situation even though convex is in many respects stronger than what one needs.) Cauchy's 1811 proof suffers from this problem as do some of the other early proofs (although his is repairable if one is careful). There are also other subtle issues that can go wrong and in fact do go wrong in a lot of the historical versions. Lakatos's book "Proofs and Refutations" discusses this albeit in an essentially ahistorical fashion.
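For concreteness, the generalized relation is the Euler characteristic formula, and a quick worked check shows both cases (the frame numbers assume its top and bottom annular faces are each split into four quadrilaterals, which is one standard way to make it a genuine polyhedron):

```latex
% Generalized Euler formula: \chi = V - E + F = 2 - 2g, where g is the genus.
%
% Cube (convex, genus 0):              V = 8,  E = 12, F = 6    =>  \chi = 8 - 12 + 6   = 2
% Square "picture frame" / nut
% (genus 1, faces split as assumed):   V = 16, E = 32, F = 16   =>  \chi = 16 - 32 + 16 = 0
\[
  \chi \;=\; V - E + F \;=\; 2 - 2g
\]
```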
There are many things called “the Euler formula for polyhedron”, and you’re conflating all of them. Sounds like you missed the point of Proofs and Refutations.
Multiple actual historical versions (such as Cauchy’s proof) are wrong.
I’m pretty sure you’re at least half-joking. But just in case, I need to point out that mathematicians are not immune to this kind of thing.
yep, joke.
To evaluate a contrarian claim, it helps to break down the contentious issue into its contentious sub-issues. For example, contrarians deny that global warming is caused primarily by humans, an issue which can be broken down into the following sub-issues:
Have solar cycles significantly affected earth’s recent climate?
Does cosmic radiation significantly affect earth’s climate?
Has earth’s orbit significantly affected its recent climate?
Does atmospheric CO2 cause significant global warming?
Do negative feedback loops mostly cushion the effect of atmospheric CO2 increases?
Are recent climatic changes consistent with the AGW hypothesis?
Is it possible to accurately predict climate?
Have climate models made good predictions so far?
Are the causes of climate change well understood?
Has CO2 passively lagged temperature in past climates?
Are climate records (of temperature, CO2, etc.) reliable?
Is the Anthropogenic Global Warming hypothesis falsifiable?
Does unpredictable weather imply unpredictable climate?
It's much easier to assess the likelihood of a position once you've assessed the likelihood of each of its supporting positions. In this particular case, I found that the contrarians made a very weak case indeed.
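One minimal way to make "combine the sub-issue assessments" concrete is a Bayesian log-odds update, under the strong simplifying assumption that the sub-issues behave as roughly independent pieces of evidence; the helper name and all the numbers below are made up purely for illustration:

```python
import math

# Illustrative sketch only: combine a prior with per-sub-issue likelihood ratios
# in log-odds space, assuming the sub-issues are (conditionally) independent.

def posterior(prior_prob, likelihood_ratios):
    """Bayesian update: posterior odds = prior odds * product of likelihood ratios."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    log_odds += sum(math.log(lr) for lr in likelihood_ratios)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Hypothetical numbers: each entry is P(observed answer | hypothesis) / P(observed answer | not hypothesis)
sub_issue_lrs = [2.0, 1.5, 3.0, 4.0, 1.2]
print(posterior(0.5, sub_issue_lrs))   # ~0.98 for these made-up inputs
```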
Here's another one, which I call the layshadow heuristic: could an intelligent layperson produce passable, publishable work [1] in that field after a few days of self-study? It's named after the phenomenon in which someone with virtually no knowledge of the field sells the service of writing papers for others who don't want to do the work, is never discovered, and their clients are granted degrees.
The heuristic works because passing it implies very low inferential distance and therefore very little knowledge accumulation.
[1] specifically, work that unsuspecting “experts” in the field cannot distinguish from that produced by “serious” researchers with real “experience” and “education” in that field.
SilasBarta:
I agree this is indicative of serious pathology of one sort or another, but in fairness, I find it plausible that in many fields there might be a very severe divide between real scholarship done by people on the tenure track and the routine drudgery assigned to students, even graduate students who aren’t aiming for the tenure track.
The pathologies of the educational side of the modern academic system are certainly a fascinating topic in its own right.
For how many fields do you think this is possible?
Refer to the linked discussion thread, which links to accounts of actual layshadows—they describe what fields they did this for in detail. It’s as you’d expect: they could pull it off for everything except engineering and the hard sciences.
This sounds like a useful heuristic, but I think there’s another one almost directly opposed to it which is worth keeping in mind. In some branches of psychology, for instance, there is so much low hanging fruit that you’d think that researchers would never have a shortage of work. But instead, entire schools of psychology have persisted based on conclusions drawn from single experiments which were never followed up with the appropriate further research to narrow down the proper interpretation. I’ve been told that sociology and anthropology suffer similar issues.
If a field (or sub-field) doesn’t exhibit enough interest in pursuing low hanging fruit, I think that’s a good sign that there’s a high ratio of ideological rationalization to solid research.
We’d expect most changes to the Earth’s climate to be bad (on net) for its current inhabitants because the Earth has been settled in ways that are appropriate to its current climate. Species are adapted to their current environment, so if weather patterns change and the temperature goes up or down, or precipitation increases or decreases, or whatever else, that’s more likely to be bad for them than good.
Similarly, humans grow crops in places where those crops grow well, live where they have access to water but not too many floods (and where they are on land rather than underwater), and so on. If the climate changes, then the number of places on Earth that would be a good place for a city might not change, but fewer of our existing cities will be in one of those places.
There are some expected benefits of global warming (e.g., “Crop productivity is projected to increase slightly at mid- to high latitudes for local mean temperature increases of up to 1-3°C depending on the crop, and then decrease beyond that in some regions”). But, unsurprisingly, climate scientists are projecting more costs than benefits, and a net cost. News articles are likely to have a further bias towards explaining negative events rather than positive ones, and may be of uneven quality (as waveman pointed out), so if you want a thorough account of the costs and benefits you should look at something like the IPCC report, which was the source of my quote about increased crop productivity.
I think it's mostly an availability bias, since most of the non-climate scientists who have anything to say about global warming are heavily involved in either conservation or economic issues relating to the Global South, both areas likely to suffer under climate change. Do any Canadian/Russian/Northern European posters have any stories about people talking positively of climate change? I've heard a few comments about peach trees and wine in the UK, though it's kinda muted because of the possibility of Gulf stream interactions making us colder. But certainly you hear plenty of arguments along the lines of "why should we care, we'll be well out of it".
EDIT: Come to think of it, given the wealth differentials involved, people probably avoid saying this because it would come across as callous, though with appropriate changes to international trade and development it needn’t be.
It’s a common joke in Alaska that Global Warming can’t come soon enough.
It’s tongue in cheek, generally related to some weather story which has been conflated to involve Accelerated Global Warming (even though the likelihood that the specific event has anything to do with the changing climate is extremely small).
It’s also worth noting that Climate Science as a discipline is extremely young. As far as I know you can’t even get a specific degree for it anywhere yet. It seems right now the best you can do is a meteorology or atmospheric science degree, or some sort of combined meteorology/climatology degree. That will probably change soon (and there may already be programs I am not aware of), but the study of the climate itself is only a few decades old, so expect a lot of poor theories to give way to more solid theories in the coming years.
The most unfortunate thing about climatology is just how politically charged it is while it is so new. I suppose this happens a lot in science, but it is still unfortunate. It is simply begging researchers to fail the second heuristic, and the usual safe haven of public funding is possibly the biggest source of the problem!
Good points. Also the official reports discuss the impact on cold countries and the water-conserving and growth enhancing effects of higher CO2 levels. So they are not blind to positive impacts of CO2/AGW.
I notice also that Canada, as a superficial beneficiary of AGW, has dropped out of the Kyoto treaty. Apart from really cold countries there seem to be few winners.
I’ve been surprised by how bad the majority of scholarship is around the “inspired-by” or “metaphorical” genre of algorithms—neural networks, genetic algorithms, Baum’s Hayek machine and so on. My guess is that the colorful metaphors allow you to disguise any success as due to the technique rather than a grad student poking and prodding at it until a demo seems to work.
Within the metaphorical algorithms, I’ve been surprised at reinforcement learning in particular. It may have started with a metaphor of operant conditioning, but it has a useful mathematical foundation related to dynamic programming.
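To make the dynamic-programming foundation concrete, here is a minimal value-iteration sketch: the Bellman optimality backup applied to a made-up two-state, two-action MDP (all the transition probabilities and rewards are invented for illustration):

```python
import numpy as np

# Minimal value iteration on a tiny, made-up MDP (2 states, 2 actions).
# P[s, a, s'] = transition probability, R[s, a] = expected reward, gamma = discount.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s,a,s') V(s') ]
    Q = R + gamma * P @ V          # shape (states, actions)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("Optimal state values:", V)
print("Greedy policy (action per state):", Q.argmax(axis=1))
```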
This was my area of research in my postgrad years. Specifically ants and birds. I couldn’t agree more. The techniques do work but the scholarship was absolutely abysmal.
Hey, can you expand a bit on this? Is this lecture an example of this?
Blast, I wrote a couple of paragraphs but accidentally bumped the 'cancel' button. So you get dot points this time.
Some of the papers by field leaders misused statistical tools.
Insufficient comparison to the relevant mainstream techniques for the same problem.
Excessive amount of ad hoc algorithm selection and cherry picking of results.
I shouldn't be able to see blatant problems in the leading research in any field when I am a hack who has had to self-teach and pick things up as I go along.
My disillusionment, unfortunately, wasn’t just limited to that one field. I had higher expectations of academic research than what seems to be available in quite a few fields. There are some high quality fields but you still have to be careful when you take things at face value.
Not exactly. It is a bit more of a biology course than just a ‘metaphor’ course. It seems quite good. I’m looking at some of the other lectures now.
I hate it when that happens! There’s a good technique to prevent it from happening again, though: form recovery plugins, like Lazarus.
(Heck, it helped me just now; I accidentally pressed cancel on this very comment a moment ago.)
Thanks, installed. :)
Heh. Ultimately, all of AI is in this genre. A particularly bad aspect of this is that it usually means people choose their research problems based on what they think their approach can solve.
I'll briefly plug my proposal for AI-related inquiries, which explicitly rejects the "metaphorical" approach by actually defining the question before finding the solution. At the same time it doesn't rule out NNs, GAs, etc., but requires hard proof, in the form of an ungameable compression score, of their quality.
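The compression-score idea can be illustrated (this is a toy sketch of the general principle, not the linked proposal itself) by charging a predictor for the number of bits, i.e. its cumulative negative log2-likelihood, needed to encode a dataset: a model that captures real structure beats a naive baseline, and every bit must be paid for, which is what makes such scores hard to game. The sequence, the Laplace-rule "model", and the helper names below are all invented for illustration:

```python
import math

# Toy illustration: score a predictor by the number of bits it needs to encode a
# sequence, i.e. its cumulative negative log2-likelihood (code length).

def code_length_bits(sequence, predict):
    """Total bits to encode `sequence` of 0/1 symbols, where `predict(history)`
    returns the predicted probability that the next symbol is 1."""
    bits, history = 0.0, []
    for x in sequence:
        p1 = predict(history)
        p = p1 if x == 1 else 1.0 - p1
        bits += -math.log2(p)
        history.append(x)
    return bits

uniform = lambda history: 0.5                                      # baseline: 1 bit/symbol
laplace = lambda history: (sum(history) + 1) / (len(history) + 2)  # Laplace's rule of succession

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 20   # biased toward 1s

print("baseline bits:", round(code_length_bits(data, uniform), 1))
print("model bits:   ", round(code_length_bits(data, laplace), 1))
```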
Modern AI is an odd combination of statistics, applied math, discrete math/combinatorics and logic. My theory is the only reason AI is a subfield of Computer Science at all is founder bias (Turing).
Totally agree. My slogan is that for AI to succeed it has to become an empirical science: it should use math, but only to the extent that the math is useful to describe reality. And it should be curiosity-driven and not application-driven like almost all modern research in computer vision and natural language processing.
I’m curious about how to not die (as an individual and a species). Would that count as curiosity or application drive? :P
I responded here
I respond to Hanson’s response here.
You recast his argument as proposing specialization, though he nowhere uses the word “specialize” or any variant, and I think it is misleading, overly forgiving actually, to think of it as specialization, as I will explain.
One of the main early proponents of the idea of specialization, who is responsible for its current well-deserved high esteem, was Adam Smith, but he was talking about specialization within a market. Now, in a market, it is very strongly the case that non-specialists constantly judge the quality of your work, because shoddy specialized work is quickly perceptible to a non-specialist. For example, I know nothing about how cars are built these days, but I can easily tell whether my car runs, whether it breaks down, whether repairs are expensive, how much gas it uses, and so on. There is little, perhaps nothing at all, that really matters about the quality of a car that cannot ultimately be discovered by non-specialists without all that much difficulty.
So in the case of specialization in a market, the output of specialists is thoroughly and intensely judged by non-specialists, and they are competent to judge. What Robin Hanson is proposing is not this. He is not proposing a system of specialization in which the output of specialists is thoroughly and intensely judged by non-specialists who are competent to judge. He is proposing something close to the opposite of this. He is proposing that nonspecialists should not consider themselves competent to judge the output of specialists. He is proposing that specialists be deferred to, and that non-specialists, if they find the output of specialists to be of unacceptable quality, ignore their own senses and reason and accept the output anyway. He could not be clearer:
The system of specialization that he advocates is massively different from the rightly highly-valued system of specialization in the marketplace, in which every specialist is continually judged by his non-specialist customers. What Hanson advocates is deferring to authority. He does not clearly explain who these authorities are—sometimes he uses the word “academic” and sometimes the word “expert”. He briefly mentions that a person who has educated himself in the subject can learn how to identify an expert. But this is inconsistent with his overall message—for, if it were really up to each individual to decide for himself who is and who is not an “expert”, then that would completely undermine Hanson’s recommendation that one defer to “experts”, since, obviously, if I am allowed by Hanson to pick who I consider an “expert”, I am likely to pick the people who I find convincing, and therefore the people who I agree with, in which case I have in effect placed my own opinion back on the throne from which he has taken pains to remove it. This seems an inconsistency, and I resolve it by understanding Hanson to mean that we must defer to authority defined, not by ourselves, but by some outside system for establishing authority. And since he uses the term, “academic”, what can he mean but the process that defines academics—most importantly, the process of tenure.
Long story short, Hanson is advocating a system that decides truth by process of tenure. The problem with tenure is that old, tenured faculty decide who will be tenured, and they are likely to do so in a way that preserves dogma indefinitely. We have a long-lasting and extreme model of this: the Roman Catholic Church selects priests and selects Popes by a similar process, and we can see the results. If the universities have managed to escape the fate of the Church, I think it is because they have failed to perfectly implement the system which Hanson appears to advocate. Universities, after all, must teach students, and students are outsiders, they are non-academic customers who are continually judging the quality of a college and deciding where to attend. I can go on to explain how this healthy influence has lately been deteriorating, especially in certain departments, but my comment has gone on long enough.
True that I was guessing with the specialization resolution. Also true that another way you can resolve it is by saying that Hanson thinks you should listen to academics, and since Robin himself is an esteemed academic, he is allowed to have such opinions. I am forgiving because based on his other writings (e.g., love for prediction markets), I don’t think the latter possibility is the case; I don’t think in general Hanson wants “truth by tenure”, as you describe. Also I think Hanson would agree it is a stretch to say that his specialty is labor econ.
I think what we all want here are good rating systems to judge ideas. I am guilty of this too (see above), but it’s not clear that bickering over whose sol’n to the current mess is less inconsistent is going to get us anywhere.
Andy,
I think it’s not entirely clear what exactly you mean by “specialization.” In my post, I’m not addressing the question of when (if ever) it is advisable to stick your nose into topics outside of your own area of expertise, which is a fascinating topic in its own right. Rather, I am assuming that you have decided to do so and that your goal is to form a maximally accurate opinion about some such question.
So overall, I think my post is orthogonal to the issue of how much you should push yourself towards specialization, except insofar as it stops being relevant if you decide to pursue the most extreme and absolute specialization possible.
As an economist myself (though a microeconomist), I share some of your concerns about macroeconomics. The way support for and opposition to the US’s recent stimulus broke down along ideological lines was wholly depressing.
I think the problem for macro is that macroeconomists have almost no data to work with. You can’t run a controlled experiment on a whole country, and countries tend to be very different from each other, which means there are a lot of confounding factors to deal with. And without much evidence, how could they hope to generate accurate beliefs?
Add to that the raw complexity of what economists study. The human brain is the most complex object known to exist, and the global economy is about 7 billion of them interacting with each other.
None of this is meant to absolve macroeconomics; it may just be that meaningful study in this area isn’t possible. Macro has made some gains: there’s a list of things that don’t work in development economics, and stabilisation policy is better than it was in the 1970s. But apart from that? Not much.
James,
Nice of you to drop by and comment—I still remember that really interesting discussion about price indexes we had a few months ago!
One thing I find curious in economics is that basically anything studied under that moniker is considered to belong to a single discipline, and economists of all sorts apparently recognize each other as professional colleagues (even when they bitterly attack each other in ideological disputes). This despite the fact that the intellectual standards in various subfields of economics are of enormously different quality, ranging from very solid to downright pseudoscientific. And while I occasionally see economists questioning the soundness of their discipline, it’s always formulated as questioning the soundness of economics in general, instead of a more specific and realistic observation that micro is pretty solid as long as one knows and respects the limitations of one’s models, whereas macro is basically just pseudoscience.
Is the tendency for professional solidarity really that strong, or am I perhaps misperceiving this situation as an outsider?
It may have more to do with compartmentalisation than anything else. Economists focus their attention on their own sub-disciplines, so the micro guys don’t pay much attention to what the macro guys are doing. I’m not sure that’s especially unusual in any intellectual discipline though.
Secondly, macro is what most people think of when they think of economics. So laypeople talk about the failings of economics when they’re really talking about fairly small parts of the discipline in the grand scheme of things.
As to why economists don’t pick up on this more often, I’m not really sure. Part of it is that debates on the epistemological merits of different methodologies don’t really get a lot of play among the general public for some reason.
That’s an interesting point. I will note that there is both bad and somewhat better macroeconomic research; the better research just focuses a lot more on having clear “microfoundations”.
I’m surprised that you don’t mention the humanities as a really bad case where there is little low-hanging fruit and high ideological content. Take English literature for example. Barrels of ink have been spilled in writing about Hamlet, and genuinely new insights are quite rare. The methods are also about as unsound as you can imagine. Freud is still heavily cited and applied, and postmodern/poststructuralist/deconstructionist writing seems to be accorded higher status the more impossible to read it is.
Ideological interest is also a big problem. This seems almost inevitable, since the subject of the humanities is human culture, which is naturally bound up with human ideals, beliefs, and opinions. Academic disciplines are social groups, so they have a natural tendency to develop group norms and ideologies. It’s unsurprising that this trend is reinforced in those disciplines that have ideologies as their subject matter. The result is that interpretations which do not support the dominant paradigm (often a variation on how certain sympathetic social groups are repressed, marginalized, or “otherized”), are themselves suppressed.
One theory of why the humanities are so bad is that there is no empirical test for whether an answer is right or not. Incorrect science leads to incorrect predictions, and even incorrect macroeconomics leads to suboptimal policy decisions. But it’s hard to imagine what an “incorrect” interpretation of Hamlet even looks like, or what the impact of having an incorrect interpretation would be. Hence, there’s no pressure towards correct answers that offsets the natural tendency for social communities to develop and enforce social norms.
I wonder if “empirical testability” should be included alongside the low-hanging fruit heuristic.
AShepard:
Well, I have mentioned history. Other humanities can be anywhere from artsy fields where there isn’t even a pretense of any sort of objective insight (not that this necessarily makes them worthless for other purposes), to areas that feature very well researched and thought-out scholarship if ideological issues aren’t in the way, and if it’s an area that hasn’t been already done to death for generations (which is basically my first heuristic).
Perhaps surprisingly, it doesn’t seem to me that empirical testability is so important. Lousy work can easily be presented with plenty of empirical data carefully arranged and cherry-picked to support it. To recognize the problem in such cases and sort out correct empirical validation from spin and propaganda is often a problem as difficult as sorting out valid from invalid reasoning in less empirically-oriented work.
It does make them, if not worthless, at least worth less for other purposes.
I spent a lot of time in college in the humanities: art (Bachelor of Fine Art degree, eventually), philosophy, English (beyond the basic Comp and Rhetoric classes), etc.
The less objective the standards applied, the worse the product and the less effort the artist/author (and yes, I’m generalizing here) put into his work.
I had one class at a very anti-objective school where the teacher (and I almost never use that term, especially for instructors at that school) was fairly strict about meeting her standards, and the final critiques were amusing. Kids who skated by in other classes on a modicum of effort, little talent and a tractor load of post-modernist bullshit (mostly regurgitated and badly understood) got hammered for not working to the fairly loose requirements.
Art is not some special case of human effort where intellect and informed taste have no bearing. It is currently (since the ~50s) a place where intellect and informed taste have been told they aren’t welcome so the children could keep playing with their mud. And I don’t say this out of bitterness—I have very little talent for the “high” arts, and merely wish the people producing it these days were better at thinking than they are.
I disagree on the “artsy” fields. I feel like art history has reached a dead end because of the structure of the art market. As the area considered “art” for academic purposes has become more concentrated and expensive, scholarship has been undermined and I think we’ve seen a general unwillingness to engage new topics simply because they don’t lend themselves very well to museums or gallery sales.
Well put. You’ve concisely stated a heuristic that is very powerful but rarely used where it needs to be.
Be warned: it’s actually a source of sadness for me whenever I start asking the question, “if X performed Y badly, what would be the impact?”—because the conclusion is often “not much, so why does the world create incentives that led to them trying to do Y ‘well’ in the first place?”
I’m not sure that conceptual soundness has any meaning in fields which don’t even in principle admit to predictive power or provably correct solutions. It might be possible to imagine a rigorous approach to, say, textual criticism, but in actual practice the work that gets done is approached along aesthetic lines, and the people running humanities departments seem aware of and happy with this.
Of course, this wouldn’t apply to the related field of social science, and many of its subfields do seem to fail both of Vladimir’s tests.
Sounds like a good idea until you realize that you are throwing out most math and philosophy with the bathwater.
How about accepting either empirical testability or a requirement that all claims be logically proven? (Much of microeconomics and game theory slides in under ‘provable’ rather than ‘testable’. Quite a bit of philosophy fails under both criteria, but some of it approaches ‘provable’.)
Even in mathematics, you can find contrarian opinions that much of the field is meaningless. What we have is (or at least seems to be) logically proved from the basis of certain assumptions, but we could as easily have picked very different assumptions and proved different theorems instead. There is a prevailing opinion that certain assumptions (the mainstream foundations of mathematics) are correct or at least useful, but correctness ultimately reduces to an aesthetic judgement, and usefulness is known to be exaggerated.
Even better: demand that there be strict rules in the discipline which the research must obey—be it logical provability, empirical testability, or whatever else. It is still possible to make up unreasonable rules, but producing bullshit is a lot easier without any rules, which is the case with deconstructionism and related fields.
prase:
Strict formal rules are a two-edged sword. If well designed, they indeed serve as a powerful barrier against nonsense. However, they can also be perverted, with extremely bad results.
In many disciplines that have been affected by the malaises discussed in this thread, what happens is that a perverse formal system develops, which then serves as a template for producing impressive-looking bullshit work. This sometimes leads to the very heights of ass-covering irresponsibility, since everyone involved (authors, editors, reviewers, grant committees…) can, if questioned, hide behind the fact that the work satisfies all the highest professional expert standards. At worst, these perverse formal standards can also serve as a barrier against actual quality work that doesn’t conform to their template.
Just to be clear: by strict rules I don’t mean anything with significant subjective judgement involved, like peer review. I mean rather things like demanding testability, mathematical proofs, logical consistency, and such. Also, I mean not so much rules governing the social life of the respective community as rules applied to the hypotheses themselves.
Also, I haven’t said that rules are sufficient. One can still publish trivial theories which nobody is interested in testing, mathematical proofs of obscure, unimportant theorems, or logically consistent tautologies. But at least the rules remove arbitrariness and make it possible to objectively assess quality and to decide whether a hypothesis is good or bad, according to the standards of the discipline.
The discipline’s standard of a good hypothesis may not always coincide with a true hypothesis, but I suspect that if the standards of the discipline are strict enough, either the correspondence is there, or it is easily visible that the discipline is based on wrong premises, because it endorses some easily identifiable falsehoods. (It would be too big a coincidence if a formal system regularly produced false statements, but no trivially false ones.)
On the other hand, when the rules aren’t formal enough, the discipline still makes complex false claims, but nobody can clearly demonstrate that its methods are unreliable, because the methods (if there are any) can always be flexed to avoid producing embarrassingly trivial errors.
prase:
Trouble is, there are examples of fields where the standards satisfy all this, but the work is nevertheless misleading and remote from reality.
Take the example of computer science, which I’m most familiar with. In some of its subfields, the state of the art has reached a dead end, in that any obvious path for improving things hits against some sort of exponential-time or uncomputable problem, and the possible heuristics for getting around it have already been explored to death. Breaking a new path in this situation could be done only by an extraordinary stroke of genius, if it’s possible at all.
So what people do is to propose yet another complex and sophisticated but ultimately feeble heuristic wrapped into thick layers of abstruse math, and argue that it represents an improvement of some performance measure by a few percentage points. Now, if you look at a typical paper from such an area, you’ll see that the formalism is accurate mathematically and logically, the performance evaluation is carefully measured over a set of standard benchmarks according to established guidelines, and the relevant prior work is meticulously researched and cited. You have to satisfy these strict formal standards to publish.
Trouble is, nearly all this work is worthless, and quite obviously so. From a practical engineering perspective, implementing these complex algorithms in a practical system would be a Herculean task for a minuscule gain. The hypertrophied formalism often uses numerous pages of abstruse math to express ideas that could be explained intuitively and informally in a few simple sentences to someone knowledgeable in the field—and in turn would be immediately and correctly dismissed as impractical. Even the measured performance improvements are rarely evaluated truly ceteris paribus and in ways that reveal all the strengths and weaknesses of the approach. It’s simply impossible to devise a formal standard that would ensure that reliably—these things are possible to figure out only with additional experimentation or with a practical engineering hunch.
Except perhaps in the purest mathematics, no formal standard can function well in practice if legions of extraordinarily smart people have the incentive to get around it. And if there are no easy paths to quality work, the “publish or perish” principle makes it impossible to compete and survive unless one exerts every effort to game the system.
That’s right, and I don’t disagree. Formal standards are never a panacea. But do you suppose that, in the cases you describe, things would go better without those formal standards?
I am still not sure we mean exactly the same thing when talking about formal rules. Take the example of pure mathematics, which you have already mentioned. Surely, abstruse formalist descriptions of practically uninteresting and maybe trivial problems appear there too, now and then. And revolutionary breakthroughs perhaps more often result from the intuitive insights of geniuses than from diligent, rigorous formal work. Many papers, in all fields, could be made more readable, accessible, and effective in disseminating new results by shedding the lofty jargon of scientific publications. But mathematicians certainly wouldn’t do better if they got rid of mathematical proofs.
I do not suggest that all ideas in respectable fields of science should be propagated in the form of publications checked against lists of formal requirements: citation index, proofs of all logical statements, p-values below 0.01, certificates of double-blindedness. Not in the slightest. Conjectures, analogies, illustrations, whatever enhances understanding is welcome. I only want the possibility of applying the formal criteria. If a conjecture is published and turns out to be interesting, there should be an ultimate method to test whether it is true. If there is an agreed method to test results objectively, people aren’t free to publish whatever they want and expect never to be proven wrong.
If you compare the results of computer science to postmodern philosophy, you may see my point. In CS most results may be useless and incomprehensible. In postmodern philosophy, which is essentially without formal rules, all results are useless and incomprehensible, and as a bonus, meaningless or false.
I agree about the awful state of fields that don’t have any formal rules at all. However, I’m not concerned about these so much because, to put it bluntly, nobody important takes them seriously. What is in my opinion a much greater problem are fields that appear to have all the trappings of valid science and scholarship, but where it’s in fact hard for an outsider to evaluate whether and to what extent they’re actually cargo-cult science. This is especially so because the results of some such fields (most notably economics) are used as a basis for real-world decision-making with far-reaching consequences.
Regarding the role of formalism, mathematics is unique in that the internal correctness of the formalism is enough to establish the validity of the results. Sure, they may be more or less interesting, but if the formalism is valid, then it’s valid math, period.
In contrast, in areas that make claims about the real world, the important thing is not just the validity of the formalism, but also how well it corresponds to reality. Work based on a logically impeccable formalism can still be misleading garbage if the formalism is distant enough from reality. This is where the really hard problem is. The requirements about the validity of the formalism are easily enforced, since we know how to reduce those to a basically algorithmic procedure. What is really hard is ensuring that the formalism provides an accurate enough description of reality—and given an incentive to do so, smart people will inevitably figure out ways to stretch and evade this requirement, unless there is a sound common-sense judgment standing in the way.
Further, more rigorous formalism isn’t always better. It’s a trade-off. More effort put into greater formal rigor—including both the author’s effort to formulate it, and the reader’s effort to understand it—means fewer resources for other ways of improving the work. Physicists, for example, normally just assume that the functions are well-behaved enough in a way that would be unacceptable in mathematics, and they’re justified in doing so. In more practical technical fields like computer science, what matters is whether the results are useful in practice, and formal rigor is useful if it helps avoid confusion about complicated things, but worse than useless if applied to things where intuitive understanding is good enough to get the job done.
The crucial lesson, like in so many other things, is that whenever one deals with the real world, formalism cannot substitute for common sense. It may be tremendously helpful and enable otherwise impossible breakthroughs, but without an ultimate sanity check based on sheer common sense, any attempt at science is a house built on sand.
I don’t think we have a real disagreement. I haven’t said that more rigorous formalism is always better, quite the contrary. I was writing about objective methods of looking at the results. Physicists can ignore mathematical rigor because they have experimental tests which finally decide whether their theory is worth attention. Computer scientists can finally write down their algorithm and see whether it works. These are objective rules which validate the results.
Whether the rules are sensible or not is decided by common sense. My point is that it is easier to decide that about the rules of the whole field than about individual theories, and that’s why objective rules are useful.
Of course, saying “common sense” does in fact mean that we don’t know how we decided, and it doesn’t specify the judgement very precisely. One man’s common sense may be another man’s insanity.
Oh yes, I didn’t mean to imply that you disagreed with everything I wrote in the above comment. My intent was to give a self-contained summary of my position on the issue, and the specific points I raised were not necessarily in response to your claims.
Pure mathematics per se may not be empirically testable, but once you establish certain correspondences—small integers correspond to pebbles in a bag, or increments to physical counting devices—then the combination of conclusion+correspondence often is testable, and often comes out to be true.
In some cases, the combination of correspondences + a mathematically true conclusion gives a testably false conclusion about the real world, such as the Banach-Tarski paradox.
The problem here isn’t the mathematics, but the correspondence. Physical balls are only measurable sets to a first approximation.
Yes.
However, imagine some abstruse mathematical theory that, in some “evaluate it on its own terms” sense, is true, but every correspondence that we attempt to make to the empirical world fails. I would claim that the failure to connect to an empirical result is actually a potent criticism of the theory—perhaps a criticism of irrelevance rather than falsehood, but a reason to prefer other fields within mathematics nevertheless.
I don’t know of any such irrelevant mathematical theories, and to some extent, I believe there aren’t any. The vast majority of current mathematical theories can be formalized within something like the Calculus of Constructions or ZF set theory, and so they could be empirically tested by observing the behavior of a computing device programmed to do brute-force proofs within those systems.
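To make that idea concrete, here is a minimal toy sketch of my own: it substitutes a tiny Hilbert-style propositional calculus (axiom schemas K and S, plus modus ponens) for ZF or the Calculus of Constructions, so it only illustrates the principle rather than the real thing. Whether a given formula ever shows up among the machine’s outputs is an observable fact about the device.

    # A toy illustration, not from the discussion above: enumerate theorems of a
    # tiny Hilbert-style propositional calculus by closing axiom instances under
    # modus ponens.  Whether a given formula ever appears is an observable fact
    # about what this program does -- the "empirical" face of a formal system.

    from itertools import product

    ATOMS = ("p", "q")

    def formulas(depth):
        """All implicational formulas over ATOMS, nested up to `depth` levels."""
        fs = set(ATOMS)
        for _ in range(depth):
            fs |= {("->", a, b) for a, b in product(fs, repeat=2)}
        return fs

    def axiom_instances(fs):
        """Instances of the schemas K: A->(B->A) and S: (A->(B->C))->((A->B)->(A->C))."""
        inst = set()
        for a, b in product(fs, repeat=2):
            inst.add(("->", a, ("->", b, a)))                     # K
        for a, b, c in product(fs, repeat=3):
            inst.add(("->", ("->", a, ("->", b, c)),
                            ("->", ("->", a, b), ("->", a, c))))  # S
        return inst

    def theorems(depth=1, rounds=3):
        """Close the axiom instances under modus ponens for a fixed number of rounds."""
        thms = axiom_instances(formulas(depth))
        for _ in range(rounds):
            thms |= {x[2] for x in thms
                     if isinstance(x, tuple) and x[0] == "->" and x[1] in thms}
        return thms

    # "Empirical test": does the machine ever derive p -> p?
    print(("->", "p", "p") in theorems())   # True

Run as written, it reports whether p -> p is eventually derived; the same scheme, scaled up to a serious foundational system, is what would make talk of “empirically testing” mathematics more than a metaphor.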
My guess is that mathematicians’ intuitions are informed by a pervasive (yet mostly ignored in the casual philosophy of mathematics) habit of “calculating”. Calculating means different things to different mathematicians, but computing with concrete numbers (e.g. factoring 1735) certainly counts, and some “mechanical” equation juggling counts. The “surprising utility” of pure mathematics derives directly from information about the real world injected via these intuitions about which results are powerful.
This suggests that fields within mathematics that do not do much calculating or other forms of empirical testing might become decoupled from reality and essentially become artistic disciplines, producing tautology after tautology without relevance or utility. I’m not deep enough into mathematical culture to guess how often that happens or to point out any subdisciplines in particular, but a scroll through arxiv makes it look pretty possible: http://arxiv.org/list/math/new
In my perfect world, all mathematical papers would start with pointers or gestures back to the engineering problems that motivated this problem, and end with pointers or gestures toward engineering efforts that might be forwarded by this result.
Combined with
is an intuition I agree with. As long as “empirically testable” includes “observing the behavior of a computing device”, that seems to be exactly where the “surprising utility” comes from.
Then you write
which is somewhat worrying, because the few papers I scrolled through seemed computationally implementable, assuming correctness (which is a mighty high standard for a preprint server in any case).
My experience has been that many mathematicians write articles with the assumption that if the reader can read the article, they also know what it’s “good for” in the engineering sense (which seems a somewhat delusional assumption to make). I think that if you read grant proposals, you’d get a better sense of the subfields that are “coupled with reality”—though again, only to a first approximation. Politics is the mind-killer almost everywhere.
I don’t have any good examples of actual irrelevant/artistic mathematics, but possibly:
“Unipotent Schottky bundles on Riemann surfaces and complex tori” http://arxiv.org/abs/1102.3006
would be an example of how opaque to outsiders (and therefore potentially irrelevant) pure mathematics can get. I’m confident (primarily based on surface features) that this paper in particular isn’t self-referential, but I have no clue where it would be applied (cryptography? string theory? really awesome computer graphics?).
...
Why do mathematicians put up with this? I’ll need to describe a mathematical culture a little first. These days mathematicians are divided into little cliques of perhaps a dozen people who work on the same stuff. All of the papers you write get peer reviewed by your clique. You then make a point of reading what your clique produces and writing papers that cite theirs. Nobody outside the clique is likely to pay much attention to, or be able to easily understand, work done within the clique. Over time people do move between cliques, but this social structure is ubiquitous. Anyone who can’t accept it doesn’t remain in mathematics.
Among other things, it sounds like you’re expecting inferential distances to be short.
My intent was to demonstrate a particular possible threat to the peer review system. As the number of people who can see whether you’re grounded in reality gets smaller, the chance of the group becoming an ungrounded mutual admiration society gets larger. I believe one way to improve the peer review system would be to explicitly claim that your work is motivated by some real-world problem and applicable to some real-world solution, and back those claims up with a citation trail for would-be groundedness-auditors to follow.
Actually, there’s a vaguely similar preprint: http://arxiv.org/PS_cache/arxiv/pdf/1102/1102.3523v1.pdf
The danger I see is mathematicians endorsing mathematics research because it serves explicitly mathematical goals. It’s possible, even moderately likely, that a proof of the Riemann Hypothesis (for example) would be relevant to something outside of mathematics. Still, I’d like us to decide to attack it because we expect it to be useful, not merely because it’s difficult and therefore allows us to demonstrate skill.
Why such prejudice against “explicitly mathematical goals”? Why on Earth is this a danger? One way or another, people are going to amuse themselves—via art, sports, sex, or drugs—so it might as well be via mathematics, which even the most cynically “hard-headed” will concede is sometimes “useful”.
But more fundamentally, the heuristic you’re using here (“if I don’t see how it’s useful, it probably isn’t”) is wrong. You underestimate the correlation between what mathematicians find interesting and what is useful. Mathematicians are not interested in the Riemann Hypothesis because it may be useful, but the fact that they’re interested is significant evidence that it will be.
What mathematics is, as a discipline, is the search for conceptual insights on the most abstract level possible. Its usefulness does not lie in specific ad-hoc “applications” of particular mathematical facts, but rather in the fact that the pursuit of mathematical research over a span of decades to centuries results in humans’ possessing a more powerful conceptual vocabulary in terms of which to do science, engineering, philosophy, and everything else.
Mathematicians are the kind of people who would have invented negative numbers on their own because they’re a “natural idea”, without “needing” them for any “application”, back in the day when other people (perhaps their childhood peers) would have seen the idea as nothing but intellectual masturbation. They are people, in other words, whose intuitions about what is “natural” and “interesting” are highly correlated with what later turns out to be useful, even when other people don’t believe it and even when they themselves can’t predict how.
This is what we see in grant proposals—and far from changing the status quo, all it does is get the status quo funded by the government.
It’s easier to concoct “real-world applications” of almost anything you please than it is to explain the real reason mathematics is useful to the kind of people who ask about “real-world applications”.
From an assumption of wealth, that we humans have plenty of time and energy, I agree with you—the fact that someone is curious is sufficient reason to spend effort investigating. However (and this is a matter of opinion), we’re not in a position of wealth. Rather, we currently have important scarcities of many things (life), we have various ongoing crises, and most of our efforts to better ourselves in some way are also digging ourselves deeper in some other way, manufacturing new crises that will require human ingenuity to address.
Improvements to the practice of peer review would be valuable, to achieve more truth, more science, more technology.
You’re putting words in my mouth by claiming I’m following an “inferential distances are short” heuristic. That would be like additionally requiring that the groundedness-auditor bottom out in the real world after a short sequence of citations. I never said anything like that.
Your claim that all mathematicians somehow have accurate intuitions about what will eventually turn out to be useful is dubious. Mathematicians are human, and information about the world has to ultimately come from the world.
Earlier I suggested “computations”, that is, mechanical manipulations of relatively concrete mathematical entities, as the path for information from the world to inform mathematicians’ intuitions. However, mathematicians rarely publish the computations motivating their results, which is the whole point that I’m trying to make.
Adding the quantifier “all” is an unfair rhetorical move, of course; but anyway, here we come to the essence of it: you simply do not see the relationship between the thoughts of mathematicians and “the world”. Sure, you’ll concede the usefulness of negative numbers, calculus, and maybe (some parts of) number theory now, in retrospect, after existing technologies have already hit you over the head with it; but when it comes to today’s mathematics, well, that’s just too abstract to be useful.
Do you think you would have correctly predicted, as a peasant in the 1670s, the technological uses of calculus? I’m not even sure Newton or Leibniz would have.
Human brains are part of the world; information that comes from human thought is information about the world. Mathematicians, furthermore, are not just any humans; they are humans specifically selected for deriving pleasure from powerful insights.
Every proof in a mathematics paper is shorthand for a formal proof, which is nothing but a computation. The reason these computations aren’t published is that they would be extremely long and very difficult to read.
I think we’ve both made our positions clear; harvesting links from earlier in this thread, I think my worry that mathematics might become too specialized is perennial:
http://www-personal.umich.edu/~jlawler/von.neumann.html
http://www.math.rutgers.edu/~zeilberg/Opinion104.html
http://bentilly.blogspot.com/2009/11/why-i-left-math.html
Regarding the distinction between computation and proving, I was attempting to distinguish between mechanical computation (such as reducing an expression by applying a well-known set of reduction rules to it) and proving, which (for humans) is often creative and does not feel mechanical.
By “the computations motivating their results”, I mean something like Experimental Mathematics: http://www.experimentalmath.info/
The issue here is about the “usefulness” of mathematical research, and its relationship to the physical world; not whether it is too “specialized”. Far from adding clarity on the intellectual matter at hand, those links merely suggest that what’s motivating your remarks here is an attitude of dissatisfaction with the mathematical profession that you’ve picked up from reading the writings of disgruntled contrarians. They may have good points to make on the sociology of mathematics, but that’s not what’s at issue here. Your complaint wasn’t that mathematicians don’t follow each other’s work because they’re too absorbed in their own (which is the phenomenon that Zeilberger and Tilly complain about); it was that the relationship between modern mathematics and “the world” is too tenuous or indirect for your liking. On that, only the Von Neumann quote (discussed here before) is relevant; and the position expressed therein strikes me as considerably more nuanced than yours (which seems to me to be obtainable from the Von Neumann quote by deleting everything between “l’art pour l’art” and “whenever this stage is reached”).
As for computation: if your concern was the ultimate empirical “grounding” of mathematical results, the fact that all mathematical proofs can in principle be mechanically verified (and hence all mathematical claims are “about” the behavior of computational machines) answers that. Otherwise, you’re talking about matters of taste regarding areas and styles of mathematics.
The inferential chain is: too specialized leads to small cliques of peers who can review your work, which allows mutual admiration societies to start up and survive, which leads to ungroundedness, which leads to irrelevance.
Again, your claim that I think the relationship between modern mathematics and the world is too indirect is simply putting words in my mouth. I have no difficulty with indirect or long chains of relevance; my problem is with “mathematics for mathematics’ sake”, particularly if it is non-auditable by outsiders. Would you fund “quilting for quilting’s sake”, if the quilt designs were impractically large and never actually finished or used to warm or decorate?
Here is a way that I think our positions could be reconciled: if there were studies on the “spin offs” of funding mathematicians to pursue their intuitions (deciding who is a mathematician based on some criterion, perhaps a degree in mathematics and/or a Putnam-like test), then citing those studies would be sufficient for my purposes. I believe this is far less restrictive than current grants, which (as you say) demand that the grant-writer confabulate very specific applications; graph theory funded by sifting social networks for terrorists, for example.
Non-functional art quilting
Found while looking for the first link, and included for pretty
I think the crucial thing is not so much demonstrating that there might be some use for some not-obviously useful math—I doubt there’s any way to do that usefully in the short run. An accurate answer can’t be known for any but the most obvious cases, and just making up something that sounds vaguely plausible is all too easy, especially if money is riding on the answer.
Instead, I recommend working on understanding the process by which uses are found for pure math, and, if it makes sense, cultivating that process.
The non-sequitur occurs in the third step (or possibly the second, depending on what you’ve built into the meaning of “mutual admiration society”). The “mutual admiration” in question is based largely, even mostly, on the work that people do within the clique, and not simply on membership. Both within and between cliques, “relevance” is regulated by the mechanism of status: those mathematicians (and cliques) working on subjects that the smartest mathematicians find interesting (which, as I’ve argued, is the appropriate test for “relevance” in this context) will tend to rise in status, while areas where “important” problems are exhausted will likewise lose prestige. This doesn’t work perfectly, and there is some random noise involved, of course, but in the aggregate statistical sense, this is basically how it works. Contrary to the conventional cynical wisdom, the prestige of mathematical topics does not drift randomly like clothing fashion (unless the latter has patterns that I don’t know about), but is instead correlated with (ultimate) usefulness by means of interestingness.
It’s already easy to trace the intellectual ancestry of any mathematics paper all the way back to counting: you simply identify the branch of mathematics that it’s in, look up that branch in Wikipedia, and click a few times. So what else do you mean by “groundedness”, if not that subjects which are fewer inferential steps away from counting are more “grounded” than subjects which are more steps away?
I still don’t understand why you have a problem with “mathematics for mathematics’ sake”. Is interestingness not a value in itself? For me it certainly is, and this is the core of my argument for academic/high-IQ art—an argument which also applies to mathematics, for all that mathematics also benefits from utilitarian arguments. “Quilting for quilting’s sake” as you describe it just sounds like a form of visual art, and visual art is something I would indeed fund.
What would count as a successful “spin off” in your view?
In your first paragraph, you have excellently made my point; the social process of mathematics depends on between-clique evaluations. To the extent that those between-clique evaluations are impossible, the social process of mathematics becomes more like clothing fashion, and mathematical goals become decoupled from engineering or science applications.
As I said previously, my criticism of “mathematics for mathematics’ sake” is based on an attitude of scarcity—which I admit is an attitude rather than a fact. Similarly, I would tax visual art rather than subsidize it.
Successful spin offs of mathematics would be applications of mathematics to fields that have better arguments that their work is not idle amusement, status-seeking or fashion-following.
But they’re never impossible, and of necessity they’re always going on (since university positions, grant dollars, etc. are limited in number). The only question can be what criteria are being used. While it is conceivable that some fields could end up using criteria that are “arbitrary” (i.e. not ultimately correlated with fundamental values), my argument is that this is not the case in mathematics, due mainly to the strong IQ barrier to entry. (Generally, my view is that the higher someone’s IQ, the more strongly impressing them is correlated with satisfying fundamental values.)
Mathematical cliques are not islands; in fact to the extent they become isolated, they lose prestige! There is a continuum of relatedness, with cliques clustering into “supercliques” of various levels. Mathematicians, particularly those with a taste for cynical humor, will joke about how it is supposedly impossible to understand the work of neighboring cliques; but the reality is that their ability to understand varies more or less continuously with distance, and more or less increases with one’s rank within a clique or superclique.
To summarize, there shouldn’t be much to worry about so long as status in mathematics remains correlated more strongly with IQ than with other variables such as social/political skills. (Given that they’re still willing to (try to) award prizes to someone like Perelman, I’d say the field is in pretty good shape.)
This is most extraordinary. Just how prosperous would we have to get before you would allow people to have tax-free fun?
Assuming you meant it literally (and not just as a signal of something else), this scares the hell out of me. It sounds like we may have practically-incompatible utility functions.
(How would that even be implemented? By paying inspectors to come to people’s houses to check whether they’ve drawn any pictures that day? Extra sales tax on art supplies?)
Allow the visual art industry to have all the usual taxes on goods sold, exhibition prices and education. Don’t subsidise the field at all via grants or via university tax breaks. No commando raids on kindergartens to catch off-the-books, under-the-table finger painters required.
That’s the status quo. The proposal, as I understood it, was to have additional taxes specific to art.
No, the status quo is heavy subsidization. I have an essay on how there is too much art & fiction (http://www.gwern.net/Culture%20is%20not%20about%20esthetics.html) and one of my points is that the arts are heavily subsidized both directly and indirectly, which contributes to the over-supply.
I don’t buy the claim that copyright law amounts to a subsidy. Copyright law is an enforced monopoly, which is not the same thing.
Of course, you’re not focused on the specific works (which is what copyright grants a monopoly on) but on the industry as a whole. So perhaps monopoly on specifics amounts to a subsidy on generalities? But copyright law doesn’t have the same effects as a subsidy overall. A subsidy should lead to a higher quantity at a lower price, but copyright law surely leads to a higher price.
(Defenders of copyright usually argue that it also leads to a higher quantity, and I entirely agree with your scepticism that this would actually be a good thing. It’s obvious to me that copyright law is bad through and through, regardless of its effect on quantity. Still, anti-copyright activists have a valid point that it’s not obvious that copyright actually increases quantity either, since it makes distribution and derivative works harder.)
I think the major way fiction is subsidized is people producing fiction in spite of it not being at all lucrative for most of them.
What are you planning on doing about fan fiction?
Nothing. If people wish to write as their recreation, that’s fine. I’m not arguing that gardens be banned either. The suggestions in my linked essay are that the subsidies be dropped and possibly a Pigovian tax imposed on commercial fiction.
(Am I being unreasonable in expecting people to read the essay which is all about how much fiction/art is produced, its value, and what we should do about it? It seems to me that much of the math discussion is isomorphic.)
I admit I read your essay very quickly, and skipped the footnotes.
I don’t think fiction is very heavily subsidized compared to the amount that’s produced. Copyright enforcement is the only thing you list that I think matters, and I believe we’d be drowning in fiction even without it.
Some responses.
No disagreement.
But if something is wasted what matters is not that there was too much of it, but that an opportunity to produce something else was lost. You’re focused on there being too much literature—when the relevant complaint is that there is too little of other things. An unread book does no harm. A year spent writing the book when something else could have been done with that year represents a lost opportunity. Maybe this is the focus of your concern, but it does not seem to be.
Indeed, most of these apples will be wasted if not sold, and this represents an opportunity lost to produce something else with the soil, but I think the analogy to novels is weak, as I will argue.
There are various ways in which 100 novels is not like 100 apples. For one thing, 100 novels is like 100 varieties of apple. You may prefer one variety of apple; your neighbor may prefer another. The novel Twilight, for example, appeals to many people and does not appeal to me. There are, meanwhile, novels that appeal to me but would probably not appeal to a typical fan of Twilight.
For another, creativity requires variation as well as selection. The vast majority of the variants are not selected, but that does not mean that they are wasted, because a reduction in variation reduces the raw material on which selection can act. In particular, in order that one brilliant writer be found, many must make the attempt. Reduce the number attempting, and you may well reduce the number of great writers found.
In short, if 1000 novels are written and only one is widely read and preserved, that does not necessarily mean the other 999 were wasted. They made up the variation that selection acted upon.
Sure, you can always manufacture hypothetical scenarios, and cherrypick real ones, in which the work of selection is already done, in which the superior variant and only the superior variant is produced in the first place. But that’s simply fantasy. In reality, variation is needed as raw material for selection.
I believe you have misapplied both Gresham’s law and hyperbolic discounting. For instance, there’s an important reason that Gresham’s law applies to money, and novels aren’t money.
This could be said at almost any point in history. You seem to be using it to imply that new works are unnecessary. But it would be equally good as an argument that Beethoven need not bother writing his masterpieces, since, after all, Bach had already written enough to fill a lifetime. But anyone who has listened to Beethoven knows that, even though Bach had already written enough to fill a lifetime, we are nevertheless enriched for having Beethoven, even though Beethoven necessarily displaces Bach to some extent.
Generalizing: even though we are already filled to capacity with art, literature, and music to spend all our lives on, we are nevertheless further enriched by new creation.
Yes, but efficiency is relative to what people want, which is difficult to discover except by observing their choices. And we see that they overwhelmingly choose contemporary fiction. My theory is that contemporary fiction really and truly does give the audience that chooses it greater satisfaction than most great old fiction, even though future generations will find most of it wanting. See, for example, how Shakespeare is often updated in certain respects (such as setting—West Side Story, Ran, Forbidden Planet) for a new audience, just as Shakespeare himself updated older stories for his own audience. For another example, the movie Clueless is an update of the Austen novel Emma. The novel Twilight takes place in contemporary America; in a hundred years it will be hopelessly out of date, but for much of its audience, Dracula by Bram Stoker, classic that it is, is not contemporary enough.
Because of this, there is a never-ending demand for contemporary fiction and for updates of old fiction, and this will keep writers in business indefinitely. You may judge this wrong by certain standards which you offer, but efficiency depends on what people want, and this is what they want. You don’t get to make the concept of efficiency mean something different.
Evidently not. I see you argue against this, but I find your argument completely unpersuasive. What we have in front of us as evidence is consumer behavior. We see the choices people make. Against this you present hypotheticals and a couple of quotes from people. For example, someone whose grandson happens to be into old music at the moment.
Meanwhile we see that updates of classics, such as Clueless and West Side Story, do very well in the market. This validates the choice that the movie producers made, which choice is based in part on the assumption that there is a significant audience for an update—i.e., people who would in fact not be equally satisfied by the originals without update.
I liked the linked essay. I suspect an even stronger case could be made that there’s too much supply of news.
I wasn’t talking about subsidization, I was talking about taxation. The logic of the discussion was as follows: (1) Johnicolas said there should be an art tax; (2) I said “how would you do that?”; (3) wedrifid said “subject art to standard sales taxes”; (4) I pointed out that art already is subject to standard sales taxes—so far as I know it isn’t specifically exempt; hence wedrifid’s response doesn’t work as an answer.
The part of wedrifid’s comment that I quoted defined the scope of my remark, which you misunderstood.
Any meaningful discussion of taxation focuses on the net, not on arbitrary subdivisions and labels. If art were taxed at 50% sales tax but also came with a tax deduction of 100%, I would feel real physical pain to see someone argue ‘oh, but we are discouraging and taxing heavily artwork! Just look at that 50%!’
Which is why I bring up the subsidies. If art is being hugely subsidized, then just being taxed like everything else (in your impoverished sense) still leads to art being cheaper than it should be.
That may or may not be a fair point to make, but in that case your comment should have begun with “Yes, but...” instead of “No...”.
On the merits, I disagree on every point: that there is too much art, that current art subsidies are “heavy”, and that art subsidies necessarily cancel out sales taxes for the purpose of interpreting government policy (which may simply be incoherent and non-uniform).
(I had let the parent be, not wanting to emphasise disagreement but the follow up prompts a reply.)
I do not share your interpretation. The relevant quote is:
… A general sentiment regarding where he would place a slider on a simplistic one dimensional scale of financial incentive vs disincentive. It is definitely not a proposal for specific intervention in any particular jurisdiction.
Come to think of it your status quo claim is way off. The following is definitely not the status quo:
Incidentally, investment in culture and education—even with respect to visual arts—is something I approve of. I just note that your questioning was rather disingenuous:
Taxation and subsidisation are well understood. This objection is silly (your other soldiers are better).
Right; and he was wanting to place it to the right of zero (on the “disincentive” side) whereas you were talking about moving it from the left of zero to zero. This is the distinction I was pointing out.
See the very comment you linked, which contains a reminder that my “status quo” remark did not apply to that aspect.
The main point of that was to emphasize transaction costs of taxation. You will note that I immediately followed it by a more “reasonable” suggestion so as to forestall accusations of being overly rhetorical.
The obvious thing would be some sort of excise tax, like the “sin taxes” on alcohol and the like. That might extend to art supplies, but not necessarily; just charge it on the sale of the final product (if you sell it).
Not that I’m for this; otherwise I agree with your reaction to the proposal.
Allow me to clarify: Tax art rather than subsidize it, at a roughly comparable rate to other industries. I don’t think it matters much whether it’s exactly the same, slightly higher, or slightly lower.
One of the techniques of rational argumentation is called the “Principle of Charity”. When reading and interpreting what someone said, you should infer missing details in order to make their argument the strongest argument possible. For an LW-centric example, Eliezer’s idea of “The least convenient possible world” is the principle of charity specialized to interpreting hypothetical situations.
I don’t understand the point of your paragraph explaining the principle of charity as if I might never have heard of it. If the implication is that I was being uncharitable to you by not interpreting “tax X” to mean “fail to exempt X from the default taxes”, I strongly disagree. When someone says, for example, that cigarettes should be taxed, they don’t just mean that the same sales taxes that apply to everything else should also apply to cigarettes (as if the default were to exempt cigarettes). Rather, they mean that there ought to be a specific tax on cigarettes in addition to whatever taxes would ordinarily apply, in order to discourage consumption of cigarettes. (This is known as a “sin tax”.)
In the context of the above discussion, the only reasonable interpretation of your remark was that you favored a sin tax on art, analogous to existing sin taxes (in some jurisdictions) on “harmful” products such as alcohol, cigarettes, and the like. If you hadn’t meant this, and simply meant that art should be treated like any other product, you would have simply said “I wouldn’t subsidize art”; as opposed to saying “I would tax art rather than subsidize it”, i.e. “not only would I not subsidize art, I would actually tax it”.
In case this needs still further clarification, the reason this is the only reasonable interpretation is that (so far as I know) art is not exempted from existing taxes. If it were, then the interpretation of “tax art” to mean “subject art to the same taxes as everything else” (i.e. “remove the exemption”) might make sense. As it is, however, “tax art” is highly misleading if what you mean is merely “remove subsidies” (where “subsidies” mean things like government grants, university salaries, etc, rather than tax-exemptions, which, again, don’t currently exist).
Indeed, people will always amuse themselves. But that doesn’t mean they deserve an academic field devoted to amusing people within their own little clique. Should there be Monty Python Studies, stocked with academics who (somehow) get paid to do nothing but write commentary on the same Monty Python sketches and performances?
No, because that would be ****ing stupid. Their work would only be useful to the small clique of people who self-select into the field, and who aspire to do nothing but … teach Monty Python studies. Yet the exact same thing is tolerated with classical music studies, whose advocates always find just the right excuse for why their field isn’t refined enough to make itself applicable outside the ivory tower, or to anyone who isn’t trying to say, “Look at me, plebes! I’m going to the opera!”
With that said, I agree that this criticism doesn’t apply to the field of mathematics for the reasons you gave—that it is likely to find uses that are not obvious now (case in point: the anti-war prime number researcher whose “100% abstract and inapplicable” research later found use in military encryption). So I think you’re right about math. But you wouldn’t be able to give the same defense of academic art/music fields.
Well, um, thanks for bringing that up here, but of course I don’t give the same defense of academic art/music fields; for those I would give a different defense.
There is.
Yes, one that fits in the class I described thusly:
And re: Monty Python Studies:
God help us all.
There’s one funny quote I like about partially uniform k-quandles that comes to mind. Somewhat more relevantly, there’s also Von Neumann on the danger of losing concrete applications.
On “ideologically charged” science producing good results:
Evolutionary biology, in general. Creationism went down really hard and really quickly.
Did it? Sure, it’s clear cut now. But what I’ve read about the subject says that back in the days when it was a matter of mainstream intellectual debate, it was long and very messy, and included things like scientists on the ‘right’ side accepting extremely dodgy evidence for spontaneous generation of life in the test tube because they felt that to reject it would weaken the case for being able to do without divine intervention.
I don’t think this is a good example. My post is intended to apply to the contemporary academia, whereas the basics of evolutionary theory were proposed way back in the 19th century, and the decisive controversies over them played out back then, when the situation was very different from nowadays in many relevant ways. (Of course, creationism is still alive and well among the masses, but for generations already it has been a very low-status belief with virtually zero support among the intellectual elites.)
On the other hand, when it comes to questions in evolutionary theory that still have strong implications about issues that are ideologically charged even among the intellectual elites, there is indeed awful confusion and one can find plenty of examples where prestigious academics are clearly throwing their weight behind their favored ideological causes. The controversies over sociobiology are the most obvious example.
In contrast, when it comes to modern applications of evolutionary theory to non-ideologically-sensitive problems, the situation is generally OK—except in those cases where the authors don’t have a clear and sound approach to the problem, so they end up producing just-so stories masquerading as scientific theories. This however is pretty much the situation that should trigger my first heuristic.
Evolutionary biology benefited from two things: the correct side (vis-a-vis Creationism) was absolutely correct—could not possibly have been more correct—and the incorrect side was culturally identified, both by themselves and academia, as outsiders to the mainstream intellectual tradition.
As it happens, all the evidence points to life on earth arising by completely natural processes. The amount of supernatural involvement that appears to be there is: 0% (and negative rates aren’t possible). Compare with a question like “what is the marginal effect of tax rates on labor supply?” Whatever the correct answer to that question is in that time and place, there are different social groups that benefit from governments acting on relatively higher and relatively lower estimates, and there’s no logical bound on where it could lie, independent of empirics. (Most people’s intuitive guess is that it moderately reduces labor supply in some sense, but perhaps that’s just because that’s where the political balance of power lies right now. Rand wrote novels where modest taxes lead the captains of industry to actively destroy output; a clever grad student could show microeconomic pathways leading from higher taxes to higher labor supply.)
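To sketch one such pathway concretely, with a deliberately stylized toy assumption of my own (not a claim about actual behavior): suppose a worker simply aims at a fixed after-tax income target Y* at wage w and tax rate t. Then

$$ h = \frac{Y^{*}}{w(1-t)}, \qquad \frac{\partial h}{\partial t} = \frac{Y^{*}}{w(1-t)^{2}} > 0, $$

so under this pure income-targeting assumption a higher tax rate raises hours worked, while under a pure substitution story it lowers them; where the net effect lands is an empirical question rather than a logical one.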
And: pretty much everyone in the left-right economics debate, from Austrians to neoclassicals to Keynesians to Marxists, thinks of themselves as operating within the mainstream Western intellectual tradition, even if they don’t have the dominant position in economics at the moment. Bias affects all humans, but they all at least like to self-conceptualize as principled followers of evidence. Creationists tried to put on that garb late in the game and no one bought it anyway; since for cultural reasons they would never become dominant in the academy, few presumably felt the need to twist academic interpretations around to prevent Creationists from gaining ground.
On occasion, there have been ideologically-charged debates within mainstream materialist biology. I’m no expert on the technical issues involved, but it doesn’t seem that those involved have arrived at a clear consensus either (as they did on “is the diversity of life on Earth the result of natural processes?”).
While they are within the Western philosophical tradition, it’s my impression that neither Austrians nor Marxists can be described as strictly “followers of evidence”, which leads me to wonder whether the others really can be, either.
The Austrian notion of praxeology, and the Marxist notion of dialectical materialism, are non-empirical. Von Mises insisted on praxeology being an “a-priori science”, like mathematics, rather than an empirical science. Marxist dialectics similarly attempts to begin from axiomatic principles (“laws of dialectics”) rather than measurement.
It’s true that the claims of various schools of thought to be principled followers of the evidence are exceedingly likely to be an exaggeration, if they hold any connection to reality at all. But that doesn’t impugn the claims’ sincerity, or their function as cultural ingroup markers, which is what matters here. (Of course, there are other cultural barriers between them, but none as extreme as that between Creationists and mainstream biologists, I think.)
“Evidence” can be rationalist as well as empiricist; if the Austrians are right that the laws of economics are discoverable by deduction, then everyone else is wrong to look to the physical world. (Recall the mathematician’s joke about the physicist who proved all odd numbers are prime: “3 is prime, 5 is prime, 7 is prime, 9 - measurement error, 11 is prime, 13 is prime, voilà!”) I think it’s pretty silly for them to think this, of course, but... For their part, I’ve never really seen a western Marxist invoke dialectics except in some very trivial way. What Marxists, Keynesians, and neoclassicals do do (and Austrians also do, although they’d deny they’re doing it) is adopt simplifying assumptions to construct models around. Plausibility limits what one can assume, of course, but choosing the right assumptions is a sufficiently subtle task that each can just as plausibly accuse the others (and, probably more often, their intra-school rivals) of adopting whatever assumptions lead to their preferred conclusions. (My ex ante guess is that the highest proportion of good research takes place within the neoclassical and, to a lesser extent, Keynesian schools, but only because their status gives them better access to talent and funding, and scrubs away any ideology-signaling effects of working within the discipline.)
Just beat me to it.
I thought evo biology pretty obvious, actually. Maybe OP has some reason for disqualifying it?
I am reminded of this recent article from the arXiv blog:
Biologists Ignoring Low-Hanging Fruit, Says Drug Discovery Study
With the slight problem that Moldbug appears to be writing as a Systems Weenie. As someone with cursory training on multiple sides of this issue (PL/formal verification and systems), I don’t think his assessment there is accurate.
When assessing an academic field, you should include a kind of null hypothesis: “Academia is investigating interesting problems, but I’m a weenie who doesn’t take a complete or unbiased look at the state of academia.” This is often true.
Further example: a couple weeks ago I emailed Daniel Dewey about his Value Learners paper. I also read the ensuing LessWrong discussion. It turned out that the fundamental idea behind value learners was published in academia as a PhD thesis in ~2003, and someone linked it.
So why didn’t we all know about this? Because we were weenies who didn’t look at the academic consensus before diving in ourselves.
You confuse two very different issues.
1) How much weight you should give to the views of academics in that area, e.g., if some claim is accepted by the mainstream establishment (or conversely viewed as a valid point of disagreement), how much should that information affect your own probability judgement?
2) How much progress is being made, and how useful is the academic discipline in question? Does it require reform?
Your arguments in the first part are only relevant to #2. The programming language research community may be mired in hopeless mathematical jealousy as they create more and more arcane type systems while ignoring the fact that ultimately programming language design is an entirely psychological question. The languages are all Turing complete and most offer the same functionality in some form; the only real question is one of human usability, and the community doesn’t seem very interested in checking what sorts of type systems or development environments really are empirically more productive. Maybe physics is stuck and can no longer make any real progress.
Nevertheless, this has no bearing on how I should treat the evidence that 99% of physics professors predict experiment X will have outcome Y. Indeed, the argument that physics is stuck is largely that physicists have been so successful in explaining easily testable phenomena that it is difficult to make further progress. Similarly, if I see that the programming language research people say that type system Blah is undecidable, I will take that evidence seriously even if the result doesn’t turn out to be that useful.
(Frankly, I think the harshness toward CS is a bit unfair. Academia by its nature is conservative and driven by pure research. We don’t yet know whether this work will turn out to be useful down the road, since CS is such a young discipline, and at the same time many people do work in both practical and theoretical areas.)
I think #1 is the more interesting question. Here I would say the primary test should be whether or not disputes eventually produce consensus. That is, does the discipline build up a store of accepted fact and move on to new issues (with occasional Kuhnian-style paradigm shifts), or does it simply stay mired in the same issues without generating conclusions?
Pardon, I didn’t notice your comment earlier—unfortunately, you don’t get notified when someone replies to a top-level article the way you do for replies to comments.
The difference you have in mind is basically the same as what I meant when I wrote about areas that are infested with a lot of bullshit work, but still fundamentally sound. Clearly CS people are smart and possess a huge amount of practically useful knowledge and skill—after all, it’s easy for anyone who works in CS research at an institution of any prominence to get a lucrative industry job working on very concrete, no-nonsense, and profitable projects. The foundations of the field are therefore clearly sound and useful.
This however still doesn’t mean that there aren’t entire bullshit subfields of CS, where a vast research literature is produced on things that are a clear dead-end (or aimed at entirely dreamed-up problems) while everyone pretends and loudly agrees that great contributions are being made. In such cases, the views expressed by the experts are seriously distant from reality, and it would be horribly mistaken to make important decisions by taking them at face value. People who work on such things are of course still capable of earning money doing useful work in industry, but that’s only because the sort of bullshit that they have to produce must be sophisticated enough and in conformity with complex formal rules, so in order to produce the right sort of bullshit, you still need great intellectual ability and lots of useful skills.
You may be right that I should have perhaps made a stronger contrast between such fields and those that are rotten to the bottom.
I do agree that there are fields where the overall standards of the academic mainstream are not that high, but I’m not sure about the heuristics—I tend to use a different set.
One confusing factor is that in almost any field, the academic level of an arbitrary academic paper is not that high—average academic papers are published by average scientists, and are generally averagely brilliant—in other words, not that good. The preferred route is typically to prove something that’s actually already well known, but there are also plenty of flawed papers. There are also plenty of papers that are perhaps interesting if you’re interested in some particularly small niche of some particularly minor topic, but are of no relevance to the average reader. None of this says anything much about the quality of the mainstream orthodoxy, which can be very much higher than the quality of the average paper.
My main principle is that human beings are just not that intelligent. They are intelligent enough to follow a logical argument that is set into a system where there are tightly defined rules from which one can reason. They are NOT intelligent enough to reason sensibly AT ALL in regions where such rules are not defined. Well, perhaps a logical step or two is plausible, but anything beyond that becomes very dubious indeed—it is like trying to build a tower on a foundation of jello.
Reasoning based on vague definitions is a red flag—it encourages people to come up with any answer they want, and believe they’ve logically arrived at it. Reasoning based on a complicated set of not particularly related facts is a red flag, as nobody is intelligent enough to do it correctly.
Someone once said that all science is either physics or stamp collecting. It’s close—you have to have some organising principles of decent mathematical quality to do reasoning with any certainty. Without that, stamp collecting is the limit of the possible.
Equally, maths is not a panacea. It’s quite possible, in an academic paper, to spend a great deal of time developing a mathematical argument based on assumptions that aren’t really connected to the question you’re trying to answer—the maths is probably correct, but the vague and fuzzy bit where the maths is trying to connect to the problem is where it all goes wrong. To take the example everyone knows, financial models that assume average house prices can’t go down as well as up may have perfectly correct mathematics, but will not predict well what will happen to those investments when house prices do go down.
In summary, those fields with widely accepted logical systems are probably doing something right. Those fields where there are multiple logical systems that are competing are probably also doing something right—the worst they can do is to reason correctly about the wrong thing. Fields where there is an incumbent system which is vague are bad, as are those fields where freeform reason is the order of the day.
DuncanS:
I disagree with this. In many areas there are methodologies that don’t approach a mathematical level of formalization, and nevertheless yield rock-solid insight. One case in point is the example of historical linguistics I cited. These people have managed to reach non-obvious conclusions as reliable as anything else in science using a methodology that boils down to assembling a large web of heterogeneous common-sense evidence carefully and according to established systematic guidelines. Their results are a marvelous example of what some people call “traditional rationality” here.
In a way making a forum post is an example of the very kind of thing that I’m criticising—it’s a piece of freeform expression, and it’s a medium in which mistakes creep in easily.
I think you’re right to disagree with my statement there. The key thing isn’t the presence of mathematics—it’s the existence of some kind of set rational process—the “established systematic guidelines” that you mentioned.
I thought it was Heinlein, but it’s actually Ernest Rutherford.
It has a germ of truth, but I think it’s deeply misleading. In particular, it needs some kind of nod to the importance of relevance to everyday life. E.g., it would be more serious to claim “all science is either physics, or the systematizing side of some useful discipline like engineering, or stamp collecting.” Pure stamp collecting endeavors have nothing to stop them from veering into the behavior stereotypically associated with modern art or the Sokal hoax. Fields like paleobotany or astronomy (or, indeed, physics itself in near-unobservable limits) can become arbitrarily pure stamp collecting when the in-group controls funding. More applied fields like genetics or immunology or synthetic chemistry or geology are messy and disordered compared to pure physics, and do resemble stamp collecting in that messiness. But true stamp collecting is not merely messy, but also arbitrarily driven by fashion. To the extent that a significant amount of the interest (and money) associated with an academic field flows from applications like agriculture and medicine and resource extraction, it tends not to dive so deeply into true free-floating arbitrariness of pure stamp collecting.
I’m not as hard on stamp-collecting as you are. Admittedly, you need some sort of theory for why the information you’re collecting is of interest, but if the information isn’t widely and carefully collected, the theoreticians don’t have anything to work with.
I wasn’t trying to be hard on that kind of collecting, though I was making a distinction. To me, choosing stamps (as opposed to, e.g., butterflies or historical artifacts) as a type specimen suggests that the collecting is largely driven by fashion or sentiment or some other inner or social motive, not because the objects are of interest for piecing together a vast disorderly puzzle found in the outer physical world. Inner and social motives are fine with me, though my motivation in such things tends to things other than collecting. (E.g., music and Go and Chess.)
As far as I can see, sitting in the mechanical engineering department of a state university, engineering research is a combination of physics and stamp collecting.
“In particular, if you are from a small nation that has never really been a player in world history, your local historians are likely to be full of parochial bias motivated by the local political quarrels and grievances...”
Describes Ireland pretty well.
http://lesswrong.com/lw/4ba/some_heuristics_for_evaluating_the_soundness_of/ckd2
Small nations aren’t always Roaring Mice… some trade on being modest and unthreatening, and on forming egalitarian alliances with other small nations. For instance, the Benelux countries.
When dealing with the possibility of ideology influencing results one needs to be careful that one isn’t engaging in projection based on one’s own ideology influencing results. Otherwise this can turn into a fully general counter-argument. (To use one of the possibly more amusing examples, look at Conservapedia’s labeling of the complex numbers and the axiom of choice as products of liberal ideology.)
Also, an incidental note about the issue of climate change: we should expect that most aspects of climate change will be bad. Humans have developed an extremely sensitive system over the last few hundred years. We’ve settled far more territory (especially on the coasts) and have far more complicated interacting agriculture. Changing the environment in any way is a change from the status quo. Changing the status quo in any large way will be economically disruptive. Note however that there are a handful of positives to an increase in average global temperature that are clearly acknowledged in the literature. Two examples are the creation of a Northwest Passage, and the opening of cold areas of Russia to more productive agriculture (or in some cases, any agriculture at all, as the permafrost melts).
Looked for it, didn’t find it. Links: Axiom of Choice. Complex Number.
http://rationalwiki.org/wiki/Conservapedia:Conservapedian_mathematics
If you are foolish enough to want to comprehend the strangeness of Conservapedia, RationalWiki is the place to go.
It looks like my memory was slightly off. The main focus is apparently on the project founder’s belief that “liberals” don’t like elementary proofs. See this discussion. I’m a bit busy right now, but I’ll see if I can dig up his comments about the Axiom of Choice.
I checked that page. I don’t see any statement that “liberals” don’t like elementary proofs.
In this discussion, Andy Schlafly, to whom you are apparently referring since he appears to have control over content, is arguing with Mark Gall over the best definition of “elementary proof”. Essentially Mark believes that the definition should reflect what he believes to be common usage, and Andy believes that the definition should reflect a combination of usage and logic, ruling out certain usage as mis-usage. I think Andy is essentially identifying what he believes to be a natural kind, and believes his definition to cut nature at the joints.
Andy uses the word “liberal” in only one place, here:
“Liberal politics” here is given only as an example of error, one example among several, another example being atheism. The statement is not that liberals don’t like elementary proofs any more than that atheists don’t like elementary proofs. In fact I found no statement that anybody doesn’t like elementary proofs. Rather, the discussion appears to be about the best definition of elementary proofs, not about liking or disliking.
Also, the “talk” pages of Conservapedia, like the “talk” pages of Wikipedia, are not part of the encyclopedia proper. I think it’s incorrect, then, to say that the Conservapedia does something, when in fact it is done in the talk pages.
Ok. If you prefer, Andrew is even more blunt about his meaning here
where he says:
(End quote from Andrew).
That example seems to be pretty explicit. I agree that in general what happens on a talk page is not the same thing as what happens in the encyclopedia proper, but Andrew includes this claim as one of his examples of bias in Wikipedia, a page which is in their main space (although that page doesn’t explicitly call it an example of “liberal” bias).
Okay, that’s close to what you were saying, though this seems to be a speculative hypothesis he came up with to explain the striking fact that Wikipedia did not include the entry. The important topic is the omission from Wikipedia. The explanation—that’s his attempt to understand why it happened. Many people are apt to come up with obviously highly speculative explanations when trying to explain surprising events. I don’t think all that much should be made of such things. In any case, I’m not convinced that he’s wrong. (I’m not convinced that he’s right either.)
It isn’t that surprising that we’d have that sort of thing missing. A lot of the articles I’ve written for Wikipedia are ones I only wrote because I was trying to look them up and was surprised that we didn’t have them. People don’t appreciate how many gaps Wikipedia still has. For example, until I wrote it, there was no Wikipedia article for Samuel Molyneux, who was a major historical astronomer.
Beware false compromise. The truth does not always lie in the middle. (Incidentally, are you a Bayesian? If so, around what probability do you define as being “convinced”?)
To my mind, being convinced of a claim is essentially being ready to take some action which assumes the claim is true. I think that’s the relevant threshold, I think that’s essentially how the term is used in ordinary speech. Anyway, that’s how I think I should use it.
That being the case, then whether to be convinced or not depends on costs and benefits, downsides and upsides. For example, if the upside is $1 and the downside is $100, then I will not be convinced enough to take a risky action unless I assign its success (and, therefore, the truth of statements on which its success depends) a probability greater than about 99%. But if the upside and downside are both $1 then I will readily take action even if I assign the probability slightly over 50%. (By this logic, Pascal can be convinced of God’s existence even if the probability he assigns to it is much less than 50% - which admittedly seems to represent a breakdown in my understanding of “convinced”, but I still think it works above 50%)
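To make that arithmetic concrete, here is a minimal sketch (Python). The $1/$100 figures are the ones used above; the assumption that doing nothing is worth $0 is mine, made purely for illustration.

```python
# Minimal sketch of the decision rule described above: take a risky action only
# if its expected value beats doing nothing (assumed here to be worth $0).
# The action pays `upside` with probability p and costs `downside` otherwise.

def conviction_threshold(upside, downside):
    """Smallest probability p at which acting has positive expected value.

    Solves p * upside - (1 - p) * downside > 0 for p.
    """
    return downside / (upside + downside)

print(conviction_threshold(upside=1, downside=100))  # ~0.99: need roughly 99% certainty
print(conviction_threshold(upside=1, downside=1))    # 0.5: just over a coin flip will do
```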
In the current case there are essentially no practical consequences from being right or wrong. What I find, though, is that when you take away practical consequences, most people interpret this as a license to have a great deal of confidence in all sorts of conflicting (and therefore at least half wrong) beliefs. This makes sense rationally, if we assume that the costs of having false beliefs are low and the benefits of having true beliefs are high, and in fact there’s even a stronger case for being carelessly overconfident, which is that even false beliefs, confidently asserted, can be beneficial. The benefit in question is largely a social benefit—tribal affiliation, for example.
So then, one might think, I should have little problem becoming convinced by the first claim about academic mathematicians that comes along, seeing as there is so little downside from indulging in delusion. But this does not mean that there is no downside. I think that a certain amount of harm is done to a person who has false beliefs, and whether that harm outweighs the benefit depends on what that person is doing with himself.
In any case I think that when it comes to beliefs that have important practical consequences, the harm of delusion is typically much greater than not knowing—provided one realizes that one does not know. So in practical matters it is usually better to admit ignorance than to delusionally become convinced of a randomly selected belief. For this reason, I think that in practical matters one should usually place the threshold rather high before committing oneself to some belief. So the real, everyday world typically offers us the inverse of Pascal’s wager: the price of commitment to a false belief is high, and the price of admitting one does not know (agnosticism) is (relatively) low.
If I think that I have a 10% chance of being shot today, and I wear a bulletproof vest in response, that is not the same as being convinced that I will be shot.
Your actual belief in different things does not, so far as I can tell, depend on how useful it is to act as if those things are true. How you act in response to your beliefs does.
Edit:
Actually, wait a sec.
Just follow through on the fact that you noticed this.
You have only pointed out an incompleteness in my account that I already pointed out. I pointed out that below 50%, the account I gave of being convinced no longer seems to hold.
The perfect is the enemy of the good. That an account does not cover all cases does not mean the account is not on the right track. A strong attack on the account would be to offer a better account. JoshuaZ already offered an alternative account by implication, which (as I understand it) is that belief is simply a constant cutoff: for example, a probability assignment above 80% is belief, or maybe 50%, or maybe 90%.
But here’s the thing: if you believe something, aren’t you willing to act on it? We regularly explain our actions in terms of beliefs. For example, suppose you walk out of the house taking your wife’s car keys. You get to your car, notice that you can’t start the engine, and at that point discover that you are holding your wife’s car keys. Suppose she asks you, “why did you take my keys”? The answer seems obvious: “I took these keys because I believed they were my car keys.” Isn’t that obvious? Of course that’s why you took them.
To restate, you did something that would have been successful had those keys been your keys. To restate, you acted in a way that would have been successful had your belief been true.
And I think this is generally a principle by which we explain our actions, particularly our mistaken actions. The explanation is that we acted in a way that would have worked out had our beliefs been correct. And so, your actions reveal your beliefs. By taking your wife’s car keys, you reveal your belief that they are your car keys.
So your actions reveal your beliefs. But here’s the problem: your actions are a product of a combination of your probability assignments and your value assignments, the costs and benefits. That’s why you are more ready to take risky action when the downside is low and the upside is high, and less ready to take risky action when the downside is high and the upside is low. So your actions are a product of a combination of probability assignments and value assignments.
But your actions meanwhile are in accordance with your beliefs.
Conclusion follows: your beliefs are a product of a combination of probability assignments and value assignments.
Now, as I said, this picture is incomplete. But it seems to hold within certain limits.
A utility maximizing Bayesian doesn’t say “oh, this has the highest probability so I’ll act like that’s true.” A utility maximizing Bayesian says “what course of action will give me the highest expected return given the probability distribution I have for all my hypotheses?” To use an example that might help, suppose A declares that they are going to toss two standard six-sided fair dice and take the sum of the two values. If anyone guesses the correct result then A will pay the guesser $10. I assign a low probability to the result being “7”, but that’s still my best guess. And one can construct other situations (if, for example, the payoff were $1000 and it were only paid when the correct guess happened to be an even number, then guessing 6 or guessing 8 makes the most sense). Does that help?
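For what it’s worth, here is a small sketch of that dice example (Python; the payoffs are the ones assumed in the comment above):

```python
from collections import Counter
from fractions import Fraction

# Sum of two fair six-sided dice: 7 is the single most likely outcome, yet a
# utility maximizer still only assigns it probability 6/36.
dist = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {total: Fraction(count, 36) for total, count in dist.items()}

# $10 for a correct guess: guess 7, even though P(7) is well below 50%.
best_guess = max(probs, key=probs.get)
print(best_guess, probs[best_guess])  # 7, 1/6

# Variant from the comment: $1000, but only paid if the (correct) guess is even.
even_payoffs = {total: 1000 * p for total, p in probs.items() if total % 2 == 0}
best_even = max(even_payoffs, key=even_payoffs.get)
print(best_even, float(even_payoffs[best_even]))  # 6 (or equally 8), ~138.9 expected
```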
That matches my own description of what the brain does. I wrote briefly:
which I explain elsewhere in more detail, and which matches your description of the utility maximizing Bayesian. It is the combination of your probability assignments and your value assignments which produces your expected return for each course of action you might take.
Depends what you mean. You are agreeing with my account, with the exception that you are saying that this describes a “utility maximizing Bayesian”, and I am saying that it describes any brain (more or less). That is, I think that brains work more or less in accordance with Bayesian principles, at least in certain areas. I can’t think that the brain’s calculation is tremendously precise, but I expect that it is good enough for survival.
Here’s a simple idea: everything we do is an action. To speak is to do something. Therefore speech is an action. Speech is declaration of belief. So declaration of belief is an action.
Now, let us consider what CuSithBell says:
So, he agrees that how you act depends on utility. But, contrary to what he appears to believe, to declare a belief is to act—the action is linguistic. Therefore how you declare your beliefs depends on utility—that is, on the utility of making that declaration.
The utility of a declaration depends on its context, on how the declaration is used. And declarations are used. We make assertions, draw inferences, and consequently, act. So our actions depend on our statements. So our statements must be adjusted to the actions that depend on them. If someone is considering a highly risky undertaking, then we will avoid making assertions of belief unless our probability assignments are very high.
Maybe people have noticed this. People adjusting their statements, even retracting certain assertions of belief, once they discover that those statements are going to be put to a more risky use than they had thought. Maybe they have noticed it and believed it to be an inconsistency? No—it’s not an inconsistency. It’s a natural consequence of the process by which we decide where the threshold is. Here’s a bit of dialog:
Bob: There are no such thing as ghosts.
Max: Let’s stay in this haunted house overnight.
Bob: Forget it!
Max: Why not?
Bob: Ghosts!
For one purpose (which involves no personal downside), Bob declares a disbelief in ghosts. For another purpose (which involves a significant personal downside if he’s wrong), Bob revises his statement. Here’s another one:
Bob: Bullets please. My revolver is empty.
Max: How do you know?
Bob: How do you think I know?
Max: Point it at your head and pull the trigger.
Bob: No!
Max: Why not?
Bob: Why do you think?
For one purpose (getting bullets), the downside is small, so Bob has no trouble saying that he knows his revolver is empty. For the other purpose, the downside is enormous, so Bob does not say that he knows it’s empty.
I apologize for giving you the impression I disagree with this. By ‘being convinced’, I thought you were talking about belief states rather than declarations of belief, and thence these errors arose (yes?).
I think that belief is a kind of internal declaration of belief, because it serves essentially the same function (internally) as declaration of belief serves (externally). Please allow me to explain.
There are two pictures of how the brain works which don’t match up comfortably. On one picture, the brain assigns a probability to something. On the other picture, the brain either believes, or fails to believe, something. The reason they don’t match up is that in the first picture the range of possible brain-states is continuous, ranging from P=0 to P=1. But in the second picture, the range of possible brain-states is binary: one state is the state of belief, the other is the state of failure to believe.
So the question then is, how do we reconcile these two pictures? My current view is that on a more fundamental level, our brains assign [probabilities (edited)]. And on a more superficial level, which is partially informed by the fundamental level, we flip a switch between two states: belief and failure to believe.
I think a key question here is: why do we have these two levels, the continuous level which assigns probabilities, and the binary level which flips a switch between two states? I think the reason for the second level is that action is (usually) binary. If you try to draw a map from probability assignment to best course of action (physical action involving our legs and arms), what you find is that the optimal leg/arm action quite often does not range continuously as probability assignment ranges from 0 to 1. Rather, at some threshold value, the optimal leg/arm action switches from one action to another, quite different action—with nothing in between.
So the level of action is a level populated by distinct courses of action with nothing in between, rather than a continuous range of action. What I think, then, is that the binary level of belief versus failure to believe is a kind of half-way point between probability assignments and leg/arm action. What it is, is a translation of assignment of probability (which ranges continuously from zero to one) into a non-continuous, binary belief which is immediately translatable into decision and then into leg/arm action.
But, as I think has been agreed, the optimal course of action does not depend merely on probability assignments. It also depends on value assignments. So, depending on your value assignments, the optimal course of action may switch from A to B at P=60%, or alternatively at P=80%, etc. In the case of crossing the street, I argued that the optimal course of action switches at P>99.9%.
But binary belief (i.e. belief versus non-belief), I think, is immediately translatable into decision and action. That, I think, is the function of binary belief. But in that case, since optimal action switches at different P depending on value assignments, then belief must also switch between belief and failure to believe at different P depending on value assignments.
Okay, this makes sense, though I think I’d use ‘belief’ differently.
What does it mean in a situation where I take precautions against two possible but mutually exclusive dangers?
Here’s a concise answer that straightforwardly applies the rule I already stated. Since my rule only applies above 50%, and since P(being shot)=10% (as I recall), we must consider the negation. Suppose P(I will be shot) is 10% and P(I will be stabbed) is 10%, and suppose that (for some reason) “I will be shot” and “I will be stabbed” are mutually exclusive. Since P<50% for each of these, we turn it around, and get:
P(I will not be shot) is 90% and P(I will not be stabbed) is 90%. Because the cost of being shot, and the cost of being stabbed, are so very high, the threshold for being convinced must be very high as well—set it to 99.9%. Since P=90% for each of these, it does not reach my threshold for being convinced.
Therefore I am not convinced that I will not be shot and I am not convinced that I will not be stabbed. Therefore I will not go without my bulletproof body armor and I will not go without my stab-proof body armor.
So the rule seems to work. The fact that these are mutually exclusive dangers doesn’t seem to affect the outcome. [Added: For what I consider to be a more useful discussion of the topic, see my other answer.]
[Added: see my other answer for a concise answer, which however leaves out a lot that I think important to discuss.]
For starters, I think there is no problem understanding these two precautions against mutually exclusive dangers in terms of probability assignments, what I consider the more fundamental level of how we think. In fact, I consider this fact—that we do prepare for mutually exclusive dangers—as evidence that our fundamental way of thinking really is better described in terms of probability assignments than in terms of binary beliefs.
Talk about binary beliefs is folk psychology. As Wikipedia says:
People who think about mind and brain sometimes express misgivings about folk psychology, sometimes going so far as to suggest that things like beliefs and desires no more exist than do witches exist. I’m actually taking folk psychology somewhat seriously in granting that in addition to a fundamental, Bayesian level of cognition, I think there is also a more superficial, folk psychological level—that (binary) beliefs exist in a way that witches do not exist. I’ve actually gone and described a role that binary, folk psychological beliefs can play in the mental economy, as a mediator between Bayesian probability assignment and binary action.
But a problem immediately arises, in that, mapping probability assignments to different actions, different thresholds apply for different actions. When that arises, the function of declaring a (binary) belief (publicly, or silently to oneself) breaks down, because the threshold for declaring belief appropriate to one action is inappropriate to another. I attempted to illustrate this breakdown with the two dialogs between Bob and Max. Bob revises his threshold up mid-conversation when he discovers that the actions he is called upon to perform in light of his stated beliefs are riskier than he had anticipated.
I think that in certain break-down situations, it can become problematic to assign binary, folk-psychological beliefs at all, and so we should fall back on Bayesian probability assignments to describe what the brain is doing. The idea of the Bayesian brain might also of course break down, it’s also just an approximation, but I think it’s a closer approximation. So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
Sadly, I think that there is a strong tendency to insist that there is one unique true answer to a question that we have been answering all our lives. I think that, for example, to a small child who has not yet learned that the planet is a sphere, “up” is one direction which doesn’t depend on where the child is. And if you send that small child into space, he might immediately wonder, “which way is up?” In fact, even many adults may, in their gut, wonder, “which way is up?”, because deep in their gut they believe that there must be an answer to this, even though intellectually they understand that “up” does not always make sense. The gut feeling that there is a universal “up” that applies to everything arises when someone takes a globe or map of the Earth and turns it upside down. It just looks upside down, even though we understand intellectually that “up” and “down” don’t truly apply here. Similarly with science fiction space battles where all the ships are oriented in relation to a universal “up”.
Similarly, I think there is a strong tendency to insist that there is one unique and true answer to the question, “what do I believe?” And so we answer the question, “what do I believe?”, and we hold on tightly to the answer. Because of this, I think that introspection about “what I believe” is suspect.
As I said, I have not entirely figured out the implicit rules that underlie what we (declare to ourselves silently that we) believe. I’ve acknowledged that for P<50%, we seem to withhold (declaration of) belief regardless of what our value assignments are. That being the case, I’m not entirely sure how to answer questions about belief in the case of precautions against dangers with P<50%.
I find it extremely interesting, however, that Pascal actually seems to have bitten the bullet and advocated (declaration of) belief even when P<<50%, for sufficiently extreme value assignments.
I think this is the most common position held on this board—that’s why I found your model confusing.
It seems the edge cases that make it break are very common (for example, taking precautions against a flip of heads and a flip of tails). Moreover, I think the reason it doesn’t work on probabilities below 50% is the same as the reason it doesn’t work on probabilities >= 50%. What lesson do you intend to impart by it?
As an aside, my understanding of Pascal’s wager is that it is an exhortation to seek out the best possible evidence, rather than to “believe something because it would be beneficial if you did” (which doesn’t really make a lot of sense).
That’s a very interesting notion of what “convinced” means. It seems far from what most people would say (I don’t think that term when generally used takes the pay-off into account). I would however suggest that a delusion about a major branch of academia could potentially have serious results unless the belief is very carefully compartmentalized from impacting other beliefs.
I’m curious, given this situation, what evidence would you consider sufficient to convince you that Andrew is right? What evidence would convince you that Andrew is wrong?
That is essentially what I was getting at in paragraph 4.
This supports my position. While delusion is low-cost for most people (as I explain in paragraph 3), it is not low-cost for everyone (as I explain in paragraph 4). When delusion is high-cost, then a good strategy is to avoid commitment, to admit ignorance, when the assigned probability is below a high threshold. Paragraph 5 says that this is usually true of facts critical to the success of everyday actions. For example, crossing the street: it is a good idea to look carefully both ways before crossing a street. It’s not enough to be 90% sure that there are no cars coming close enough to run over you. That is insufficiently high, because you’ll be run over within days if you cross the street with such a low level of certainty. You need to be well north of 99.9% certain that there are no cars coming before you act on the assumption that there are no cars (i.e. by crossing the street). That’s the only way you can cross the street day after day for eighty years without coming to harm.
People don’t consciously consider it, but the brain is a machine that furthers the interest of the animal, and so the brain can I think be relied upon to take costs and benefits into account in decisions, and therefore in beliefs. For example, what does it take for a person to be convinced that there are no cars coming? If people were willing to cross the street with less than 99.9% probability that there are no cars coming, we would be seeing vastly more accidents than we do. It seems clear then to me that people don’t act as if they’re convinced unless the probability is extremely high. We can tell from the infrequency of accidents, that people aren’t satisfied that there are no cars coming unless they’ve assigned an extremely high probability to it. This must be the case whatever they admit consciously.
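The street-crossing figures can be sanity-checked with a rough sketch; the two-crossings-a-day, eighty-year lifetime is an assumption made purely for illustration:

```python
# Rough sanity check of the street-crossing claim above. "Bad crossing" means a
# crossing undertaken while a car was in fact coming. The two-crossings-a-day,
# eighty-year figures are assumed only for illustration.

crossings = 2 * 365 * 80  # roughly 58,000 crossings in a lifetime

for certainty in (0.90, 0.999, 0.99999):
    expected_bad = crossings * (1 - certainty)
    print(f"{certainty:.5f} certain -> ~{expected_bad:.0f} bad crossings over a lifetime")

# 0.90000 -> ~5840 (the first within days, as claimed)
# 0.99900 -> ~58   (still dozens over a lifetime)
# 0.99999 -> ~1    ("well north of 99.9%" is indeed what the arithmetic demands)
```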
In the meantime this does not extend to other matters. People are easily satisfied of claims about society, the economy, the government, celebrities, where the assigned probability has to be well below 99.9%.
That’s a very difficult question to answer. I think it’s hard to know ahead of time, hard to model the hypothetical situation before it happens. But I can try to reason from analogous claims. Humans are complex, and so is their biology. So, let’s ask how much evidence it takes to convince the FDA that a drug works, that it does more good than harm. As you know, it’s quite expensive to conduct a study that would be convincing to the FDA. Now, it could be that the FDA is far too careful. So let’s suppose that the FDA is far too careful by a factor of 100. So, whatever it typically costs to prove to the FDA that a drug works, divide that by 100 to get a rough estimate of what it should take to establish whether what Andrew says is true (or false).
The first article I found says:
And since we’re talking clinical trials, we’re talking a p-value threshold of 0.05. That means that, if the drug doesn’t work at all, there’s a 1 in 20 chance that the trial will spuriously demonstrate that it works. While it depends on the particular case, my guess is that a Bayesian watching the experiment will not assign a probability all that high to the value of the drug. Add to this that even many drugs that work on average don’t work at all on an alarming fraction of patients, and the fact that the drug works is a statistical fact, not a fact about each application. So we’re not getting a high probability about the success of individual application from these expensive trials.
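As a rough illustration of why a Bayesian observer can stay fairly unimpressed by a single significant trial, here is a sketch; the 10% prior and 80% power are assumed numbers chosen only for illustration, not figures from the trial literature:

```python
# Sketch: posterior probability that a drug works, given one "significant" trial.
# The prior (10% of candidate drugs work) and the power (80%) are assumptions
# chosen only to illustrate the point; 0.05 is the conventional alpha level.

prior_works = 0.10
power = 0.80           # P(p < 0.05 | drug works)
false_positive = 0.05  # P(p < 0.05 | drug does not work)

posterior = (power * prior_works) / (
    power * prior_works + false_positive * (1 - prior_works)
)
print(f"P(works | significant trial) ~ {posterior:.2f}")  # about 0.64
```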
Dividing by 100, that’s $8 million to $20 million.
Okay, let’s divide by 100 again. That’s $80 thousand to $200 thousand.
So, now I’ve divided by ten thousand, and the cost of establishing the truth to a sufficiently high standard comes to around a hundred thousand dollars—about a year’s pay for a bright, well-educated, hard-working individual.
That doesn’t seem that unreasonable to me, because the notion of a person taking a year out of his life to check something seems not at all unusual. But what about crossing the street? It doesn’t cost a hundred thousand dollars to tell whether there are cars coming. Indeed not—but it’s a concrete fact about a specific time and place, something we can easily and inexpensively check. There are different kinds of facts, some harder than others to check. So the question is, what kind of fact is Andrew’s claim? My sense of it is that it belongs to the category of difficult-to-check.
But it might not. That really depends on what method a person comes up with to check the claim. Emily Rosa’s experiment on therapeutic touch is praised because it was so inexpensive and yet so conclusive. So maybe there is an inexpensive and conclusive demonstration either pro or con Andrew’s claim.
Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being “convinced” should cause you to act like the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
Possibly. But asking oneself what evidence would drastically change one’s confidence in a hypothesis one way or another is a very useful exercise. I would hesitantly suggest that for most questions if one can’t conceive easily of what such evidence would look like then one probably hasn’t thought much about the matter.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it? How would we go about testing this assuming we had a lot of resources allocated to testing just this?
Not at all. In fact I pointed out that my account of being “convinced” is continuous with Pascal’s Wager, and Pascal argued in favor of believing on the basis of close to zero probability. As the Stanford Encyclopedia introduces the wager:
Everyone is familiar with it of course. I only quote the Stanford to point out that it was in fact about “believing”. And of course nobody gets into heaven without believing. So Pascal wasn’t talking about merely making a bet without an accompanying belief. He was talking about, must have been talking about, belief, must have been saying you should believe in God even though there is no evidence of God.
The issue is two-fold: whether mathematicians are less interested in elementary proofs than before, and if they are, why. So, how would you go about checking to see whether mathematicians are less interested in elementary proofs? What if they do fewer elementary proofs? But it might be because there aren’t elementary proofs left to do. So you would need to deal with that possibility. How would you do that? Would you survey mathematicians? But the survey would give little confidence to someone who suspects mathematicians of being less interested.
As part of the reason “why”, one possible answer is, “because elementary proofs aren’t that important, really.” I mean, it might be the right thing. How would I know whether it was the right thing? I’m not sure. I’m not sure that it’s not a matter of preference. Well, maybe elementary proofs have a better track record of not ultimately being overturned. How would we check that? Sounds hard.
Well, as I recall, his actual claim was that liberalism causes mathematicians to evade accountability, and part of that evasion is abandoning the search for elementary proofs. So one question to ask is whether liberalism causes a person to evade accountability. There is a lot about liberalism that can arguably be connected to evasion of personal accountability. The specific question is whether liberalism would cause mathematicians to evade mathematical accountability—that is, accountability in accordance with traditional standards of mathematics. If so, this would be part of a more general tendency of liberal academics, liberal thinkers, to seek to avoid personal accountability.
In order to answer this I really think we need to come up with an account of what, exactly, liberalism is. A lot of people have put a lot of work into coming up with an account of what liberalism is, and each person comes up with a different account. For example, there is Thomas Sowell’s account of liberals in A Conflict of Visions.
What, exactly, liberalism is, would greatly affect the answer to the question of whether liberalism accounts for the avoidance (if it exists) of personal accountability.
I will go ahead and give you just one, highly speculative, account of liberalism and its effect on academia. Here goes. Liberalism is the ideology of a certain class of people, and the ideology grows in part out of the class. We can think of it as a religion, which is somewhat adapted to the people it occurs in, just as Islam is (presumably) somewhat adapted to the Middle East, and so on. Among other things, liberalism extols bureaucracy, such as by preferring regulation of the marketplace, which is rule by bureaucrats over the economy. This is in part connected to the fact that liberalism is the ideology of bureaucrats. However, internally, bureaucracy grows in accordance with a logic that is connected to the evasion of personal responsibility by bureaucrats. If somebody does something foolish and gets smacked for it, the bureaucratic response is to establish strict rules to which all must adhere. Now the next time something foolish is done, the person can say, “I’m following the rules”, which he is. It is the rules which are foolish. But the rules aren’t any person. They can’t be smacked. Voila—evasion of personal responsibility. This is just one tiny example.
So, to recap, liberalism is the ideology of bureaucracy, and extols bureaucracy, and bureaucracy is in no small part built around the ideal of the avoidance of personal responsibility. One is, of course, still accountable in some way—but the nature of the accountability is radically different. One is now accountable for following the intricate rules of the bureaucracy to the letter. One is not personally accountable for the real-world disasters that are produced by bureaucracy which has gone on too long.
The liberal mindset, then, is the bureaucratic mindset, and the bureaucratic mindset revolves around the evasion of personal accountability, or at least has a strong element of evasion.
Now we get to the universities. The public universities are already part of the state. The professors work for the state. They are bureaucratized. What about private universities? They are also largely connected with the state, especially insofar as professors get grants from the state. Long story short, academic science has turned into a vast bureaucracy, scientists have turned into bureaucrats. Scientific method has been replaced by such things as “peer review”, which is a highly bureaucratized review by anonymous (and therefore unaccountable) peers. Except that the peers are accountable—though not to the truth. They are accountable to each other and to the writers they are reviewing, much as individual departments within a vast bureaucracy are filled with people who are accountable—to each other. What we get is massive amounts of groupthink, echo chamber, nobody wanting to rock the boat, same as we get in bureaucracy.
So now we get to mathematicians.
Within a bureaucracy, your position is safe and your work is easy. There are rules, probably intricate rules, but as long as you follow the rules, and as long as you’re a team player, you can survive. You don’t actually have to produce anything valuable. The rules are originally intended to guide the production of valuable goods, but in the end, just as industries capture their regulatory authority, so do bureaucrats capture the rules they work under. So they push a lot of paper but accomplish nothing.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
And in fact this is what we see. So the theory is confirmed! Not so fast—I already knew about the academic paper situation, so maybe I concocted a theory that was consistent with this.
It seems that Pascal’s Wager is a particularly difficult example to work with, since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
I’m not sure what a good definition of “liberalism” is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn’t the same, given that most self-identified liberals want less government involvement in many family-related issues (e.g., gay marriage). It is likely that there is no concise definition of these sorts of terms, since which policy attitudes are common is to a large extent a product of history and social forces rather than coherent ideology.
Well, nice of you to admit that you already knew this. But, at the same time, this seems to be a terribly weak prediction even if one didn’t know about it. One expects, as fields advance and there becomes less low-hanging fruit, that more and more seemingly minor papers will be published. (I’m not sure there are many papers published which are trivial; minor and trivial are not the same thing.)
Mm. I’m not quite sure this is true. Many liberals I know are perfectly content with the level of government involvement in (for example) marriage—we just want the nature of that involvement to not discriminate against (for example) gays.
Almost all hypotheses have this property. If you’re really in event X, then you’d be better off believing that you’re in X.
I think what Joshua meant was that the situation rewards the belief directly rather than the actions taken as a result of the belief, as is more typical.
Yes, but there was no explanation of why it’s “particularly difficult”, and the only property listed as justifying this characterization is almost universally present everywhere, including the cases that are not at all difficult. I pointed out how this property doesn’t work as an explanation.
I think the phrase “entity that actively rewards one for giving a higher probability...” made the point clear enough. If my state of information implies a 1% probability that a large asteroid will strike Earth in the next fifty years, then I would be best off assigning 1% probability to that, because the asteroid’s behaviour isn’t hypothesized to depend at all on my beliefs about it. If my state of information implies a 1% probability that there is a God who will massively reward only those who believe in his existence with 100% certainty, and who will punish all others, then that’s an entity that’s actively rewarding certain people based on having overconfident probability assignments; so the difficulty is in the possibility and desirability of treating one’s own probability assignments as just another thing to make decisions about.
I understand where the difficulty comes from, my complaint was with justification of the presence of the difficulty given in Joshua’s comment. Maybe you’re right, and the onus of justification was on the word “actively”, even though it wasn’t explained.
Let belief A include “having at least .9 belief in A has a great outcome, independent of actions”, where the great outcome in question is worth a dominating amount of utility. If an agent somehow gets into the epistemic state of having .5 belief in A (and not having any opposing beliefs about direct punishments for believing A), and if updating its beliefs without evidence is an available action, it will update to have .9 belief in A. If it encounters evidence against A that wouldn’t reduce the probability low enough to counter the dominating utility of the great outcome, it would ignore it. If it does not keep a record of the evidence it has processed, just updating incrementally, it would not notice if it accumulated enough evidence to discard A.
Of course, this illustration of the problem depends on the agent having certain heuristics and biases.
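A toy sketch of that dynamic, under assumed numbers (the log-scoring accuracy term and the size of the payoff are my own illustrative choices, not anything specified in the comment above):

```python
import math

# Toy model of the self-rewarding belief A described above. A promises a huge
# payoff (if A is true) merely for holding credence >= 0.9 in A. The accuracy
# term (a log scoring rule) and the payoff size are assumed for illustration.

BIG_PAYOFF = 1e6
evidence_credence = 0.5  # what the evidence alone would justify

def expected_utility(held_credence, true_prob=evidence_credence):
    accuracy = (true_prob * math.log(held_credence)
                + (1 - true_prob) * math.log(1 - held_credence))
    payoff = true_prob * BIG_PAYOFF if held_credence >= 0.9 else 0.0
    return accuracy + payoff

print(expected_utility(0.5))  # calibrated with the evidence, but no payoff
print(expected_utility(0.9))  # miscalibrated, yet the promised payoff dominates
```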
This is a good start, but on Conservapedia “liberal” and “liberalism” are pretty much local jargon and their meanings have departed the normative usages in the real world. It is not overstating the case to say that Schlafly uses “liberal” to mean pretty much anything he doesn’t like.
JoshuaZ:
That is true. The easy case is when clear ideological rifts can be seen even in the disputes among credentialed experts, as in economics. The much more difficult case is when there is a mainstream consensus that looks suspiciously ideological.
This sounds like it’s probably a hoax by hostile editors. It reminds me of the famous joke from Sokal’s hoax paper in which he described the feminist implications of the axioms of equality and choice. Come to think of it, it might even be inspired directly by Sokal’s joke.
No, the comments have been made by the project’s founder Andrew Schlafly. He’s also claimed that the Fields Medal has a liberal bias (disclaimer: that’s a link to my own blog.) Andrew also has a page labeled Counterexamples to Relativity written almost exclusively by him that claims among other things that “The theory of relativity is a mathematical system that allows no exceptions. It is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world.”
I will add to help prevent mind-killing that Conservapedia is not taken seriously by much of the American right-wing, and that this sort of extreme behavior is not limited to any specific end of the political spectrum.
1) It is plausible that an element of affirmative action could have crept into the awarding of the Fields Medal. It is not unreasonable to suspect that it has. Any number of biases might creep into the awarding of a prize, however major it is. For example, it could well be that a disproportionate number of Norwegians or Swedes have won the Nobel relative to their accomplishments, simply because of their geographic proximity to the awarding committees.
2) That the mathematics of relativity (either special or general) “allows no exceptions” is trivial but, as far as I can see, true, because it is true of any mathematical system that exceptions to the system are, pretty much by definition, not included inside the system. Anything inside the system itself is not an exception to it. So, trivial, but not false. What we really need to do is see why the point is brought up.
Looking further into the matter of “exceptions”, to see why he brought up the true but trivial point with respect to relativity, in the main article I found this:
He appears to be saying that relativity breaks down at the Big Bang. He doesn’t appear to provide any grounds for making this claim, but it seems likely. Wikipedia says something similar in its article on black holes:
The big bang is a singularity, and in that respect is similar to black holes, so if general relativity breaks down completely in a black hole then I would imagine it would also be likely to break down completely at the Big Bang.
3) That people have often speciously used Einstein’s relativity as a metaphor to promote all sorts of relativism is well known. People have similarly speciously used QM to promote all sorts of nonsense. So that particular point is hardly controversial, I think.
I have never relied on Conservapedia and don’t intend to start, whereas I use Wikipedia several times a day, but these particular attacks on Conservapedia seem weak.
I’m not particularly inclined towards a charitable interpretation of arguments written by Andrew Schlafly. In my own short time frequenting the site, I found him rendering judgments on others’ work based on the premise that
“No facts conflict with conservative ideology
therefore, anything which conflicts with conservative ideology is not a fact.”
If you try to interpret his views in the most reasonable light you can, you probably haven’t understood him. He’s a living embodiment of Poe’s Law.
Did you read the page in question or the entire quote I gave? The first sentence isn’t a big problem (although I think you aren’t parsing correctly what he’s trying to say). The second sentence I quoted was “It is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world.”
And yes, a small handful of his 33 “counterexamples” fall into genuine issues that we don’t understand and a handful (such as #33) are standard physics puzzles. Then you have things like #9, which claims that a problem with relativity is “The action-at-a-distance by Jesus, described in John 4:46-54.” (I suppose you could argue that this is a good thing since he’s trying to make his beliefs pay rent.) And some of them are just deeply confusing, such as #14, which claims that the changing mass of the standard kilogram is somehow a problem for relativity. I don’t know what exactly he’s getting at there.
But this is somewhat beside the overarching point I was trying to make: the problem I was illustrating was the danger of turning claims that others are being ideological into fully general counterarguments. Given the labeling of relativity as being promoted by “liberals” and the apparent conflation with moral relativism, this seems to be a fine example.
Incidentally, note that Conservapedia’s main article on relativity points out actual examples where some on the left have actually tried to make very poor analogies between general relativity and their politics, but they don’t seem to appreciate that just because someone claims that “Theory A supports my political belief B” doesn’t mean the proper response is to attack Theory A. This article also includes the interesting line “Despite censorship of dissent about relativity, evidence contrary to the theory is discussed outside of liberal universities.” This is consistent with the project’s apparent general approach of making absolutely everything, as with much in American politics, part of the great mind-killing.
I can see that he attacks relativity, devotes a disproportionate amount of space to attacks and relatively little to an explanation, though compared to his article on quantum mechanics it’s not that small—his article on QM is the equivalent of a Wikipedia stub. But it’s not obvious to me that the liberalism of some of its supporters is the actual reason for the problems he has with it.
It is in general difficult to tell what the “actual” motivations are for an individual’s beliefs. Often they are complicated. Regarding math and physics there’s a general pattern that Andrew doesn’t like things that are counterintuitive. I suspect that the dislike of special and general relativity comes in part from that.
Sure. In the case of the Nobel prizes this claim has been made before. In particular, the claim is frequently made that the Nobel Prize in literature has favored northern Europeans and has had serious political overtones. There’s a strong argument that the committee has generally been unwilling to award the prize to people with extreme right-wing politics while being fine with awarding it to those on the extreme left. Moreover, you have cases like Eyvind Johnson, who got the prize despite being on the committee itself and being not well known outside Sweden. (I’m not sure if any of his major works had even been translated into English or French when he got the prize.) And every few years there’s a minor row when someone on the lit committee decides to bash US literature in general, connecting it to broad criticism of the US and its culture (see for example this).
There’s also no question that politics has played heavy roles in the awarding of the Peace Prize.
And in the sciences there have been serious allegations of sexism in the awarding of the prizes. The best source for this, as far as I’m aware, is “The Madame Curie Complex” by Julie Des Jardins (unfortunately it isn’t terribly well written, at times exaggerates the accomplishments of some individuals, sees patterns where they may not exist, and suffers from other problems).
But, saying “it isn’t unreasonable to suspect X” is different from asserting X without any evidence.
Isn’t this a bit like saying “politics has played a heavy role in electing the President of the United States?” The Peace Prize is a political award.
True, but this appears to be from a more free-wheeling, conservative-pundit blog-like section of the ’pedia, rather than from its articles. I think that once it’s understood that this section is a highly opinionated blog, the particular assertion seems to fit comfortably. For instance, right now, one of the entries reads:
Socialist England! Not enough to say “England”.
The “Socialist England” item is from the news section and does not have its own article on Conservapedia; it just links to a Reuters article. The story is also nowhere near as dire as the Conservapedia headline makes it out to be.
The relativity article, and the other main articles linked on the main page, are clearly standard articles and not intended to be viewed as simple opinion blogs. It has no attribution, and lists eighteen references in the exact same manner as a Wikipedia article.
At best it is misguided; at worst it is intended to misinform people about the theory.
At the end of the article, counterexamples to evolution, to an old Earth, and to the Bible are linked, in exactly the same format (and with worse mischaracterizations than the relativity article).
Random articles of more innocuous subjects (like book) have exactly the same format.
Again, this material is clearly the meat of the website, as the more mundane articles do little more than go out of their way to add a mention of the Bible or Jesus in some way.
Ouch. I’ve never read more than one or two Conservapedia articles before, and I didn’t know it was that bad.
Conservapedia is so gibberingly insane it inspired the creation of RationalWiki. (Which has its bouts of reversed stupidity.)
http://rationalwiki.org/wiki/Conservapedia:Conservapedian_relativity came to some prominence last year when Prof Brian Cox discovered the Conservapedia article, which then attracted some blogosphere interest.
It is important to note here that Andrew Schlafly, founder of Conservapedia and author of most of these articles, has a degree in electrical engineering and worked as an engineer for several years before becoming a lawyer. He would not only be capable of understanding the mathematics, he would have used concepts from the theory in his professional work. At least most engineer cranks aren’t this bad.
David_Gerard:
In fairness to relativity crackpots, unless things have changed since my freshman days, the way special relativity is commonly taught in introductory physics courses is practically an invitation for the students to form crackpot ideas. Instead of immediately explaining the idea of the Minkowski spacetime, which reduces the whole theory almost trivially to some basic analytic geometry and calculus and makes all those so-called “paradoxes” disappear easily in a flash of insight, physics courses often take the godawful approach of grafting a mishmash of weird “effects” (like “length contraction” and “time dilatation”) onto a Newtonian intuition and then discussing the resulting “paradoxes” one by one. This approach is clearly great for pop-science writers trying to dazzle and amaze their lay audiences, but I’m at a loss to understand why it’s foisted onto students who are supposed to learn real physics.
I thought Conservapedia as a whole was a hoax. Poe’s law...
As far as I can tell a lot of it is a hoax, though the founder may have a hard time telling which editors are creative trolls and which editors (if any) are serious.
It is periodically asserted by people claiming to be former contributors to Conservapedia that the founder simply endorses contributors who overtly support him and rejects those who overtly challenge him.
If that were true, I’d expect that editors who are willing to craft contributions that overtly support the main themes of the site get endorsed, even if their articles are absurd to the point of self-parody.
I haven’t made a study of CP, but that sounds awfully plausible to me.
You will be unsurprised to hear that CP has played out in precisely that manner: a parodist coming in, dancing on the edges of Poe and wreaking havoc by feeding Schlafly’s biases.
I am hereby stealing the phrase “Dancing on the edge of Poe.”
I figured I should let you know.
So very true. :)
Does “ideological influences” include fiscal influences? Because most of the contrarian nutritionists I’ve read say that the mainstream is swayed by heavily funded groups who’d like to see people eat more corn, dairy products, etc.
Nutrition’s also entangled with a horrific mess of body-image issues and cultural expectations. These aren’t essential to any of the strains of cultural criticism that they intersect, so I don’t think I’d call them ideological; but because they’re so closely linked to people’s identities, they exhibit a lot of the problems we associate with ideology.
Same goes for related fields like exercise. The mind-killer here doesn’t metastasize like ideology tends to, but it’s every bit as pathological if you accidentally end up poking one of its hosts in the wrong spot.
Nornagest:
Well said. As you say, “ideological” is not a very accurate term here, but I meant it to also encompass this sort of thing.
Vaccines would be another charged topic where I think academia is mostly right.
Am I really in the minority in not wanting political discussion on the site, at least without special precautions?
I do not consider this post to be political. It is a practical look at how and when to update on evidence of orthodox opinion. It could not be more relevant.
I wholly agree that it’s relevant, but I think that’s compatible with it running afoul of politics as the mind-killer.
Do you think my post goes too far in this direction, or are you referring to some of the comments?
Macroeconomics and global warming seem to me like intrinsically political topics, in that the vast majority of us don’t have the expertise to comment on them on the object level, and so we’re forced to use the indirect evidence provided by the opinions of others; but as I think everyone agrees, at least some of the relevant thinkers believe what they do because of ideological bias one way or the other, and so discussion of these topics either turns into discussion of such bias, or skirts around a crucial part of the problem.
And while I didn’t see anything inflammatory in your post, even the least inflammatory comments about an ideologically-charged issue can serve as an invitation for people to empty their cached opinions on the subject in the comments.
I’m not even confident that it’s better to completely avoid politics on LW; it’s just that it seems to me we’ve been getting there less through a conscious collective decision than through a general apathy about on-topic and other site norms.
steven0461:
In your opinion, has this actually happened? Do you see something among the comments that, in your opinion, represents a negative contribution so that provoking it should be counted against the original post? (I understand you might not want to point fingers at concrete people, so feel free to answer just yes or no.)
I have to say I totally support the appropriateness of this post. It is not politics in the mind killer sense. Mind killing comes in when the social politics of the immediate participants corrupt the issue—not when abstract global or national issues come up.
Finding ways to work out how much to trust an academic field is a critical skill. When we can’t trust science or academia to give us straight answers we really put our rational thinking to the test. And sometimes it really matters. Most notably with respect to mainstream opinion in the medical and pharmaceutical realms. There is more fiscal (and hence political) incentive for bias there than anywhere else and getting things right determines your health outcomes in the future.
I would like to see more posts in this vein, perhaps picking specific fields and giving a brief overview of credibility and whether there are correct contrarians to pay particular attention to.
Every contribution starts out negative by default because it takes up space in recent comments and elsewhere, occupies the minds of LW commenters, and takes time to read. Beyond that, I admit your post caused no serious negative contributions. Combined with some other recent harmless threads, that counts against the “no politics” guideline. On the other hand, harmless violations of such guidelines can cause harmful violations in future top-level posts, and most of the harm may be in low-probability large-scale arguments, like the ones we had about gender.
I do think we keep avoiding crucial parts of the problem that are a bad idea to talk about, but that are frustrating to avoid talking about once the topic has been brought up (if only because of the sense that what has been said will be taken for a community consensus), and this frustration is probably what’s actually causing me to complain.
steven0461:
Fair enough. What we’re facing here is the same ongoing conflict of visions about what the range of appropriate topics on LW should be. My opinion is that if the forum as presently constituted isn’t capable of handling sensitive topics in a rational manner, and if any topic with even the remotest sensitive implications should therefore be avoided, then the whole project should be written off as a failure and the website reconstituted along the standard guidelines for technical forums (i.e. with a list of precise and strict definitions of suitable technical topics, and rigorous moderation to eradicate off-topic comments).
Certainly, I find it comically absurd that there should be a community of people boasting about their “rationality” who at the same time have to obsessively self-censor to avoid turning their discussions into food fights. I’m surely not alone in this assessment, and the bad PR from such a situation should be a sufficient reason for the owners of LW to undertake some radical steps (in one direction or another) to avoid it.
I’m not sure I understand what you’re saying here. Are you saying that there are some points relevant to this discussion that you’re reluctant to bring up because they are “a bad idea to talk about”?
The official motto in the logo is “refining the art of human rationality”, which implies that our rationality is still imperfect. I don’t see why it’s absurd or bad PR to say that we’re more rational than most other communities, but still not rational enough to talk about politics.
It’s still imperfect, but can’t people try a little harder?
When will we be rational enough to talk about politics (or subjects with political implications)? I am skeptical that any of the justifications for not talking about politics will ever change. Right now, we have a bunch of intelligent, rationalist people who have read at least a smattering of Eliezer’s writings, yet who have differing experiences and perspectives on certain subjects, with a lot of inferential distance in between. We have veteran community members, and we have new members. In a few years, we will have exactly the same thing, and people will still be saying that politics is the “mind-killer.”
I have to wonder, if LW isn’t ready to talk about politics now, will we ever be ready (on our current hardware)? I am skeptical that we all can just keep exercising our rationality on non-political subjects, and then one day a bell will go ding, and suddenly a critical mass of us will be rational enough to discuss politics.
You can’t learn to discuss politics rationally merely by studying rationality in the abstract, or studying it when applied to non-political subjects. Rationality applied to politics is a particular skill that must be exercised. Biases will flare up even for intelligent, rationalist people who know better. The only way for LW to become good at discussing politics is to practice and get better.
(And even now, LW is not bad at discussing politics, and there have been many great political discussions here. While many of them have been a bit heated by the standards of LW, they are downright friendly compared to practically anywhere else.)
Unfortunately, the rest of the world doesn’t have the same level of humility about discussing political subjects. Many of the people most capable of discussing politics rationally seem to have the most humility. How long can we afford to have rationalists sit out of politics?
Hang on. Instrumental rationality.
If you want to make political impact, don’t have discussions about politics on blogs; go do something that makes the best use of your skills. Start an organization, work on a campaign, make political issues your profession or a major personal project.
If that doesn’t sound appealing (to me, it doesn’t, but people I admire often do throw themselves into political work) then talking politics is just shooting the shit. Even if you’re very serious and rational about it, it’s pretty much recreation.
I used to really like politics as recreation—it made me feel good—but it has its downsides. One, it can take up a lot of time that you could use to build skills, get work done, or have more intense fun (a night out on the town vs. a night in on the internet.) Two, it can make you dislike people that you’d otherwise like; it screws with personal relationships. Three, there’s something that bothers me morally, a little, about using issues that are serious life-and-death problems for other people as my form of recreation. Four, in some cases, including mine, politics can hurt your personal development in a particular way: I would palliate my sense of not being a good person by reassuring myself that I had the right opinions. Now I’m trying to actually be a better person in practice, and also trying to worry less about imaginary sins; it’s work in progress, of course, but I feel I don’t need my “fix” of righteous anger as much.
This is a personal experience, of course, but I think that it’s worth it for everyone to ask, “Why do I talk politics? Do I want to talk politics?”
“If you want to make political impact, don’t have discussions about politics on blogs; go do something that makes the best use of your skills. Start an organization, work on a campaign, make political issues your profession or a major personal project.”
You omit the most important step, which comes before starting an organization. That’s figuring out what politics this organization should espouse and how it should espouse those politics.
If my views are almost diametrically opposed to Robin Hanson’s, and I have no good reason to think I’m more rational than Robin or otherwise in a better epistemic position, I’m not rationally justified in setting up an organization to espouse my views, because I should consider, in that event, that my views have at least a .5 chance of being wrong, probably much higher. The worst thing people can do is set up political projects based on ill-considered principles and end up advocating the wrong policies. As long as rational, informed people disagree, one isn’t entitled to a strongly held political position.
What you said might make sense if political debate were strictly about means and there was general agreement on ends. But it is not. And your views on the ends of policy are worth every bit as much as Dr. Hanson’s, however much you worry that his thinking might be better than yours concerning means.
Do you think having LW discuss politics will help save the world? If so, how do you envision it happening?
Just to make sure there is no confusion about who stands where on the issue, I’d like to re-emphasize that I definitely don’t support making politics a prominent item on the discussion agenda of LW. What I am concerned about are topics that are on LW’s discussion agenda as presently defined, but have some implications about political and other charged issues, and the question of whether these should be avoided. (Though of course this is complicated by the fact that the present discussion agenda is somewhat vague and a matter of some disagreement.)
Why do you find it beneficial to bring up implications about political and other charged issues, when discussing topics that are on LW’s discussion agenda?
I can understand it if you’re making some point about improving rationality in general, and the best example to illustrate your point happens to be political, and you judge the benefit of using that example to be worth the cost (e.g., the risk that LW slides down the slippery slope towards politics being prominently debated, and others finding it difficult to respond to your point because they want to avoid contributing to sliding down that slippery slope).
If it’s more like “btw, here are some political implications of the idea I was talking about” then I think we should avoid those.
It could be that by far the main corruptor of rationality, which does by far the most damage however you want to measure it, is the struggle for political power. If that’s the case, then it may be unavoidable to discuss power and therefore politics.
The high point of human rationality is science, but as it happens, the scientific establishment has been so thoroughly dominated by the government (government supports much of academia, government supports much of science through grants, government passes laws which make it difficult to conduct science without official government approval, government controls the dissemination of scientific claims) that corruption of science by politics seems inevitable. If in fact science is corrupt from top to bottom (as it may be), then such corruption is almost certainly almost entirely at the hands of the state, and is therefore almost certainly political. So, if science is thoroughly corrupt, then it is almost certainly virtually impossible to discuss that corruption at all seriously without getting heavily into politics.
The poster child perhaps, but I wouldn’t go as far as to say the high point. :)
Wei_Dai:
I don’t think one should bring up such implications just for the hell of it, when they contribute nothing of substance. I also agree that among otherwise equally useful examples, one should use those that are least distracting and that minimize the danger of dissension. There’s a simple cost-benefit case there, which I don’t dispute. However, it seems to me that many relevant topics are impossible to discuss without bringing up such implications.
Take for example my original post that started this discussion. For anyone who strives to be less wrong about almost anything, one of the absolutely crucial questions is what confidence should be assigned to what the academic mainstream says, and in this regard, I consider the topic of the post extremely relevant for LW. (If you believe otherwise, I would be curious to see the argument why—and note that what I’m arguing now is independent of what you might think about the quality of its content.) Now, I think nobody could dispute that on many topics the academic opinion is biased to some extent due to political and ideological influences, so it’s important to be able to recognize and evaluate such situations. Moreover, as far as I see, this represents a peculiar class of bias that cannot be adequately illustrated and discussed without bringing up some concrete examples of biases due to ideological or political influences. So, how could one possibly approach this issue while strictly avoiding the mention of anything that’s ideologically charged at least by implication?
Yet some people apparently believe that this line of inquiry already goes too far towards dangerous and undesirable topics. If this belief is correct, in the sense that maintaining a high quality of discourse really demands such a severe restriction on permissible topics, then this, in my opinion, decisively defeats the idea of having a forum like LW, under any reasonable interpretation of its mission statement, vague as it is. It effectively implies that people are inherently incapable of rational discourse unless it’s stringently disciplined and focused on a narrow range of topics, the way expert technical forums are. And this is definitely not the only example of how charged issues will inevitably be arrived at by people discussing the general problems of sorting out truth from bias and nonsense.
There are also other important points here, on which I’ve already elaborated in my other comments, which all stem from the same fundamental observation, namely that those topics where one needs an extraordinary level of rationality to escape bias and delusion are often exactly those that are commonly a matter of impassioned and polarized opinion. In other words, general skills in rational thinking and overcoming bias are of little use if one sticks to technical topics in which experts already have sophisticated, so to say, application-specific techniques for eliminating bias and nonsense. (Which often work well—one can easily think of brilliant scientists and technical experts with outright delusional opinions outside of their narrow specialties—and when they don’t, the issue may well be impossible to analyze correctly without getting into charged topics.) But even if you disagree with my view expressed in this last paragraph, I think your question is adequately answered by the points I made before that.
How about using an example from the past? A controversy that was ideologically charged at some point, but no longer inflames passions in the present? I’m not sure if there are such examples that would suit your purpose, but it seems worth looking into, if you hadn’t already.
Overall I don’t think we disagree much. We both think whether to bring up political implications is a matter of cost-benefit analysis and we seem to largely agree on what count as costs and what as benefits. I would just caution that we’re probably biased to over-estimate the net benefit of bringing up political implications since many of us feel strongly motivated to spread our favorite political ideas. (If you’re satisfied that you’ve already taken into account such biases, then that’s good enough for me.)
Wei_Dai:
Trouble is, the present system that produces reputable and accredited science and scholarship is a rather novel creation. Things worked very differently as recently as two or three generations ago, and I believe that an accurate general model for assessing its soundness on various issues necessarily has to incorporate judgments about some contemporary polarized and charged topics, which have no historical precedent that would be safely remote from present-day controversies. As Constant wrote in another reply to your above comment, modern science is so deeply intertwined with the modern system of government that it’s impossible to accurately analyze one without asking any questions about the other.
And to emphasize this important point again, I believe that coming up with such a model is a matter of supreme importance to anyone who wants to have correct views on almost any topic outside of one’s own narrow areas of expertise. Our society is historically unique in that we have these vast institutions whose mission is to produce and publish accurate insight on all imaginable topics, and for anyone intellectually curious, the skill of assessing the quality of their output is as important as distinguishing edible from poisonous fruit was for a forager.
That is surely a valid concern, and I probably display this bias myself at least occasionally. Like most biases, however, it also has its mirror image, i.e. the bias to avoid questions for fear of stirring up controversy, which one should also watch for.
This is not only because excessive caution means avoiding topics that would in fact be worth pursuing, but also because of a more subtle problem. Namely, the set of all questions relevant for a topic may include some safe and innocent ones alongside other more polarizing and charged ones. Deciding to include only the former into one’s assessment and ignoring the latter for fear of controversy may in fact fatally bias one’s final conclusions. I have seen instances of posts and articles on LW that, in my opinion, suffer from this exact problem.
As far as I know, nobody cares what LessWrong commenters think about political issues. LessWrong should concentrate on less crowded topics where it potentially has actual influence, like AI risks.
Do you (pl.) think it would be valuable to have a discussion topic on whether political discussion could be fruitful (possibly with links to relevant discussions, etc.)?
(Not to say “take it elsewhere”, but rather, “should we have this discussion somewhere it’ll be easier to keep track of”.)
Merely saying that there are topics too inflammatory even for LW is one thing, but remember that the context of my remark was a discussion of whether topics should be avoided even if they have only indirect implications about something that might inflame passions. The level of caution that some people seem to believe should be exercised would in my opinion, if really necessary, constitute evidence against the supposedly high level of rationality on LW. (And to many people, the contradiction would also have a bad PR effect.)
Please also see my above reply to Vladimir Nesov in which I elaborate on this further.
Fallacy of gray. Nobody is perfectly rational, but that doesn’t make all people equally rational. Also, you used the inflammatory and imprecise “boasting” characterization.
While not relying on helpful techniques is a good way of signaling ability, it’s a bad way of boosting performance. The virtue of humility is in taking every precaution even if all seems fine already, or even if the situation looks hopeless.
On the practical question, I think eliminating politics was an inspired decision that should continue to be followed, and I think the lead article was not political; I also think it’s the best post in a good while. Nevertheless, I find the fact that we must avoid politics troubling. If we’re succeeding in making ourselves rational, this—one would think—would lead to political convergence. That would be a nice empirical test of the value and possibility of becoming more rational by the methods we employ, which we should treat as an empirical question. It’s a shame we can’t conduct this test.
I will be very impressed if we can get Aumann agreement on hot political issues.
I suspect that the result on many of them would be convergence to realizing that we don’t know what the best solution is, but that might be my prejudices talking.
It’s worth noting that “we” is ill-defined here.
Supposing that what this site does successfully improves rationality among its participants, then we should expect that someone like me who has only been here for a few months would be less rational than the folks who have been around for years and benefiting from the site.
But a discussion of politics here would not exclude me, so even in that scenario we would expect such a discussion not to lead to convergence.
The proper empirical test, I suppose, would be to identify cohorts based on their tenure here, and conduct a series of such conversations within each such cohort—say, once a year—and evaluate whether a given cohort comes closer to convergence from year to year.
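For what it’s worth, here is a rough sketch of how such a cohort test could be scored, assuming hypothetical survey data in which each member’s position on a question is encoded on a numeric scale; the data, the names, and the choice of within-cohort standard deviation as the convergence measure are all my own illustrative assumptions.

```python
from statistics import pstdev

def convergence_trend(opinions_by_year):
    """Within-cohort spread of opinions for each survey year.
    A spread that shrinks year over year would suggest convergence."""
    return {year: round(pstdev(positions), 3)
            for year, positions in sorted(opinions_by_year.items())}

# Hypothetical positions (0 to 1 scale) for one cohort, surveyed yearly:
cohort = {
    2009: [0.10, 0.90, 0.40, 0.70],
    2010: [0.20, 0.80, 0.45, 0.65],
    2011: [0.30, 0.60, 0.50, 0.55],
}
print(convergence_trend(cohort))
# Comparing this trend across cohorts of different tenure would test
# whether longer exposure to the site predicts faster convergence.
```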
Politics includes much which is a matter of preference, not just accurate beliefs about the world. For example “I like it when I get more money when X is done” is the core of many political issues. Perhaps more importantly, different preferences with respect to aggregation of human experiences can lead to genuine disagreement about political policy even among altruists. For example, an altruist who had values similar to those that Robin Hanson blogs about will inevitably have a political disagreement with me no matter how rational we both are.
Political beliefs should converge. And if that happens, whatever differences remain won’t be resolved by discussion, because there’s nothing left to discuss.
If we can distinguish between preference and accuracy claims, that would be quite a large step towards rationality.
Indeed, but the trouble is of course that often the optimal strategy for promoting one’s preferences is to convince people that opposing them is somehow objectively wrong and delusional, rather than a matter of a fundamental clash of power and interest. (Which of course typically involves convincing oneself too, since humans tend to be bad at lying and good at sniffing out liars, and they appreciate sincerity a lot.)
That said, one of the main reasons why I find discussions on LW interesting is the unusually high ability of many participants to analyze issues in this regard, i.e. to separate correctly the factual from the normative and preferential. The bad examples where people fail to do so and the discourse breaks down tend to stick out unpleasantly, but overall, I’d say the situation is not at all bad, certainly by any realistic standards for human discourse in general.
Vladimir_Nesov:
It would be such a fallacy if I had claimed that one must either reach absolute perfection in this regard or admit being no better than others. In reality, however, I claimed that people who have to avoid any discussion at all that has even indirect and remote implications about sensitive topics for fear of discourse breaking down have no grounds for claiming to be somehow more “rational” than others (controlling of course for variables like intelligence and real-life accomplishment).
In retrospect, yes, I should have expressed myself more diplomatically. Also, I didn’t mean to imply that everyone or even a large part of the participants behave like that. However, it is not at all rare to see people on LW making remarks about “rationality” whose self-congratulatory aspect is, if not explicit, not too terribly subtle either. This, in my opinion, is bad PR already, because qualifying oneself is low status as a general principle, and a combination of such statements with an admission of inability to maintain the quality of discourse about all but the most innocent topics gives the whole thing a tinge of absurdity. That, at least, is my honest impression of how many people are going to see these things, with clear implications for the PR issues.
On the other hand, if the quality of discourse outside of technical topics really cannot be maintained, then the clear solution is to formulate a strict policy for what’s considered on-topic, and enforce it rigorously. That would not only make things function much better, but it would also be excellent from a PR perspective. (Rather than giving off a bad “we can’t handle sensitive topics” impression, it would give off a high status “we don’t want to be bothered with irrelevancies” impression.)
Maybe they do, maybe they don’t, but you didn’t ask. Basically you infer a conclusion here, and claim that no proof to the contrary is therefore possible.
When you have made this argument before, I responded:
It seems inappropriate to me for you to repeat this argument without addressing my response.
JGWeissman,
Please pardon my lack of response to your argument—back in that thread the volume of replies to my comments became too large for me to respond to all of them. Better late than never, though, so here is my response.
I certainly don’t think constant discussions of everyday politics on LW would be interesting or desirable. Someone who wants to do that has countless other places on the internet, tailored to all possible opinions and tastes, and there is absolutely no need to clutter up LW with it. However, what we’re debating is at the other extreme, namely whether there should be a strict censorship (voluntary or not) of all discussions that have even remote implications in politics and other topics that are likely to inflame passions.
I think the answer is no, for several reasons. First, there are interesting questions relevant for issues at the core of LW’s mission statement that inevitably touch on sensitive topics. Second, for some potentially sensitive questions I find extremely interesting (and surely not just I), LW really is a venue where it’s possible to get a uniquely cool-headed and rational analysis, so avoiding those would mean forsaking some of the forum’s greatest potentials. Finally, as I’ve already mentioned, the idea of a self-congratulatory “rationalist” community that in fact suffers from the same problems as any other place whenever it comes to sensitive topics is comically bad PR for whatever causes LW is associated with.
Of course, it may be that LW is not capable of handling sensitive topics after all. But then, in my opinion, the present way it’s constituted doesn’t make much sense, and it would benefit from a reorganization that would impose much more precisely defined topic requirements and enforce them rigorously.
You seem to be restating your position, without actually addressing my point that a policy that takes into account the likely behaviours of LW members of various levels of skill and experience, including those who have recently joined, does not reflect on the capabilities of the experienced, high level members.
If you cannot address this point, you should stop repeating your argument that such rational people should be able to handle political discussion.
JGWeissman:
I don’t see how this objection is specific to sensitive topics. Assuming that regular participants maintain high enough standards, incompetent attempts by newbies to comment on sensitive topics should be effectively discouraged by downvoting, as in all other debates. Even in the most innocent technical discussions, things will go downhill if there is no mechanism in place to discourage unproductive and poorly thought out comments. In either case, if the voting system is ineffective, it means that more stringent moderation is in order.
On the other hand, if even the behavior of regular participants is problematic, then we get back to the problems I was writing about.
In innocent technical discussions, users will generally base their votes only on the merits of the comments they’re voting on. In sensitive political discussions, some will vote based on ideological agreement.
A problem common to both cases is that LessWrong is hesitant to vote anything down below zero, possibly for good morale-related reasons.
I’m not necessarily advocating complete censorship. Special cautionary reminders around political topics and disciplined downvoting might do the trick.
I don’t see evidence for bad PR here. I haven’t seen anyone cite the politics taboo as a reason to shun LessWrong, and in general it isn’t unusual for sites to have rules like this. While it would certainly be embarrassing if the average LessWrong commenter weren’t at least a little more rational than the average internet commenter, productive political discussion between internet commenters not pre-selected for agreement is a notoriously hard problem.
If you’re worried about bad PR, I suspect there’s a better case that bad PR will be caused by LessWrong arriving at conclusions that are true but disreputable.
Sure.
Could someone point me to where the politics taboo is actually articulated? After re-reading Eliezer’s post politics is the mindkiller, he identifies many of the pitfalls of discussing gender politics, but I never got the sense that he advocated prohibiting discussion of controversial political subjects.
steven0461:
That is indeed a good point. Still, I do think my original concern is valid too.
In any case, given the opinions exchanged in this discussion (and other similar ones), I do believe that LW is in need of a clearer official policy for what is considered on-topic. I find commenting here a lot of fun, and what I write is usually well received as far as the votes and replies appear to indicate, but occasional comments like yours leave me with an unpleasant impression that a significant number of people might strongly disapprove of my attitudes and choices of topics. I certainly have no desire to do anything that breeds ill will, but lacking clearer rules, it seems to me that this conflict (assuming it’s significant) is without an obvious resolution, unless we are to treat any complaint as a liberum veto (which I don’t think would be workable as a general principle).
Well, you have sure whetted my curiosity with that. I honestly don’t see anything in the post and the subsequent comments that warrants such grave observations, but it might be my failure of imagination.
Apologies if I sounded snippy, or if I demotivated you from commenting. I like your attitudes and topic choices generally; it’s just that I’m worried about the effects of creating a precedent for people to be talking about such topics on this particular site. Again, I’m not even confident that the effects are harmful on net, but there seems to have been widespread support of the recommendation to avoid politically charged examples, and it bothered me that people seemed to be letting that slip just because it’s what happens by default. In any case, the length of this thread probably suggests I care more about this issue than I actually do, and for now I’ll just agree that it would be nice to have clearer rules and bow out.
Why did you think it was low-probability? I put it at very high probability.
I didn’t think footnotes 1 or 7 were very good examples. The fact that low quality work gets published is not enough to establish the unsoundness of the “academic mainstream”. Given enough journals we should expect that to happen, and we should also expect most hypotheses to be false. Low quality work being cited and relied upon is a more serious problem.
Poser was not firmly dismissing the attempted solution as unsound. He said that there wasn’t enough information given to properly evaluate the idea (although he could speculate on what the methods might have been), which is why it should have been a full paper rather than a letter.
teageegeepea:
Regarding (1), it’s an example arguably showing much more than just low quality work being published. Based on the affair and the accompanying public debates, one gets the impression that in some more or less narrow fields the standards for distinguishing sound work from nonsense have collapsed altogether. What struck me most was not the apparent carelessness or incompetence of the few people directly involved in the affair, but the apparent inability of reputable physicists, even after the affair had become a subject of wide controversy, to come to any clear consensus over whether the work makes any sense. And it’s not as if the dispute was over some deep controversy; it was about whether a given piece of work is a hoax or not. I would expect that in a healthy field a question like that would meet an instant unanimous answer.
Regarding (7), I actually presented it as an example of unsubstantiated work being commendably rejected by the academic mainstream despite its strong seductive qualities.
Whatever words he chose to employ, the mainstream consensus remains that the question is without answer, unmoved by the numerous attempts to answer it by methods similar to the one in that paper. Thus, papers using such methods are effectively rejected by the mainstream, regardless of whether they get more or less harshly worded reviews in the process. Which is in my opinion correct because their attempts at rigor are a house built on sand in terms of their fundamental assumptions.
Are scientists still claiming that the Bogdanovs were hoaxers rather than producers of shoddy work? It seems that the idea arose because they had been TV presenters and the relative recency of the Sokal affair made that possibility salient.
The authors of the linguistics letter never revealed all their assumptions, which is why Poser could not fully critique it. As evidence for your argument you’d have to cite an example where such assumptions were revealed and deemed unsuitable by the academic mainstream.
teageegeepea:
Maybe it wasn’t clear enough from my writing, but this is not an isolated phenomenon. There have been many attempts at quantitative methods along these lines that are supposed to yield numerical estimates of the timing of language divergence. The approach is known as glottochronology (be warned that the Wikipedia article isn’t very good, though), and there’s a large literature discussing it. For a summary of the mainstream criticism, see e.g. the section on glottochronology in Historical Linguistics: An Introduction by Lyle Campbell (you might be able to find it on Google Books preview).
What is important in this context is that the mainstream consensus has never accepted any such estimates into its body of established knowledge, even though they provide superficially plausible answers to tantalizing questions. (This in contrast to the results obtained using the traditional comparative method, which are a matter of consensus.)
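For readers unfamiliar with the method, the core of the classic Swadesh-style calculation looks roughly like the sketch below; the retention constant used is the commonly cited per-millennium figure for the 100-item core-vocabulary list, and all the numbers here are purely illustrative of the kind of estimate the mainstream has declined to accept.

```python
from math import log

def divergence_time_millennia(shared_cognate_fraction, r=0.86):
    """Estimated time since two related languages split, in millennia,
    under the classic glottochronological assumptions."""
    # Both lineages are assumed to lose core vocabulary independently at
    # a constant rate r per millennium, hence the factor of 2.
    return log(shared_cognate_fraction) / (2 * log(r))

# Two languages sharing 70% of cognates on the core list:
print(round(divergence_time_millennia(0.70), 2))   # about 1.2 millennia
```

The mainstream objections are precisely to the assumptions baked into this kind of formula (a constant retention rate, a fixed word list, independence of the two lineages), not to the arithmetic itself.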
Glottochronology seems to deal primarily with vocabulary and cognates. Many criticisms there aren’t on point for examining trends of changes in conjugation of verbs. The latter approach seems both less suspect and less potentially useful.
Do you know of any concrete breakthroughs in historical linguistics achieved by studying trends in verb conjugation?
That paper you link to isn’t very impressive. It dredges the English data to derive a rule that I’d bet would be falsified if one were to study other languages.
Off the top of my head, I can think of one striking counterexample. Proto-Slavic had a small class of irregular verbs (the so-called athematic ones), with only five verbs. Yet in modern Croatian (and Bosnian/Serbian/whatever), the 1st person singular of this irregular conjugation has spread to nearly all verbs, and is now the regular one—with only two exceptions. (In Russian, in contrast, there are only two verbs that still have the old athematic 1sg suffix. In various other Slavic languages, its current extent can be anywhere in-between.)
So we have a language where the entire verbal system analogized to a tiny irregular class. With this in mind, I find it absurd to postulate such simple general rules about irregular verbs.
I wasn’t arguing that they aren’t dismissed, just that perhaps they shouldn’t be. Arguing that there are no accepted breakthroughs is weak evidence against that: if the method is rejected by the mainstream, the strongest favorable evidence we might have expected to see would be theories proven by other methods after being found in hypothesis space by the questionable method. I don’t know if this is even possible in linguistics.
Even if the scope of the rule is the most mistaken part, it could still be useful. The rule as stated might be specific to Germanic languages but be an instantiation of a more general concept.
The counterexample you spent the most words describing would be the typical strongest sort to give to a hypothesis in that you described the most extreme cases of irregulars becoming regular. But the hypothesis of the paper, read charitably at least, is not challenged by it. It allows for “the 1st person singular of this irregular conjugation has spread to nearly all verbs, and is now the regular one,” as it’s about the rate of change in conjugation once a regular rule takes over and begins spreading. At some point in Croatian the irregular conjugation had enough momentum to fit under a moderately changed version of the hypothesis.
The counterexamples you only hinted at would be stronger. Are there coexisting regular rules of conjugation in other Slavic languages, with irregulars assimilated variously into one or another regular rule? If so, I think that wouldn’t challenge the thrust of the argument unless verbs changed between rules.
My entirely uninformed perception of the verb-based method is that it has low sensitivity but isn’t invalid compared to the other linguistic methods.
Ideology is a quite interesting factor.
Hypnosis is a nice example. For a long time there wasn’t good academic research about the topic because of ideological conflict. At the moment we know that it can be used to lower pain, but the exact extent of what it can do is still quite unclear.
Hypnosis also has another trait: there’s no financial incentive to research it in the way that drugs get researched.
It’s also devilishly hard to accurately research. To isolate the effects of hypnosis from the desire of the subject to please the researcher, you need to administer a placebo hypnosis. However, this is very difficult, as hypnosis by its very nature creates an immediate experiential effect on the patient that you somehow must replicate in the placebo (so the subjects can’t just guess whether they got real hypnosis) without actually performing hypnosis.
Also, there really aren’t any good precise questions to ask about hypnosis. Does it help people quit smoking? Yes, but so does placebo. Does it help people deal with pain? Yes, but again so does placebo. Before you start damning people for not investigating hypnosis, describe precisely what you want them to figure out.
This is an amusing sentence for me because I suspect that the way hypnosis works is by taking advantage of the desire of the subject to please the hypnotist. In other words, I suspect that a placebo hypnosis is just a hypnosis.
You aren’t looking from the perspective of a patient.
A patient might ask themselves: should I take morphine or should I get a hypnosis treatment? What’s more likely to help me: A or B? You don’t need to isolate anything. For the patient it’s irrelevant why A beats B.
Once you add ideology it becomes important why A beats B.
Allows you to test hypnosis against morphine, not hypnosis against celery sticks.
Never heard of him.
Russia is a small nation trapped in a large nation’s body :).
http://lesswrong.com/lw/4ba/some_heuristics_for_evaluating_the_soundness_of/ckd2
Can you check a favorite theory of mine?
If we categorize nations as habitual war winners / war losers, occupiers / occupied, strong or weak, we see the following. Pretty much every ideology or ideological keyword was created by the winners, by the strong at the height of their power: left and right were invented just before the French Revolution, liberalism and conservatism descend from the Gladstone-Disraeli era, and so on. Ultimately the ideologies are all about how to handle conflict INSIDE a society, like rich vs. poor, state vs. capitalists, religious vs. atheists and so on. All this because the winning, strong nations could afford to have such internal conflicts, as they were not threatened much from abroad. And the winners being winners, they exported their culture and ideologies, so now anywhere you go on this planet you find people who describe themselves as left or right, liberal or conservative, but often these are meaningless terms. (Boris “all power to the presidency, fsck parliamentarism, charge ’em with tanks” Yeltsin as a “liberal”, really?)
However, these ideological categories do not reflect the actual experiences of weaker, defeated nations. They could never really afford such internal conflicts; external threats mattered more than internal ones. Their experience is more one of internal cooperation in defense. Their primary political categories are 1) the rebel or patriot, who defends the country, and 2) the quisling, who cooperates with foreign, often occupying, powers.
This does not map onto the conventional Western left/right or liberal/conservative divide. The 1) patriot-rebel is often nationalistic, even racist, hates cosmopolitanism, but holds leftish economic views, and ultimately his goals are lefty in the sense of being liberatory and emancipatory on the grand scale: independence of weaker nations both politically and economically, national self-determination, anti-colonialism and all that. However, he will have little patience for lifestyle liberalism; rather, he will have a warrior ethic that requires social conservatism about gender roles, gays, etc. The 2) quisling-cooperator will be a cosmopolite, often coming across as enlightened, humanist, clever and non-provincial, but ultimately he is selling out an oppressed and exploited population to ruthless international profit-making forces, so you will often shockingly discover how little empathy he has with the poor of his own nation—all those people who dug ditches all their lives and have nothing to show for it should just have modernized themselves and adapted to capitalism better instead of being stupid, smelly, superstitious peasants—roughly like that.
And these two frameworks really don’t map onto each other well.
Sometimes I try to “translate” between Western Europeans and Russians, i.e. trying to get people to understand each other’s political views better in order to reduce the tensions we tend to have these days, because it is not smart to be trading hatred with the guys who heat your house in the winter :)
I tend to tell Russians that basically the way Westerners see things is that they are far more afraid of their own leaders than of foreign forces. Their No. 1 goal is to prevent tyranny at home; this is what all the talk about liberal democracy reduces to, and this is why they call Putin an anti-democrat, an anti-liberal, a tyrant in the making. Russians simply tend not to understand this. They think defending the country is far more important than preventing tyranny; they fear foreign forces far more than their own leaders. They are aware of Stalin’s many crimes against his own people as well, but due to his leadership role in the Great Patriotic War he is still seen somewhere between mixed and somewhat positive. They simply don’t see why a leader of their own would be more dangerous than external powers. Westerners see it the opposite way. They tend to see emphasizing foreign threats and using them to sell authoritarianism at home as pretty much a Nazi trick. Which it is, but it is essentially just an exploitation of a basic tribal trait that was there all the way through history and prehistory. And this divide, I think, is not even new; it is not even about the modern ideologies created in the Age of Enlightenment. Having these kinds of internal conflicts, fearing oppression and tyranny at home far more than occupation from abroad, has been the defining trait of the West for as long as it has existed: it was already there in Shakespeare’s Macbeth, in Cicero’s speeches, in Athenian democracy. Every ideology the West created, left or right, is based on this rift: not trusting one’s own leaders completely. This simply does not work in a non-Western environment where foreign occupation or influence is seen as far more dangerous than homegrown tyranny. And again this all reduces to being habitual war winners / war losers, occupiers / occupied.
What do you think?
I think the problem here is when you try to apply it to Russians, who are actually very much on the side of habitual military victory (albeit often in scorched-earth form), and in fact have crossed over into habitual colonialism of their own quite regularly.
The argument works well for some place like South America, which hasn't had sovereignty or hosted a homegrown empire for hundreds of years, or maybe India and Africa. But when you try to apply it to much of the rest of the "anti-colonialist" Old World (i.e. Russia, East Asia, Southeast Asia, West Asia aka the Middle East), you mostly find that the societies so obsessed with repelling foreign invaders are mainly just butthurt that, in the 19th and 20th centuries, they lost for the very first time.
I'd say the major point of my argument is the danger of internal tyranny vs. the danger of foreign attack, and the minor point is that those who tend to lose wars feel the external danger even more strongly. But even a nation that wins, if it fights a lot of wars close to home (in Russia's case: Poland, the Tatars/Mongols, etc.), will tend to focus more on external dangers than on potential inner tyrannies.
Close to home is a key point. Here it is not really helpful that we have one noun, war, for both experiences. It is an entirely different thing to send soldiers far away, where the worst that can happen is that they won't return, than to live in constant danger of some troops visiting your village for looting, rape and arson. The second is probably so much more stressful that it would be more expressive to have two different nouns for them.
Let me put it differently: it seems that focusing on external danger is the default mode of humankind. It is a really unique and special thing that the West invented this whole "don't trust your own leaders much either" attitude, which perhaps is what defines the West as such, since it was already there in Greco-Roman times. You could call it the invention of politics as such, very literally: Aristotle's Politika was already about the different kinds of Greek constitutions, which reduce to different amounts and ways of trusting leaders: monarchy, aristocracy, democracy, politeia. But this is not humankind's default mode. The default mode is to focus on defeating the external enemy, which means there is no politics as such: just follow whatever leader there happens to be, as long as he seems to be winning. When he is not, someone will take his place.
So leader-distrust is all of politics in the Western sense: not only liberal politics, not only modern politics, but its whole history. Even someone arguing for theocratic monarchy or something similar is being political in the sense that there is an argument at all. I don't really know how to formulate it better, but this is in stark contrast with the non-political view outside the West, where you simply follow whoever is there as long as he is winning, so politics is replaced by fighting the enemy.
Thus the West is completely mistaken when it tells others that their politics is wrong because it is, say, not democratic enough. It is not at all clear that they want politics in the first place.
If we attempt to take your theory as making serious predictions, you’ve completely failed to explain why Germany or Poland want to operate as democratic societies but Russia doesn’t care. All three of these countries have repeated invasion as their original historical experience prior to the formation of the nation-state—especially Poland, which has been the regular victim of both of its neighbors’ imperial ambitions.
And yet the Poles have a politics beyond “Fight Russia”.
Russian xenophobia isn’t really an argument that Russia faces fundamentally different historical-material circumstances from, say, the countries Russia regularly invades and colonizes. It’s just evidence that Russians have been fed quite a lot of propaganda designed to make them fear the outside world.
They’re not dealing with new and original existential challenges. They just got unlucky in which form of politics took root there. The neoliberal “shock therapy” after the fall of the Soviet Union certainly didn’t help.
Interesting.
I co-operate, you collaborate, he is a quisling.
Aside from the question of who is right, I think there is a second or third axis, dependence vs. independence, even in Western countries, although it's a minority interest. Populist parties, as they are often known, want freedom from foreign influence, whether it's the States, the EU, or immigrants. Superficially, populist parties seem to be on the right, but people often profess themselves puzzled why they back fairly leftist economics, such as a strong welfare state (albeit for genuine Freedonians). That's easily explained, though, by their drawing support from poorer, less educated voters, who need those services. In medium-sized countries, educated elites recognise the influence of large power blocs, and aim for compromises that aren't too unfavourable. Micronations are only too happy to become protectorates; it is advantageous for them.
Regarding endnote [4]: I’d be as interested in examples where we should read contrarian history as in any of your other examples; I’m interested in history. However, I think that you’d probably fall into mind-killing territory.
ETA: Thanks for the suggestions!
Google “Mencius Moldbug”, “Unqualified Reservations”. Read until you get bored.
Alternatively, read Thomas Carlyle (a long-dead historian) or actual primary documents. The TIME magazine archives are pretty cool for this, as is Google Books.
To give some concrete examples, some topics where the conventional wisdom can be very inaccurate are, for example, wars and revolutions that have significant ideological bearing (like e.g. the world wars, or the French, American, or 1848 revolutions), and the evaluations of the historical performance of various systems of governance.
For some general contrarianism, I second the Moldbug recommendation. Be warned, however, that his writing features some spectacularly good insight but also some serious blind spots, so caveat lector. Generally, worthwhile contrarian sources tend to be good on some particulars but bad on others, so it’s not like you can get a fully accurate opinion on any given topic from a single contrarian author.
But you can’t “confirm or disprove” your heuristics unless you have independent access to the truth about the health of the various academic disciplines. All you can do is to compare the opinions generated by your heuristics with other people’s opinions.
For what it’s worth, personally, I agree with most of your opinions, but have reservations about the heuristics. Two places where I disagree with your opinions are macroeconomics and climate modeling. Both are politicized, and it shows up in the press-release science, but I think that anyone familiar with the fields is capable of filtering out that noise. So, in those fields, I think it is safe to trust the orthodox positions.
I can’t even identify the orthodox positions in macroeconomics.
Hell Yes!
Macroeconomics is either a construction site or a graveyard
Really? If you were to take the four top-selling Macroeconomics textbooks for undergrad Econ majors, or the four top-selling Macroeconomics textbooks for economics graduate schools, those books would be presenting different models? That would surprise me, though I have to admit, I haven’t looked at a modern econ textbook in thirty years.
The humanities. Literary theory, culture and media studies, as well as philosophy (continental philosophy in particular) are fields filled with nonsense. The main problem with these fields stems, in my opinion, from the lack, or at least the difficulty, of objective judgment. In literary theory, for example, it's more important to be interesting than to be right.
I have to admit that they fail the heuristic of ideological interest as well. Even if we ignore for a moment Nobel and other prizes in literature (which have always been seriously biased), as well as culture studies in totalitarian states (where they were completely ideologized), we see that the most influential “schools” of literary theory in Western academia are ideologically charged: Feminism, Marxism, Postcolonialism etc.
It's a shame, especially because there is plenty of low-hanging fruit in literary studies. Whatever you think about it, literary criticism has plenty of possible objectives that are both interesting and useful:
to help the reader select texts that merit his attention; the critic should serve the public as a filter, as an independent evaluator of texts, as someone with the hidden knowledge of how to distinguish "good" books from "mediocre" ones;
to help the reader gain a better understanding of existing texts;
to help the writer write better: deeper, clearer, using a richer set of literary devices and methods.
There are lists like 1000 Books to Read Before You Die for the first objective (which are mostly useless, but that's not the point); there are books like Unlocking Harry Potter: Five Keys for the Serious Reader for the second; there are books like How to Write a Damn Good Novel for the third; but apparently none of these objectives is interesting enough for literature departments at elite universities.
I’m not knowledgeable about the whole field of literary studies and I’m sure there’s plenty to criticize there. But at least the first two objectives you mentioned are actually things that literary critics do, at least sometimes.
If you read something like a Norton Critical Edition of a classic book, the introduction and critical essays are written by literary scholars, and they can be lifesavers for the recreational reader. Historical context and formal analysis really help with some books. I read Moby Dick back in high school along with a lot of commentary off JSTOR, and that probably doubled the value of the book for me. Think about Shakespeare: you really want commentary and context for that. You get more out of it if you dip into the scholarship, even a little.
As for the curatorial function, a lot of that is taken over by book reviewers, but Arts and Letters Daily probably falls into that category.
How to write well is more properly the function of creative writing departments than literary criticism departments, if you want to look for it in academia. The two branches have pretty much separated by now. (Possibly interesting: Elif Batuman’s prickly take on the difference between creative writing degrees and literature degrees.)
That these functions need to be "taken over" by someone else is exactly what signals the existence of a problem. They are taken over in the same way that, in computer science, programming language design was taken over by smart hackers like van Rossum, operating system design by smart hackers like Torvalds, search engine design by smart hackers like Brin & Page, etc.; you get the point.
Umm, I would say Guido is seriously harmed by his lack of a strong background in programming language theory. I agree he came up with a useful language, but I'll take Ruby's Lisp-influenced design over Python any day. Heck, Larry Wall is up to his ears in programming language theory in the design of Perl 6, and I think it will be better for it (though ultimately still focused on being a practical tool).
Torvalds was hardly uninfluenced by academic OS design. He just didn’t buy into the overhyped microkernel approach.
Oh, and Brin & Page are straight out of academic research. Their PageRank algorithm is pure academic CS. They just applied it.
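(For anyone who hasn't seen it: the core of PageRank really is textbook material, essentially a power iteration toward the principal eigenvector of the link graph's transition matrix. Below is a minimal Python sketch of that idea; the toy link graph and the 0.85 damping factor are illustrative assumptions for the example, not anything from Google's actual system.)

    # A minimal sketch of PageRank by power iteration (illustrative toy example,
    # not Google's actual implementation). 'links' maps each page to the pages
    # it links to; 0.85 is the conventional damping factor from the paper.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}        # start from a uniform distribution
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:                   # dangling page: spread its rank evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # e.g. pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]})

The engineering achievement was making this scale to billions of pages, but the algorithm itself came straight out of the academic literature, which was the point above.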
Could you give me an example where Ruby is clearly superior? I’m currently transitioning from Python to Ruby and would appreciate some strong selling point I haven’t noticed yet.