I interpret you as making the following criticisms:
1. People disagree with each other, rather than use Aumann agreement, which proves we don’t really believe we’re rational
Aside from Wei’s comment, I think we also need to keep track of what we’re doing.
If we were to choose a specific empirical fact or prediction—like “Russia will invade Ukraine tomorrow”—and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average—then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.
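(As a toy sketch of what "take the average and trust that number" amounts to: the snippet below pools some made-up probability forecasts, not actual Prediction Book data, using both the simple mean described above and a log-odds average as one common alternative pooling rule.)

```python
import math

def pool_forecasts(probs, method="mean"):
    """Combine individual probability forecasts into a single number.

    "mean" is the simple average described above; "logodds" averages in
    log-odds space, which is less dominated by a few extreme forecasts.
    """
    if method == "mean":
        return sum(probs) / len(probs)
    if method == "logodds":
        logits = [math.log(p / (1 - p)) for p in probs]
        avg = sum(logits) / len(logits)
        return 1 / (1 + math.exp(-avg))
    raise ValueError(f"unknown method: {method}")

# Hypothetical forecasts for "Russia will invade Ukraine tomorrow"
forecasts = [0.02, 0.05, 0.10, 0.01, 0.20]
print(pool_forecasts(forecasts))             # ~0.076
print(pool_forecasts(forecasts, "logodds"))  # ~0.047
```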
But this doesn’t preclude discussion. Aumann agreement would be a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:
We can both increase our understanding of the issue.
We may find a subtler position we can both agree on. If I say “California is hot” and you say “California is cold”, instead of immediately jumping to “50% probability either way” we can work out which parts of California are hot versus cold at which parts of the year.
We may trace part of our disagreement back to differing moral values. If I say “capital punishment is good” and you say “capital punishment is bad”, then it may be right for me to adjust a little in your favor since you may have evidence that many death row inmates are innocent, but I may also find that most of the force of your argument is just that you think killing people is never okay. Depending on how you feel about moral facts and moral uncertainty, we might not want to Aumann adjust this one. Nearly everything in politics depends on moral differences at least a little.
We may trace our disagreement back to complicated issues of worldview and categorization. I am starting to interpret most liberal-conservative issues as a tendency to draw Schelling fences in different places and then correctly reason with the categories you’ve got. I’m not sure if you can Aumann-adjust that away, but you definitely can’t do it without first realizing it’s there, which takes some discussion.
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it’s great that we have discussions—even heated discussions—first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
2. It is possible that high IQ people can be very wrong and even in a sense “stupidly” wrong, and we don’t acknowledge this enough.
I totally agree this is possible.
The role that IQ is playing here is that of a quasi-objective Outside View measure of a person’s ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of “I feel I’m definitely right; this other person has nothing to teach me.”
So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong—like Plantinga apparently being a very smart individual whose arguments are nevertheless terribly flawed. The second failure mode is the one where we’re too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says “I’m sure that creationism is true, and it doesn’t matter whether really fancy scientists who use big words tell me it isn’t.”
We end up in a kind of bravery debate situation here, where we have to decide whether it’s worth warning people more against the first failure mode (at the risk it will increase the second), or against the second failure mode more (at the risk that it will increase the first).
And, well, studies pretty universally find everyone is overconfident of their own opinions. Even the Less Wrong survey finds people here to be really overconfident.
So I think it’s more important to warn people to be less confident they are right about things. The inevitable response is “What about creationism?!” to which the counterresponse is “Okay, but creationists are stupid, be less confident when you disagree with people as smart or smarter than you.”
This gets misinterpreted as IQ fetishism, but I think it’s more of a desperate search for something, anything to fetishize other than our own subjective feelings of certainty.
3. People are too willing to be charitable to other people’s arguments.
This is another case where I think we’re making the right tradeoff.
Once again there are two possible failure modes. First, you could be too charitable, and waste a lot of time engaging with people who are really stupid, trying to figure out a smart meaning to what they’re saying. Second, you could be not charitable enough by prematurely dismissing an opponent without attempting to understand her, and so perhaps missing out on a subtler argument that proves she was right and you were wrong all along.
Once again, everyone is overconfident. No one is underconfident. People tell me I am too charitable all the time, and yet I constantly find I am being not-charitable-enough, unfairly misinterpreting other people’s points, and so missing or ignoring very strong arguments. Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice “be less charitable” is more helpful than the advice “be more charitable”.
As I said above, you can try to pinpoint where to apply this advice. You don’t need to be charitable to really stupid people with no knowledge of a field. But once you’ve determined someone is in a reference class where there’s a high prior on them having good ideas—they’re smart, well-educated, have a basic commitment to rationality—advising someone to be less charitable to these people seems a lot like advising people to eat more and exercise less—it might be useful in a couple of extreme cases, but I really doubt it’s where the gain for the average person lies.
In fact, it’s hard for me to square your observation that we still have strong disagreements with your claim that we’re too charitable. At least one side is getting things wrong. Shouldn’t they be trying to pay a lot more attention to the other side’s arguments?
I feel like utter terror is underrated as an epistemic strategy. Unless you are some kind of freakish mutant, you are overconfident about nearly everything and have managed to build up very very strong memetic immunity to arguments that are trying to correct this. Charity is the proper response to this, and I don’t think anybody does it enough.
4. People use too much jargon.
Yeah, probably.
There are probably many cases in which the jargony terms have subtly different meaning or serve as reminders of a more formal theory and so are useful (“metacontrarian” versus “showoff”, for example), but probably a lot of cases where people could drop the jargon without cost.
I think this is a more general problem of people being bad at writing—“utilize” vs. “use” and all that.
5. People are too self-congratulatory and should be humbler
What’s weird is that when I read this post, you keep saying people are too self-congratulatory, but to me it sounds more like you’re arguing people are being too modest, and not self-congratulatory enough.
When people try to replace their own subjective analysis of who can easily be dismissed (“They don’t agree with me; screw them”) with something based more on IQ or credentials, they’re being commendably modest (“As far as I can tell, this person is saying something dumb, but since I am often wrong, I should try to take the Outside View by looking at somewhat objective indicators of idea quality.”)
And when people try to use the Principle of Charity, once again they are being commendably modest (“This person’s arguments seem stupid to me, but maybe I am biased or a bad interpreter. Let me try again to make sure.”)
I agree that it is an extraordinary claim to believe anyone is a perfect rationalist. That’s why people need to keep these kinds of safeguards in place as saving throws against their inevitable failures.
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it’s great that we have discussions—even heated discussions—first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer’s disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don’t know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn’t remotely surprise me. Et cetera.
The role that IQ is playing here is that of a quasi-objective Outside View measure of a person’s ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of “I feel I’m definitely right; this other person has nothing to teach me.”
So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong—like Plantinga apparently being a very smart individual whose arguments are nevertheless terribly flawed. The second failure mode is the one where we’re too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says “I’m sure that creationism is true, and it doesn’t matter whether really fancy scientists who use big words tell me it isn’t.”
I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them. This is actually a thing that happens with some creationists—people thinking “I’m smart, so I can see those evolutionary biologists are talking nonsense.” Creationists would do better to attend to the domain expertise of evolutionary biologists. (See also: my post on the statistician’s fallacy.)
I’m also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?
Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice “be less charitable” is more helpful than the advice “be more charitable”.
As I said above, you can try to pinpoint where to apply this advice. You don’t need to be charitable to really stupid people with no knowledge of a field. But once you’ve determined someone is in a reference class where there’s a high prior on them having good ideas—they’re smart, well-educated, have a basic commitment to rationality—advising someone to be less charitable to these people seems a lot like advising people to eat more and exercise less—it might be useful in a couple of extreme cases, but I really doubt it’s where the gain for the average person lies.
On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you’re probably right that people aren’t too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people’s in-group. And you seem to endorse this selective charity.
I’ve already said why I don’t think high IQ is super-relevant to deciding who you should read charitably. Overall education doesn’t strike me as super-relevant either. In the US, better educated Republicans are more likely to deny global warming and think that Obama’s a Muslim. That appears to be because (a) you can get a college degree without ever taking a class on climate science and (b) more educated conservatives are more likely to know what they’re “supposed” to believe about certain issues. Of course, when someone has a Ph.D. in a relevant field, I’d agree that you should be more inclined to assume they’re not saying anything stupid about that field (though even that presumption is weakened if they’re saying something that would be controversial among their peers).
As for “basic commitment to rationality,” I’m not sure what you mean by that. I don’t know how I’d turn it into a useful criterion, aside from defining it to mean people I’d trust for other reasons (e.g. endorsing standard attitudes of mainstream academia). It’s quite easy for even creationists to declare their commitment to rationality. On the other hand, if you think someone’s membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I’m calling that self-congratulatory nonsense.
And that’s the essence of my reply to your point #5. It’s not people having self-congratulatory attitudes on an individual level. It’s the self-congratulatory attitudes towards their in-group.
I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer’s disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don’t know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn’t remotely surprise me. Et cetera.
Are ethics supposed to be Aumann-agreeable? I’m not at all sure the original proof extends that far. If it doesn’t, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.
I don’t think it would cover Eliezer vs. Robin, but I’m uncertain how “real” that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other’s estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I’m not sure they wouldn’t agree to do so.
Even if they did, it might be consistent with their current actions: if there’s a 20% chance of ems and a 20% chance of foom (plus a 60% chance of an unpredictable future, a cishuman future, or extinction), we would still need intellectuals and organizations planning specifically for each option, the same way I’m sure the Cold War-era US had different branches planning for a nuclear attack by the USSR and a nonnuclear attack by the USSR.
I will agree that there are some genuinely Aumann-incompatible disagreements on here, but I bet it’s fewer than we think.
I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them.
So I want to agree with you, but there’s this big and undeniable problem we have and I’m curious how you think we should solve it if not through something resembling IQ.
You agree people need to be more charitable, at least toward out-group members. And this would presumably involve taking people whom we are tempted to dismiss, and instead not dismissing them and studying them further. But we can’t do this for everyone—most people who look like crackpots are crackpots. There are very likely people out there who look like crackpots but are actually very smart (the cryonicists seem to be one group we can both agree on), and we need a way to find them so we can pay more attention to them.
We can’t use our subjective feeling of is-this-guy-a-crackpot-or-not, because that’s what got us into this problem in the first place. Presumably we should use the Outside View. But it’s not obvious what we should be Outside Viewing on. The two most obvious candidates are “IQ” and “rationality”, which when applied tend to produce IQ fetishism and in-group favoritism (since until Stanovich actually produces his rationality quotient test and gives it to everybody, being in a self-identified rationalist community and probably having read the whole long set of sequences on rationality training is one of the few proxies for rationality we’ve got available).
I admit both of these proxies are terrible. But they seem to be the main thing keeping us from, on the one side, auto-rejecting all arguments that don’t sound subjectively plausible to us at first glance, and on the other, having to deal with every stupid creationist and homeopath who wants to bloviate at us.
There does seem to be something useful that we do in this sphere. If someone with a site written in ALL CAPS and size 20 font claims that Alzheimer’s is caused by a bacterium, I dismiss it without a second thought, because we all know it’s a neurodegenerative disease. But when a friend who has no medical training, but who I know is smart and reasonable, recently made this claim, I looked it up, and sure enough there’s a small but respectable community of microbiologists and neuroscientists investigating whether Alzheimer’s is triggered by an autoimmune response to some bacterium. It’s still a long shot, but it’s definitely not crackpottish. So somehow I seem to have some ability to use the source of an implausible claim to decide whether to investigate it further, and I’m not sure how to describe the basis on which I make this decision beyond “IQ, rationality, and education”.
I’m also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?
Well, empirically I did try to investigate natural law theology based on there being a sizeable community of smart people who thought it was valuable. I couldn’t find anything of use in it, but I think it was a good decision to at least double-check.
On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you’re probably right that people aren’t too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people’s in-group. And you seem to endorse this selective charity.
If you think people are too uncharitable in general, but also that we’re selectively charitable to the in-group, is that equivalent to saying the real problem is that we’re not charitable enough to the out-group? If so, what subsection of the out-group would you recommend we be more charitable towards? And if we’re not supposed to select that subsection based on their intelligence, rationality, education, etc, how do we select them?
And if we’re not supposed to be selective, how do we avoid spending all our time responding to total, obvious crackpots like creationists and Time Cube Guy?
On the other hand, if you think someone’s membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I’m calling that self-congratulatory nonsense. And that’s the essence of my reply to your point #5. It’s not people having self-congratulatory attitudes on an individual level. It’s the self-congratulatory attitudes towards their in-group.
Yeah, this seems like the point we’re disagreeing on. Granted that all proxies will be at least mostly terrible, do you agree that we do need some characteristics that point us to people worth treating charitably? And since you don’t like mine, which ones are you recommending?
I question how objective these objective criteria you’re talking about are. Usually when we judge someone’s intelligence, we aren’t actually looking at the results of an IQ test, so that’s subjective. Ditto rationality. And if you were really that concerned about education, you’d stop paying so much attention to Eliezer or people who have a bachelor’s degree at best and pay more attention to mainstream academics who actually have PhDs.
FWIW, the actual heuristics I use to determine who’s worth paying attention to are:
What I know of an individual’s track record of saying reasonable things.
Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
Looking for other crackpot warning signs I’ve picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.
Which may not be great heuristics, but I’ll wager that they’re better than IQ (wager, in this case, being a figure of speech, because I don’t actually know how you’d adjudicate that bet).
It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: “The thing that people forget sometimes, is that even though appearances can be misleading, they’re usually not.”
You’ve also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I’m fuzzy on what exactly they think about cryonics (more here).
Your heuristics are, in my opinion, too conservative or not strong enough.
Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you’re a creationist, you can rule out paying attention to Richard Dawkins, because if he’s wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you’re anti-transhumanist, you can rule out cryonicists, because they tend to say lots of other unreasonable things, like that computers will be smarter than humans, or that there can be “intelligence explosions”, or that you can upload a human brain.
Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases—you bring up the case of mainstream academic philosophy, and although I’m less certain than you are there, I admit I am very skeptical of them. So when we say we need heuristics to find ideas to pay attention to, I’m assuming we’ve already started by assuming mainstream academia is always right, and we’re looking for which challenges to them we should pay attention to. I agree that “challenges the academics themselves take seriously” is a good first step, but I’m not sure that would suffice to discover the critique of mainstream philosophy. And it’s very little help at all in fields like politics.
The crackpot warning signs are good (although it’s interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out, and it also seems like people have a bad habit of being very sensitive to crackpot warning signs the opposing side displays and very obtuse to those their own side displays). But once again, these signs are woefully inadequate. Plantinga doesn’t look a bit like a crackpot.
You point out that “Even though appearances can be misleading, they’re usually not.” I would agree, but suggest you extend this to IQ and rationality. We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it’s hard to remember that stupid things are still much, much likelier to be believed by stupid people.
(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of “deserve further investigation and charitable treatment”.)
You are right that I rarely have the results of an IQ test (or Stanovich’s rationality test) in front of me. So when I say I judge people by IQ, I think I mean something like what you mean when you say “a track record of making reasonable statements”, except basing “reasonable statements” upon “statements that follow proper logical form and make good arguments” rather than ones I agree with.
So I think it is likely that we both use a basket of heuristics that include education, academic status, estimation of intelligence, estimation of rationality, past track record, crackpot warning signs, and probably some others.
I’m not sure whether we place different emphases on those, or whether we’re using about the same basket but still managing to come to different conclusions due to one or both of us being biased.
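(If it helps to make “same basket, different emphases” concrete, here is a deliberately crude toy model. Nobody literally computes a number like this, and every weight and score below is made up; the point is just that two people can share the same basket and still rank the same contrarian differently by weighting it differently.)

```python
# Toy "who gets read charitably" score. All weights and scores are
# hypothetical; each score is a gut estimate in [0, 1].
WEIGHTS = {
    "education": 1.0,
    "academic_status": 2.0,
    "estimated_intelligence": 1.5,
    "estimated_rationality": 1.5,
    "track_record": 2.5,
    "crackpot_warning_signs": -3.0,  # warning signs count against charity
}

def charity_score(person, weights=WEIGHTS):
    """Weighted sum over the basket of heuristics discussed above."""
    return sum(weights[k] * person.get(k, 0.0) for k in weights)

# A hypothetical contrarian: smart, thin credentials, a few warning signs.
contrarian = {
    "education": 0.4, "academic_status": 0.1,
    "estimated_intelligence": 0.9, "estimated_rationality": 0.7,
    "track_record": 0.6, "crackpot_warning_signs": 0.2,
}
print(charity_score(contrarian))  # higher score = more worth reading charitably
```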
Has anyone noticed that, given the fact that most of the material on this site is essentially about philosophy, “academic philosophy sucks” is a Crackpot Warning Sign, i.e. “don’t listen to the hidebound establishment”?
So I normally defend the “trust the experts” position, and I went to grad school for philosophy, but… I think philosophy may be an area where “trust the experts” mostly doesn’t work, simply because with a few exceptions the experts don’t agree on anything. (Fuller explanation, with caveats, here.)
Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can’t really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that’s also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).
Since there is no consensus among philosophers, respecting philosophy is about respecting the process. The negative claims LW makes about philosophy are indeed similar to the negative claims philosophy makes about itself. LW also makes the positive claim that it has a better, faster method than philosophy, but in fact it just has a truncated version of the same method.
But Alexander misunderstands me when he says I accuse Yudkowsky “of being against publicizing his work for review or criticism.” He’s willing to publish it–but only to enlighten us lesser rationalists. He doesn’t view it as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That’s a problem.
Or, as I like to put it, if you half-bake your bread, you get your bread quicker... but it’s half-baked.
You might be interested in this article and this sequence (in particular, the first post of that sequence). “Academic philosophy sucks” is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.
Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with.
Counterexample: your own investigation of natural law theology. Another: your investigation of the Alzheimer’s bacterium hypothesis. I’d say your own intellectual history nicely demonstrates just how to pull off the seemingly impossible feat of detecting reasonable people you disagree with.
But then we expect mainstream academia to be wrong in a lot of cases—you bring up the case of mainstream academic philosophy, and although I’m less certain than you are there, I admit I am very skeptical of them.
With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions, which are independently pretty reasonable) philosophers basically don’t agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.
The crackpot warning signs are good (although it’s interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out...
Examples?
We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it’s hard to remember that stupid things are still much, much likelier to be believed by stupid people.
(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of “deserve further investigation and charitable treatment”.)
I don’t think “smart people saying stupid things” reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that “I know God really exists and I have no doubts about it,” which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)
So when I say I judge people by IQ, I think I mean something like what you mean when you say “a track record of making reasonable statements”, except basing “reasonable statements” upon “statements that follow proper logical form and make good arguments” rather than ones I agree with.
Proper logical form comes cheap, just add a premise which says, “if everything I’ve said so far is true, then my conclusion is true.” “Good arguments” is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say “this guy’s arguments are terrible,” and you say, “you should read those arguments more charitably,” it doesn’t do much good for you to defend that claim by saying, “well, he has a track record of making good arguments.”
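(To spell out why the trick works, here is the schema, purely as an illustration: whatever the premises and conclusion, adding that one bridging premise makes the argument formally valid by modus ponens, even if every premise is nonsense.)

```latex
% Toy schema, not from the original comment: any premises, any conclusion,
% plus one bridging premise, and the argument is formally valid.
P_1,\; P_2,\; \ldots,\; P_n,\quad
(P_1 \land P_2 \land \cdots \land P_n) \rightarrow C
\;\;\vdash\;\; C
```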
I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions.
But again, I don’t feel like that’s strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?
Examples?
Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that “my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age.”
So Marshall decided that, since he couldn’t get anyone to fund a study, he would study it on himself: he drank a culture of the bacteria and got really sick.
Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission “talked about alien babies being adopted by Nancy Reagan”, before they made it into legitimate medical journals.
I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index...”believes the establishment is ignoring him because of their biases”, “believes his discovery will instantly solve a centuries-old problem with no side effects”, “does his studies on himself”, “studies get published in tabloid rather than journal”, but these were just things he naturally felt or had to do because the establishment wouldn’t take him seriously and he couldn’t do things “right”.
I don’t think “smart people saying stupid things” reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that “I know God really exists and I have no doubts about it,” which is maybe less than the general public but still a sizeable minority
I think it is much, much less than the general public, but I don’t think that has as much to do with IQ per se as with academic culture. But although I agree it’s interesting that IQ doesn’t predict correct beliefs more strongly than it does, I am still very surprised that you don’t seem to think it matters at all (or at least significantly). What if we switched gears? Agreeing that the fact that a contrarian theory is invented or held by high IQ people is no guarantee of its success, can we agree that the fact that a contrarian theory is invented and mostly held by low IQ people is a very strong strike against it?
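(To put a number on “very strong strike against it”, here is the odds form of Bayes’ theorem with made-up likelihoods, purely as an illustration of how the update would go.)

```latex
% E = "the theory was invented and is mostly held by low IQ people";
% T = "the theory is true". The likelihoods below are hypothetical.
\frac{P(T \mid E)}{P(\lnot T \mid E)}
  = \frac{P(E \mid T)}{P(E \mid \lnot T)} \cdot \frac{P(T)}{P(\lnot T)}
% If, say, P(E | T) = 0.05 and P(E | \lnot T) = 0.5, the odds on T fall
% by a factor of 10, regardless of where the prior started.
```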
Proper logical form comes cheap, just add a premise which says, “if everything I’ve said so far is true, then my conclusion is true.”
Proper logical form comes cheap, but a surprising number of people don’t bother even with that. Do you frequently see people appending “if everything I’ve said so far is true, then my conclusion is true” to screw with people who judge arguments based on proper logical form?
Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?
What’s your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we’re basically talking about people who all have PhDs, so education can’t be the heuristic. You also proposed IQ and rationality, but admitted we aren’t going to have good ways to measure them directly, aside from looking for “statements that follow proper logical form and make good arguments.” I pointed out that “good arguments” is circular if we’re trying to decide who to read charitably, and you had no response to that.
That leaves us with “proper logical form,” about which you said:
Proper logical form comes cheap, but a surprising number of people don’t bother even with that. Do you frequently see people appending “if everything I’ve said so far is true, then my conclusion is true” to screw with people who judge arguments based on proper logical form?
In response to this, I’ll just point out that this is not an argument in proper logical form. It’s a lone assertion followed by a rhetorical question.
Ethics ought to be Aumann-agreeable. That would only imply uFAI is a non-issue if AGI developers were ideal Bayesians (improbable) and aware of claims of uFAI risks.
I don’t know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn’t remotely surprise me.
That’s a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn’t be able to reach consensus on that no matter how hard you tried.
For a moral realist, moral disagreements are factual disagreements.
I’m not sure that humans can actually have radically different terminal values from one another; but then, I’m also not sure that humans have terminal values.
It seems to me that “deontologist” and “consequentialist” refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. (“Moral responses” are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)
the problem is selective charity—a specific kind of selective charity, slanted towards favoring people’s in-group.
The danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That’s a reason to be more charitable.
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it’s great that we have discussions—even heated discussions—first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
Besides which, we’re human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.
The role that IQ is playing here is that of a quasi-objective Outside View measure of a person’s ability to be correct and rational.
FYI, IQ, whatever it measures, has little to no correlation with either epistemic or instrumental rationality. For extensive discussion of this topic, see Keith Stanovich’s What Intelligence Tests Miss.
In brief, intelligence (as measured by an IQ test), epistemic rationality (the ability to form correct models of the world), and instrumental rationality (the ability to define and carry out effective plans for achieving ones goals) are three different things. A high score on an IQ test does not correlate with enhanced epistemic or instrumental rationality.
For examples of the lack of correlation between IQ and epistemic rationality, consider the very smart folks you have likely met who have gotten themselves wrapped up in incredibly complex and intellectually challenging belief systems that do not match the world we live in: Objectivism, Larouchism, Scientology, apologetics, etc.
For examples of the lack of correlation between IQ and instrumental rationality, consider the very smart folks you have likely met who cannot get out of their parents’ basement, and whose impact on the world is limited to posting long threads on Internet forums and playing WoW.
I interpret you as making the following criticisms:
1. People disagree with each other, rather than use Aumann agreement, which proves we don’t really believe we’re rational
Aside from Wei’s comment, I think we also need to keep track of what we’re doing.
If we were to choose a specific empirical fact or prediction—like “Russia will invade Ukraine tomorrow”—and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average—then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.
But this doesn’t preclude discussion. Aumann agreement is a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:
We can both increase our understanding of the issue.
We may find a subtler position we can both agree on. If I say “California is hot” and you say “California is cold”, instead of immediately jumping to “50% probability either way” we can work out which parts of California are hot versus cold at which parts of the year.
We may trace part of our disagreement back to differing moral values. If I say “capital punishment is good” and you say “capital punishment is bad”, then it may be right for me to adjust a little in your favor since you may have evidence that many death row inmates are innocent, but I may also find that most of the force of your argument is just that you think killing people is never okay. Depending on how you feel about moral facts and moral uncertainty, we might not want to Aumann adjust this one. Nearly everything in politics depends on moral differences at least a little.
We may trace our disagreement back to complicated issues of worldview and categorization. I am starting to interpret most liberal-conservative issues as a tendency to draw Schelling fences in different places and then correctly reason with the categories you’ve got. I’m not sure if you can Aumann-adjust that away, but you definitely can’t do it without first realizing it’s there, which takes some discussion.
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it’s great that we have discussions—even heated discussions—first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
2. It is possible that high IQ people can be very wrong and even in a sense “stupidly” wrong, and we don’t acknowledge this enough.
I totally agree this is possible.
The role that IQ is playing here is that of a quasi-objective Outside View measure of a person’s ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of “I feel I’m definitely right; this other person has nothing to teach me.”
So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong—like Plantiga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we’re too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says “I’m sure that creationism is true, and it doesn’t matter whether really fancy scientists who use big words tell me it isn’t.”
We end up in a kind of bravery debate situation here, where we have to decide whether it’s worth warning people more against the first failure mode (at the risk it will increase the second), or against the second failure mode more (at the risk that it will increase the first).
And, well, studies pretty universally find everyone is overconfident of their own opinions. Even the Less Wrong survey finds people here to be really overconfident.
So I think it’s more important to warn people to be less confident they are right about things. The inevitable response is “What about creationism?!” to which the counterresponse is “Okay, but creationists are stupid, be less confident when you disagree with people as smart or smarter than you.”
This gets misinterpreted as IQ fetishism, but I think it’s more of a desperate search for something, anything to fetishize other than our own subjective feelings of certainty.
3. People are too willing to be charitable to other people’s arguments.
This is another case where I think we’re making the right tradeoff.
Once again there are two possible failure modes. First, you could be too charitable, and waste a lot of time engaging with people who are really stupid, trying to figure out a smart meaning to what they’re saying. Second, you could be not charitable enough by prematurely dismissing an opponent without attempting to understand her, and so perhaps missing out on a subtler argument that proves she was right and you were wrong all along.
Once again, everyone is overconfident. No one is underconfident. People tell me I am too charitable all the time, and yet I constantly find I am being not-charitable-enough, unfairly misinterpreting other people’s points, and so missing or ignoring very strong arguments. Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice “be less charitable” is more helpful than the advice “be more charitable”.
As I said above, you can try to pinpoint where to apply this advice. You don’t need to be charitable to really stupid people with no knowledge of a field. But once you’ve determined someone is in a reference class where there’s a high prior on them having good ideas—they’re smart, well-educated, have a basic committment to rationality—advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less—it might be useful in a couple of extreme cases, but I really doubt it’s where the gain for the average person lies.
In fact, it’s hard for me to square your observation that we still have strong disagreements with your claim that we’re too charitable. At least one side is getting things wrong. Shouldn’t they be trying to pay a lot more attention to the other side’s arguments?
I feel like utter terror is underrated as an epistemic strategy. Unless you are some kind of freakish mutant, you are overconfident about nearly everything and have managed to build up very very strong memetic immunity to arguments that are trying to correct this. Charity is the proper response to this, and I don’t think anybody does it enough.
4. People use too much jargon.
Yeah, probably.
There are probably many cases in which the jargony terms have subtly different meaning or serve as reminders of a more formal theory and so are useful (“metacontrarian” versus “showoff”, for example), but probably a lot of cases where people could drop the jargon without cost.
I think this is a more general problem of people being bad at writing—“utilize” vs. “use” and all that.
5. People are too self-congratulatory and should be humbler
What’s weird is that when I read this post, you keep saying people are too self-congratulatory, but to me it sounds more like you’re arguing people are being too modest, and not self-congratulatory enough.
When people try to replace their own subjective analysis of who can easily be dismissed (“They don’t agree with me; screw them”) with something based more on IQ or credentials, they’re being commendably modest (“As far as I can tell, this person is saying something dumb, but since I am often wrong, I should try to take the Outside View by looking at somewhat objective indicators of idea quality.”)
And when people try to use the Principle of Charity, once again they are being commendably modest (“This person’s arguments seem stupid to me, but maybe I am biased or a bad interpreter. Let me try again to make sure.”)
I agree that it is an extraordinary claim to believe anyone is a perfect rationalists. That’s why people need to keep these kinds of safeguards in place as saving throws against their inevitable failures.
I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer’s disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don’t know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn’t remotely surprise me. Et cetera.
I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them. This is actually a thing that happens with some creationists—people thinking “because I’m an , I can see those evolutionary biologists are talking nonsense.” Creationists would do better to attend to the domain expertise of evolutionary biologists. (See also: my post on the statistician’s fallacy.)
I’m also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?
On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you’re probably right that people aren’t too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people’s in-group. And you seem to endorse this selective charity.
I’ve already said why I don’t think high IQ is super-relevant to deciding who you should read charitably. Overall education also doesn’t strike me as super-relevant either. In the US, better educated Republicans are more likely to deny global warming and think that Obama’s a Muslim. That appears to be because (a) you can get a college degree without ever taking a class on climate science and (b) more educated conservatives are more likely to know what they’re “supposed” to believe about certain issues. Of course, when someone has a Ph.D. in a relevant field, I’d agree that you should be more inclined to assume they’re not saying anything stupid about that field (though even that presumption is weakened if they’re saying something that would be controversial among their peers).
As for “basic commitment to rationality,” I’m not sure what you mean by that. I don’t know how I’d turn it into a useful criterion, aside from defining it to mean people I’d trust for other reasons (e.g. endorsing standard attitudes of mainstream academia). It’s quite easy for even creationists to declare their commitment to rationality. On the other hand, if you think someone’s membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I’m calling that self-congratulatory nonsense.
And that’s the essence of my reply to your point #5. It’s not people having self-congratulatory attitudes on an individual level. It’s the self-congratulatory attitudes towards their in-group.
Are ethics supposed to be Aumann-agreeable? I’m not at all sure the original proof extends that far. If it doesn’t, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.
I don’t think it would cover Eliezer vs. Robin, but I’m uncertain how “real” that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other’s estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I’m not sure they wouldn’t agree to do so.
Even if they did, it might be consistent with their current actions: if there’s a 20% chance of ems and 20% chance of foom (plus 60% chance of unpredictable future, cishuman future, or extinction) we would still need intellectuals and organizations planning specifically for each option, the same way I’m sure the Cold War Era US had different branches planning for a nuclear attack by USSR and a nonnuclear attack by USSR.
I will agree that there are some genuinely Aumann-incompatible disagreements on here, but I bet it’s fewer than we think.
So I want to agree with you, but there’s this big and undeniable problem we have and I’m curious how you think we should solve it if not through something resembling IQ.
You agree people need to be more charitable, at least toward out-group members. And this would presumably involve taking people whom we are tempted to dismiss, and instead not dismissing them and studying them further. But we can’t do this for everyone—most people who look like crackpots are crackpots. There are very likely people who look like crackpots but are actually very smart out there (the cryonicists seem to be one group we can both agree on) and we need a way to find so we can pay more attention to them.
We can’t use our subjective feeling of is-this-guy-a-crackpot-or-not, because that’s what got us into this problem in the first place. Presumably we should use the Outside View. But it’s not obvious what we should be Outside Viewing on. The two most obvious candidates are “IQ” and “rationality”, which when applied tend to produce IQ fetishism and in group favoritism (since until Stanovich actually produces his rationality quotient test and gives it to everybody, being in a self-identified rationalist community and probably having read the whole long set of sequences on rationality training is one of the few proxies for rationality we’ve got available).
I admit both of these proxies are terrible. But they seem to be the main thing keeping us from, on the one side, auto-rejecting all arguments that don’t sound subjectively plausible to us at first glance, and on the other, having to deal with every stupid creationist and homeopath who wants to bloviate at us.
There seems to be something that we do do that’s useful in this sphere. Like if someone with a site written in ALL CAPS and size 20 font claims that Alzheimers is caused by a bacterium, I dismiss it without a second thought because we all know it’s a neurodegenerative disease. But a friend who has no medical training but whom I know is smart and reasonable recently made this claim, I looked it up, and sure enough there’s a small but respectable community of microbiologists and neuroscientists investigating that maybe Alzheimers is triggered by an autoimmune response to some bacterium. It’s still a long shot, but it’s definitely not crackpottish. So somehow I seem to have some sort of ability for using the source of an implausible claim to determine whether I investigate it further, and I’m not sure how to describe the basis on which I make this decision beyond “IQ, rationality, and education”.
Well, empirically I did try to investigate natural law theology based on there being a sizeable community of smart people who thought it was valuable. I couldn’t find anything of use in it, but I think it was a good decision to at least double-check.
If you think people are too uncharitable in general, but also that we’re selectively charitable to the in-group, is that equivalent to saying the real problem is that we’re not charitable enough to the out-group? If so, what subsection of the out-group would you recommend we be more charitable towards? And if we’re not supposed to select that subsection based on their intelligence, rationality, education, etc, how do we select them?
And if we’re not supposed to be selective, how do we avoid spending all our time responding to total, obvious crackpots like creationists and Time Cube Guy?
Yeah, this seems like the point we’re disagreeing on. Granted that all proxies will be at least mostly terrible, do you agree that we do need some characteristics that point us to people worth treating charitably? And since you don’t like mine, which ones are you recommending?
I question how objective these objective criterion you’re talking about are. Usually when we judge someone’s intelligence, we aren’t actually looking at the results of an IQ test, so that’s subjective. Ditto rationality. And if you were really that concerned about education, you’d stop paying so much attention to Eliezer or people who have a bachelors’ degree at best and pay more attention to mainstream academics who actually have PhDs.
FWIW, actual heuristics I use to determine who’s worth paying attention to are
What I know of an individual’s track record of saying reasonable things.
Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it.
Looking for other crackpot warning signs I’ve picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.
Which may not be great heuristics, but I’ll wager that they’re better than IQ (wager, in this case, being a figure of speech, because I don’t actually know how you’d adjudicate that bet).
It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: “The thing that people forget sometimes, is that even though appearances can be misleading, they’re usually not.”
You’ve also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I’m fuzzy on what exactly they think about cryonics (more here).
Your heuristics are, in my opinion, too conservative or not strong enough.
Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you’re a creationist, you can rule out paying attention to Richard Dawkins, because if he’s wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you’re anti-transhumanism, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be “intelligence explosions”, or that you can upload a human brain.
Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases: you bring up mainstream academic philosophy, and although I’m less certain than you are there, I admit I am very skeptical of it. So when we say we need heuristics to find ideas to pay attention to, I take it we’ve already started by assuming mainstream academia is always right, and we’re looking for which challenges to it we should pay attention to. I agree that “challenges the academics themselves take seriously” is a good first step, but I’m not sure that would suffice to discover the critique of mainstream philosophy. And it’s very little help at all in fields like politics.
The crackpot warning signs are good (although it’s interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out, and it also seems like people have a bad habit of being very sensitive to crackpot warning signs the opposing side displays and very obtuse to those their own side displays). But once again, these signs are woefully inadequate. Plantinga doesn’t look a bit like a crackpot.
You point out that “Even though appearances can be misleading, they’re usually not.” I would agree, but suggest you extend this to IQ and rationality. We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it’s hard to remember that stupid things are still much, much likelier to be believed by stupid people.
(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of “deserve further investigation and charitable treatment”.)
You are right that I rarely have the results of an IQ test (or Stanovich’s rationality test) in front of me. So when I say I judge people by IQ, I think I mean something like what you mean when you say “a track record of making reasonable statements”, except basing “reasonable statements” upon “statements that follow proper logical form and make good arguments” rather than ones I agree with.
So I think it is likely that we both use a basket of heuristics that include education, academic status, estimation of intelligence, estimation of rationality, past track record, crackpot warning signs, and probably some others.
I’m not sure whether we place different emphases on those, or whether we’re using about the same basket but still managing to come to different conclusions due to one or both of us being biased.
Has anyone noticed that, given that most of the material on this site is essentially about philosophy, “academic philosophy sucks” is itself a Crackpot Warning Sign, i.e. “don’t listen to the hidebound establishment”?
So I normally defend the “trust the experts” position, and I went to grad school for philosophy, but… I think philosophy may be an area where “trust the experts” mostly doesn’t work, simply because with a few exceptions the experts don’t agree on anything. (Fuller explanation, with caveats, here.)
Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can’t really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that’s also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).
Since there is no consensus among philosophers, respecting philosophy is about respecting the process. The negative claims LW makes about philosophy are indeed similar to the negative claims philosophy makes about itself. LW also makes the positive claim that it has a better, faster method than philosophy, but in fact it just has a truncated version of the same method.
As Hallquist notes elsewhere:
But Alexander misunderstands me when he says I accuse Yudkowsky “of being against publicizing his work for review or criticism.” He’s willing to publish it–but only to enlighten us lesser rationalists. He doesn’t view it as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That’s a problem.
Or, as I like to put it: if you half-bake your bread, you get your bread quicker... but it’s half-baked.
If what philosophers specialise in is clarifying questions, they can be trusted to get the question right.
A typical failure mode of amateur philosophy is to substitute easier questions for harder ones.
You might be interested in this article and this sequence (in particular, the first post of that sequence). “Academic philosophy sucks” is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.
Read them; not generally impressed.
Counterexample: your own investigation of natural law theology. Another: your investigation of the Alzheimer’s bacterium hypothesis. I’d say your own intellectual history nicely demonstrates just how to pull off the seemingly impossible feat of detecting reasonable people you disagree with.
With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions, which are independently pretty reasonable) philosophers basically don’t agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.
Examples?
I don’t think “smart people saying stupid things” reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that “I know God really exists and I have no doubts about it,” which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)
Proper logical form comes cheap, just add a premise which says, “if everything I’ve said so far is true, then my conclusion is true.” “Good arguments” is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say “this guy’s arguments are terrible,” and you say, “you should read those arguments more charitably,” it doesn’t do much good for you to defend that claim by saying, “well, he has a track record of making good arguments.”
I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions.
But again, I don’t feel like that’s strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we should be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?
Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that “my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age.”
So Marshall decided that, since he couldn’t get anyone to fund a study, he would study it on himself: he drank a culture of the bacteria and got really sick.
Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission “talked about alien babies being adopted by Nancy Reagan”, before they made it into legitimate medical journals.
I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index...”believes the establishment is ignoring him because of their biases”, “believes his discovery will instantly solve a centuries-old problem with no side effects”, “does his studies on himself”, “studies get published in tabloid rather than journal”, but these were just things he naturally felt or had to do because the establishment wouldn’t take him seriously and he couldn’t do things “right”.
I think it is much, much less than among the general public, but I don’t think that has as much to do with IQ per se as with academic culture. And although I agree it’s interesting that IQ isn’t a stronger predictor of correct beliefs than it is, I am still very surprised that you don’t seem to think it matters at all (or at least significantly). What if we switched gears? Agreeing that a contrarian theory being invented or held by high-IQ people is no guarantee of its success, can we agree that a contrarian theory being invented and mostly held by low-IQ people is a very strong strike against it?
Proper logical form comes cheap, but a surprising number of people don’t bother even with that. Do you frequently see people appending “if everything I’ve said so far is true, then my conclusion is true” to screw with people who judge arguments based on proper logical form?
The extent to which science rejected the ulcer bacterium theory has been exaggerated. (And that article also addresses some quotes from Marshall himself which don’t exactly match up with the facts.)
What’s your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we’re basically talking about people who all have PhDs, so education can’t be the heuristic. You also proposed IQ and rationality, but admitted we aren’t going to have good ways to measure them directly, aside from looking for “statements that follow proper logical form and make good arguments.” I pointed out that “good arguments” is circular if we’re trying to decide who to read charitably, and you had no response to that.
That leaves us with “proper logical form,” about which you said:
In response to this, I’ll just point out that this is not an argument in proper logical form. It’s a lone assertion followed by a rhetorical question.
If they were, uFAI would be a non-issue. (They are not.)
Ethics ought to be Aumann-agreeable. That would only imply uFAI is a non-issue if AGI developers were ideal Bayesians (improbable) and aware of claims of uFAI risks.
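(Since “ideal Bayesians” keep coming up: below is a minimal sketch of what Aumann-style agreement actually demands, using the standard alternating-announcement protocol. The worlds, partitions, and event E are made-up illustrative values, not anything from this thread; the point is just that agreement only falls out when both sides share a prior, announce honest posteriors, and update on each other’s announcements, which is exactly the idealization at issue here.)

```python
# Minimal sketch of alternating posterior announcements between two ideal
# Bayesians with a common prior. Worlds, partitions, and E are illustrative.
from fractions import Fraction

prior = {w: Fraction(1, 6) for w in range(6)}   # common prior over six worlds
E = {0, 1, 4}                                    # the event being estimated

# Each agent's private information is a partition of the set of worlds.
partition_1 = [{0, 1, 2}, {3, 4, 5}]
partition_2 = [{0, 3}, {1, 4}, {2, 5}]

def posterior(info):
    """P(E | info) under the common prior."""
    return sum(prior[w] for w in info & E) / sum(prior[w] for w in info)

def cell_containing(partition, world):
    return next(c for c in partition if world in c)

def agree(true_world, part_a, part_b, max_rounds=20):
    """Agents take turns announcing their posterior for E until they match."""
    parts = [list(part_a), list(part_b)]
    infos = [set(cell_containing(parts[0], true_world)),
             set(cell_containing(parts[1], true_world))]
    speaker = 0
    for _ in range(max_rounds):
        q = posterior(infos[speaker])
        print(f"Agent {speaker + 1} announces P(E) = {q}")
        # The announcement publicly reveals the union of the speaker's cells
        # that would have produced the same number.
        revealed = set().union(*(c for c in parts[speaker] if posterior(c) == q))
        listener = 1 - speaker
        infos[listener] = infos[listener] & revealed
        # Refine the listener's partition so their later announcements
        # remain interpretable to the other side.
        parts[listener] = ([c & revealed for c in parts[listener] if c & revealed] +
                           [c - revealed for c in parts[listener] if c - revealed])
        if posterior(infos[0]) == posterior(infos[1]):
            print(f"Agreement reached at P(E) = {posterior(infos[0])}")
            return
        speaker = listener

agree(true_world=1, part_a=partition_1, part_b=partition_2)
# Agent 1 announces P(E) = 2/3
# Agent 2 announces P(E) = 1
# Agreement reached at P(E) = 1
```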
How do you know that they are not?
Not being charitable to people isn’t a problem, providing you don’t mistake your lack of charity for evidence that they are stupid or irrational.
That’s a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn’t be able to reach consensus on that no matter how hard you tried.
Three somewhat disconnected responses —
For a moral realist, moral disagreements are factual disagreements.
I’m not sure that humans can actually have radically different terminal values from one another; but then, I’m also not sure that humans have terminal values.
It seems to me that “deontologist” and “consequentialist” refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. (“Moral responses” are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)
The danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That’s a reason to be more charitable.
Besides which, we’re human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.
FYI, IQ, whatever it measures, has little to no correlation with either epistemic or instrumental rationality. For extensive discussion of this topic, see Keith Stanovich’s What Intelligence Tests Miss.
In brief, intelligence (as measured by an IQ test), epistemic rationality (the ability to form correct models of the world), and instrumental rationality (the ability to define and carry out effective plans for achieving one’s goals) are three different things. A high score on an IQ test does not correlate with enhanced epistemic or instrumental rationality.
For examples of the lack of correlation between IQ and epistemic rationality, consider the very smart folks you have likely met who have gotten themselves wrapped up in incredibly complex and intellectually challenging belief systems that do not match the world we live in: Objectivism, Larouchism, Scientology, apologetics, etc.
For examples of the lack of correlation between IQ and instrumental rationality, consider the very smart folks you have likely met who cannot get out of their parents’ basement, and whose impact on the world is limited to posting long threads on Internet forums and playing WoW.
LW discussion.