I think it’s worth noting that we are (yet again) having a self-criticism session because a leftist (someone so far to the left that they consider liberal egalitarian Yvain to be beyond the pale of tolerability) complained that people who disagree with them are occasionally tolerated on LW.
Come on. Politics is rarely discussed here to begin with and something like 65*% of LWers are liberals/socialists. If the occasional non-leftist thought that slips through the cracks of karma-hiding and (more importantly) self-censorship is enough to drive you away, you probably have very little to offer.
*I originally said 80%, but I checked the survey and it’s closer to 65%. I think my point still stands. Only 3% of LWers surveyed described themselves as conservatives.
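Politics is rarely discussed here to begin with and something like 65*% of LWers are liberals/socialists.
Yes, but people on the far right are disproportionately active in political discussions here, probably because it is one of the very few internet venues where they can air their views to a diverse and intelligent readership without being immediately shouted down as evil. If you actually measured political comments, I suspect you’d find that the explicitly liberal/social ones represent much less than 65%.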
Only 3% of LWers surveyed described themselves as conservatives.
Interesting. I wonder why LW has so few conservatives. Surely, just like there isn’t masculine rationality and feminine rationality, there shouldn’t be conservative rationality and liberal rationality. It also makes me wonder how valid the objections are in the linked post if the political views of LW skew vastly away from conservative topics.
Full disclosure: I’m a black male who grew up in the inner city and I don’t find anything particularly offensive about topics on LW. There goes my opposing anecdote to the one(s) presented in the linked blog.
At a guess, I’d say this is linked to religion. Once you split out the libertarian faction (as the surveys historically have), it’s quite rare for people on the conservative side of the fence (at least in the US) to be irreligious, and LW is nothing if not outspokenly secular.
People in the rationality community tend to believe that there’s a lot of low-hanging fruit to be had in thinking rationally, and that the average person and the average society is missing out on this. This is difficult to reconcile with arguments for tradition and being cautious about rapid change, which is the heart of (old school) conservatism.
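I think futurism is anti-conservative.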
My steelman of the conservative position is ‘empirical legislation’: do not make new laws until you have decent evidence they achieve the stated policy goals. “Ah, but while you are gathering your proof, the bad thing X is still happening!” “Too bad.”
FAI is a conservative position.
To respond to the grandparent, I think in the US conservatives have ceded all intellectual ground, and conservatism is therefore not a sexy position to adopt. (If this is true, I think one should view this as a bad thing regardless of one’s political affiliation, because a ‘loyal opposition’ is needed to keep one’s arguments sharp.)
There is a big difference between what sex you are and what beliefs you profess: The first should not be determined by how rational you are, while the second very much should. There should be nothing surprising about the fact that more intelligent and more rational people would have different beliefs about reality than less intelligent and less rational people.
Or to put it another way: If you believe that all political affiliations should be represented equally in the sceptic/rationalist community, you are implicitly assuming that political beliefs are merely statements of personal preference instead of seeing them as claims about reality. While personal preference plays a role, I would hope that there’s more to it than that.
There is a big difference between what sex you are and what beliefs you profess: The first should not have anything to do with how rational you are...
Why not? Men and women are different in many ways. Why did you decide that a disposition to rationality can’t possibly depend on your sex (and so your hormones, etc.)?
It’s in reply to Quinton saying that there should be no masculine and feminine types of rationality. In other words, whether you are a man or a woman should not determine what the correct/rational answer is to a particular question (barring obvious exceptions). This is in stark contrast to asking whether or not political affiliation should be determined by how rational you are, which is another question entirely.
In other words: Just because correct answers to factual questions should not be determined by gender does not mean that political affiliation should not be determined by correct answers to factual questions.
I think political differences come down to values more so than beliefs about facts. Rationalism doesn’t dictate terminal values.
Sometimes it is difficult to tell whether two people really have different values, or essentially the same values but different models of the world.
For example, two people can both hold the value “it would be bad to destroy humanity”, but one of them has a model on which humanity will likely destroy itself under ongoing capitalism, while the other has a model on which humanity would likely be destroyed by some totalitarian movement like communism.
But instead of openly discussing their models and finding where they differ, each will accuse the other of not caring about human suffering. Or they will focus on different applause lights, just to emphasise how different they are.
I probably underestimate the differences in values. Some people are psychopaths, and they might not be the only group of people who differ. But it seems to me that a lot of political mindkilling comes from overestimating those differences, instead of admitting that our own values, combined with a different model of the world, would lead to different decisions. (Because our values are good, the different decisions are evil, and good cannot be evil, right?)
Just imagine that you had certain proof (by observing parallel universes, or by simulations done by a superhuman AI) that e.g. tolerance of homosexuality inevitably leads to the destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under the rule of the communist party. If you had a good reason to believe these models, what would your values make you do?
(And more generally: If you meet a person with strange political opinions, try to imagine a least convenient world, where your values would lead to the same opinions. Even if that would be a wrong model of our world, it still may be the model the other person believes to be correct.)
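I agree, though I’ll add that what facts people find plausible are shaped by their values.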
Just imagine that you had certain proof (by observing parallel universes, or by simulations done by a superhuman AI) that e.g. tolerance of homosexuality inevitably leads to the destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under the rule of the communist party. If you had a good reason to believe these models, what would your values make you do?
Perfect information scenarios are useful in clarifying some cases, I suppose (and let’s go with the non-humanity-destroying option every time), but I don’t find them to map too closely to actual situations.
I’m not sure I can aptly articulate my intuition here. By differences in values, I don’t really think people will differ so much as to have much difference in terminal values, should they each make a list of everything they would want in a perfect world (barring outliers). But the relative weights that people place on them, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if each is interested in using reason.
But I’ll concede that some ideologies are much more comfortable with more utilitarian analysis versus more rigid imperatives that are more likely to yield consistent results.
I’m always a little suspicious of this line of thinking. Partly because the terminal/instrumental value division isn’t very clean in humans—since more deeply ingrained values are harder to break regardless of their centrality, and we don’t have very good introspective access to value relationships, it’s remarkably difficult to unambiguously nail down any terminal values in real people. Never mind figuring out where they differ. But more importantly, it’s just too convenient: if you and your political enemies have different fundamental values, you’ve just managed to absolve yourself of any responsibility for argument. That’s not connotationally the same as saying the people you disagree with are all evil mutants or hapless dupes, but it’s functionally pretty damn close.
That doesn’t prove it wrong, of course, but I do think it’s grounds for caution.
How about different factions (landowners, truck drivers, soldiers, immigrants, etc.) all advocating their own interests? Doesn’t that count as “different values”?
Or, more simply, I value myself and my family, you value yourself and your family, so we have different values. Ideologies are just a more general and complicated form.
Well, it depends what you mean by values. I was mainly discussing Randy_M’s comment that rationalism doesn’t dictate terminal values; while different perspectives probably mean the evolution of different value systems even given identical hardwiring, that doesn’t necessarily reflect different terminal values. Those don’t reflect preferences but rather the algorithm by which preferences evolve; and self-interest is one module of that, not seven billion.
No, I think people can be persuaded on terminal values, although to an extent that modifies my response above; rationality will tell you that certain values are more likely to conflict, and noticing internal contradictions—pitting two values against each other—is one way to convince someone to alter—or just adjust the relative worth of—their terminal values.
Due to the complexity of social reality I don’t think you are going to find too many people with beliefs that are perfectly consistent; that is, any mainstream political affiliation is unlikely to be a shining paragon of coherence and logical progression built upon core principles relative to its competitors.
But demonstrate with examples if I’m wrong.
If you can persuade someone to alter (not merely ignore) a value they believe to have been terminal, that’s good evidence that it wasn’t a terminal value.
This is only true if you think humans actually hold coherent values that are internally designated as “terminal” or “instrumental”. Humans only ever even designate statements as terminal values once you introduce them to the concept.
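I don’t think we disagree.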
To clarify, I suspect most neurotypical humans may possess features of ethical development which map reasonably well to the notion of terminal values, although we don’t know their details (if we did, we’d be most of the way to solving ethics) or the extent to which they’re shared. I also believe that almost everyone who professes some particular terminal (fundamental, immutable) value is wrong, as evidenced by the fact that these not infrequently change.
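If terminal values are definitionally immutable, then I used the wrong term.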
“The first should not have anything to do with how rational you are, while the second very much should. ”
What does should mean there, and from where do you derive it?
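But it might affect how rational you are.
It’s possible.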
Why are you bringing it up, though? As an aspiring rationalist, I believe it should be possible in principle to discuss whether one sex is more rational than the other, on average. However, it makes me feel uncomfortable that a considerable number of people here feel the need to inject the topic into a conversation where it’s not really relevant. If I were a woman, I can imagine I would feel more hesitant to participate on Less Wrong as a result of this, and that would be a pity.
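It’s an interesting topic, the more so because it is taboo, and not exactly tangential to the subject, I think.
Compare with Cosma Shalizi on the heritability of IQ (emphasis mine):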
So: Do I really believe that the heritability of IQ is zero? Well, I hope by this point I’ve persuaded you that’s not a well-posed question. What I hope you really want to ask is something like: Do I think there are currently any genetic variations which, holding environment fixed to within some reasonable norms for prosperous, democratic, industrial or post-industrial societies, would tend to lead to differences in IQ? There my answer is “yes, of course”. I’ve mentioned phenylketonuria and hypothyroidism already, and many other in-born errors of metabolism also lead to cognitive deficits, including lower IQ, at least in certain environments. [...]
I suspect this answer will still not satisfy some people, who really want to know about differences between people who do not have significant developmental disorders. Here, my honest answer would be that I presently have no evidence one way or the other. If you put a gun to my head and asked me to guess, and I couldn’t tell what answer you wanted to hear, I’d say that my suspicion is that there are, mostly on the strength of analogy to other areas of biology where we know much more. I would then — cautiously, because you have a gun to my head — suggest that you read, say, Dobzhansky on the distinction between “human equality” and “genetic identity”, and ask why it is so important to you that IQ be heritable and unchangeable.
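At this point I would have to conclude that the guy is either very deliberately blind or is lying through his teeth.
He, of course, knows very well what the consequences for his career and social life would be were he to admit the unspeakable.
You’re wrong.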
First, about the consequences: the theatrics of the “unspeakable” are getting a little tiresome. Shalizi is a statistics professor at Carnegie-Mellon. The Mainstream Science on Intelligence was signed by 52 professors and included very clear statements about interracial IQ differences, lack of culture bias, and explicit heritability estimates. I would ask you to name the supposedly inescapable and grave “consequences for career and social life” these 52 professors brought on their heads.
Second, about the subject matter: this quote comes at the end of a long post in which Shalizi challenges the accepted estimates of IQ heritability, and criticizes at length the frequent but confused interpretation of heritability as lack of malleability. In his next post on the subject, he criticizes the notion of a single g factor as standing on shaky ground, having been inferred by intelligence researchers on the basis of factor analysis that is known to statisticians to be inadequate for such a conclusion. Basically, Shalizi argues that the statistical foundations employed by IQ researchers are unsound, and he carries out this critique on a much deeper technical level than what normally makes it into summaries, popular books and blog posts. On the face of it, this isn’t a completely ridiculous idea: we know that much of psychology and medicine routinely misuses statistics in ways that make experts wince, although we might also expect IQ researchers to have their statistical shit together much more decisively than your average soft-psychology paper.
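As a toy illustration of the kind of statistical point at issue (an editorial sketch with invented parameters, not Shalizi’s code or analysis): if every test taps a random subset of many independent abilities, the tests still come out positively correlated and a single dominant factor still falls out of the correlation matrix, even though no general factor exists by construction.

    import numpy as np

    # Toy setup: many independent "abilities"; each test samples a random subset of them.
    rng = np.random.default_rng(0)
    n_people, n_abilities, n_tests, per_test = 5000, 400, 10, 120

    abilities = rng.normal(size=(n_people, n_abilities))      # independent latent abilities
    tests = np.empty((n_people, n_tests))
    for t in range(n_tests):
        picked = rng.choice(n_abilities, size=per_test, replace=False)
        tests[:, t] = abilities[:, picked].sum(axis=1) + rng.normal(scale=3.0, size=n_people)

    # Despite there being no single underlying factor, the tests are all positively
    # correlated and the first eigenvalue of the correlation matrix dominates.
    corr = np.corrcoef(tests, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]
    print("all off-diagonal correlations positive:", bool((corr[~np.eye(n_tests, dtype=bool)] > 0).all()))
    print("share of variance on the first factor: %.2f" % (eigvals[0] / eigvals.sum()))

With these made-up parameters every pairwise correlation comes out positive and the first eigenvalue carries several times the share of any other, which is the pattern usually cited as evidence for a general factor; whether that undermines the substantive case for g is exactly what the ensuing debate is about.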
There have been replies to Shalizi’s critique on the same technical level, and further debates. Frankly, most of this goes over my head. I know just about enough basic statistics to understand most of Shalizi’s critique but not assess it intelligently on my own, and certainly not to follow the ensuing debate. I doubt, however, that your dismissal of Shalizi’s honesty is based on a solid understanding of the arguments in this debate about statistical foundations of IQ research.
That flat and unconditional statement seems to be mismatched with your sentence a bit later:
Frankly, most of this goes over my head.
Given that you say you lack the capability to “assess it intelligently on my own”, and given that I don’t see the basis on which you decide I am statistically incompetent, I am rather curious why you decided that I am wrong. Especially given that I was talking about my personal conclusions and not stating a falsifiable fact about reality.
P.S. Oh, and the bit about consequences for career? Try Blits, Jan H. The silenced partner: Linda Gottfredson and the University of Delaware
You’re wrong because your conclusion that Shalizi was either blind or lying rested on two premises: one, that the heritability of racial IQ differences has been proven, and two, that for Shalizi to admit this fact would be uttering the “unspeakable” and would carry severe social and career consequences. I wrote a detailed explanation about the way Shalizi challenges the first premise on statistical grounds, in the field where he’s an expert (and in a way that’s neither blind nor dishonest, albeit it could be wrong). I gave an example that illustrates that the second premise is wildly exaggerated, especially when applied to an academic such as Shalizi. That’s why you are wrong.
Your response was to twist my words into a claim that you are “statistically incompetent”, where in fact I emphasized that Shalizi’s critique was on a deep technical level, and that I myself lacked knowledge to assess it. That is cheap emotional manipulation. You also cited a paper about Gottfredson that wasn’t relevant to what I said. Given this unpromising situation, I’m sure you’ll understand if I neglect to address further responses of that kind.
How could you possibly do that for a subject about which you said that “most of this goes over my head”?
Your response was to twist my words into a claim that you are “statistically incompetent”, where in fact I emphasized that Shalizi’s critique was on a deep technical level, and that I myself lacked knowledge to assess it.
Short memory, too. Your words: “I doubt, however, that your dismissal of Shalizi’s honesty is based on a solid understanding of the arguments in this debate about statistical foundations of IQ research.”
I’m sure you’ll understand if I neglect to address further responses of that kind.
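Oh, I’m the understanding kind :-P
That’s a locked-up paper printed in a journal operated by a political advocacy group.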
Linda Gottfredson doesn’t seem to have been “silenced”, though. (But I have a libertarian, rather than a left/right partisan, view on that concept. Someone who takes grants from wealthy ideological supporters instead of from government institutions is not thereby silenced; on the contrary, that would seem pretty darn liberating.)
The “Look Inside” button will give you the first two pages. I am not sure why the publisher of the journal is relevant unless you’re going to claim the paper is an outright lie.
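It’s evidence. Are you advising to ignore it? Argument from authority is fallacious but reversed stupidity is not intelligence.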
It’s evidence of what? That the paper fits well with the ideological orientation of the journal? Sure, but I’m not interested in that. Is it evidence that the paper incorrectly describes the relevant facts? I don’t think so.
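Oh, I see. Thanks for the pointer.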
The paper is from 1991 and seems to be about something that happened between 1988 and Gottfredson receiving a full professorship from U. Delaware in 1990? I’m not clear on the story there. But so far I’m not seeing silencing — just controversy and a question of whether the governors of an institution would choose to associate with a particular wealthy donor.
But again, I’ll admit I’m coming from a libertarian background — I see a big difference between what I’d call silencing (e.g. violence or threats of violence to get someone to stop speaking their views) and withdrawing association (e.g. choosing not to cooperate with someone on account of their views). The former is really scarily common, especially in online discourse today, so I’m kinda sensitive on that. :( That’s all complicated again by it being a government university involved, but except in really politicized cases that usually doesn’t affect the way the institution operates internally all that much.
a question of whether the governors of an institution would choose to associate with a particular wealthy donor.
Not quite. My reading is that Gottfredson was explicitly prohibited from accepting funding coming from the Pioneer Fund.
I agree that this is not true silencing, but I do not wish to defend the title of the article, anyway. It’s just a result of a quick Google search for the “consequences” of holding, um, non-mainstream views on race and intelligence. Here is another example.
He, of course, knows very well what the consequences for his career and social life would be were he to admit the unspeakable.
What you & Anatoly_Vorobey have quoted is talking about heritable IQ differences between individuals (“who do not have significant developmental disorders”). Is it possible you’re conflating that with talking about heritable IQ differences between races or sexes?
That you use the word “unspeakable” suggests you are, as does the fact that your two cases of scientists suffering career consequences (Gottfredson & Cattell) are cases where they suggested genetic racial differences as well as genetic individual differences. (In fact, if I remember rightly, both went further and inferred likely policy implications of genetic racial differences.)
What you & Anatoly_Vorobey have quoted is talking about heritable IQ differences between individuals (“who do not have significant developmental disorders”). Is it possible you’re conflating that with talking about heritable IQ differences between races or sexes?
That’s a good point, I think the two issues got a bit conflated in the discussion here.
However I can’t but see it as a reinforcement of my scepticism. My impression is that the partial heritability of IQ in individuals is well established. At most you can talk about doubting the evidence or not believing it or something like that. Shalizi says he “has no evidence” which is not credible at all.
However I can’t but see it as a reinforcement of my scepticism.
Yes, I think it supports your dim view of what Shalizi wrote. I also think it detracts from your implication that he’s simply evading saying the “unspeakable”, since heritable IQ differences between individuals are a much less contentious topic than heritable racial (or sexual) IQ differences.
As reasonable as that person sounds, I feel the need to point out that IQ differences between races have little or nothing to do with IQ differences between sexes (and even less with rationality, but I guess we gravitated away from that). Even if there is a “stupid gene”, to phrase it very dumbly, there is still no reason to believe that someone with 2 X chromosomes would inherit this gene while someone with the same parents but with a Y chromosome would not.
If you (or anyone) want to argue that women naturally have lower IQ than men, I would go with an argument based on hormones instead. Sounds much more plausible to me.
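Where do you think the differences in hormone levels come from?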
Food, genes, certain types of activity such as sports and competitiveness in general, the environment you grow up in, being in a position of authority, to name some factors that influence hormone production.
It’s certainly not just the gender divide. If you think that testosterone makes men smarter than women on average, you would also have to accept the conclusion that women with more testosterone than men will be smarter than men on average. All other things being equal, of course.
Testosterone levels in men and women are in completely different ballparks, and there is no overlap in healthy individuals of the different sexes beyond puberty. This would make me think the difference is mainly genetic.
I’m not arguing for anything beyond this point, so we don’t have to go there.
I stand corrected on the testosterone levels: The difference is indeed greater than I thought. I will accept that the difference is mainly, but certainly not solely, genetic.
You are absolutely correct on the facts, and in a saner world I could leave it at that, but you seem to have missed an unspoken part of the argument:
The common factor isn’t genetics per se but rather an appeal to inherent nature. Whether that nature is the genetic legacy of selection for vastly different ancestral environments or due to the epigenetics of sexual dimorphism is very important in a scientific sense but not in the metaphysical sense of presenting a challenge to the ideals of “equality” or the “psychic unity of mankind.”
When Dr Shalizi writes the rhetorical question “why it is so important to you that IQ be heritable and unchangeable?” in the context of “‘human equality’ and ‘genetic identity’” his tone is not that of scientific skepticism of an unproven claim but rather an apologetic defense of an embattled creed. Really, why is it so important to you what the truth is? After all, we don’t have any evidence to suggest that the doctrines are wrong, so why not just repeat the cant like everyone else? Who else but a heretic would feel the need to ask uncomfortable questions?
For the most part, scientists writing against the hereditarian position don’t bother debating the facts anymore; now that actual genetic evidence is starting to come out they know it’ll just make them look foolish in a few years, and the psychometric evidence has survived four decades of concentrated attack already. It’s all about implications and responsibility now, or in other words that the lie is too big to fail. It’s hardly important to them whether the truth at hand is a genetic or a hormonal inequality; they just want it to go away.
I think you misinterpret Dr Shalizi, and do him a disservice. I think his answer is perfectly reasonable from a Bayesian point of view. Basically, I see three common reasons to spend time researching differences between races:
A) People who are genuinely interested in the answer, for pragmatic or intellectual reasons
B) People who are racist and want to hear a particular answer that fits their preconceived views
C) People who are trying to be controversial/contrarian/want to provoke people
Certainly there are people who are genuinely curious about the answer, purely for intellectual reasons (A). I am somewhat interested myself. However, the fact of the matter is that many others are interested purely for racist reasons (B). Many racists aren’t open in their racism, and as such mask their racism as honest scientific inquiry, making B indistinguishable from A. Showing interest in the subject is therefore Bayesian evidence for B as much as it is for A. Even worse is the fact that everyone knows that everyone realizes this on an intuitive level, which causes most As to shut up for fear of being identified as Bs, while Bs continue what they are doing. This serves to compound the effect. Meanwhile, Cs arise expressly because it is a hot-button topic. As a result it is entirely rational to conclude that someone who is constantly yelling about race and inserting the subject into other conversations is more likely to be a racist on average than others. And of course, it’s incredibly frustrating if you are an A and just want an honest conversation about the subject, which is now impossible (thanks, politics!).
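To put rough numbers on the “Bayesian evidence” point, here is a minimal sketch (the base rates and likelihoods are invented for illustration, not estimates of anything): if Bs raise the topic far more often than the general population does, then merely observing someone raise it sharply increases the posterior probability that they are a B.

    # Hypothetical numbers, purely for illustration: base rates of each category and the
    # probability that a person in that category keeps bringing the topic up.
    priors = {"A": 0.05, "B": 0.05, "C": 0.02, "other": 0.88}
    p_raise = {"A": 0.30, "B": 0.70, "C": 0.60, "other": 0.01}

    # Bayes' rule: P(B | raises topic) = P(raises | B) * P(B) / sum over all hypotheses.
    evidence = sum(priors[h] * p_raise[h] for h in priors)
    posterior = {h: priors[h] * p_raise[h] / evidence for h in priors}
    print("P(B) before: %.2f, after observing them raise the topic: %.2f" % (priors["B"], posterior["B"]))

With these invented numbers P(B) moves from 5% to roughly 50% after a single observation; the real base rates and likelihoods are unknown, but the direction of the update is what the comment above is pointing at.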
I think Shalizi deals with this messed up situation admirably: Making clear what he believes while doing everything to avoid sounding controversial or giving fuel to racists. Of course this doesn’t work very well because people who call others racist fall into two categories themselves:
D) People who are genuinely worried about the dangerous effects of racist claims. E) People who realise they can win any argument by default by calling the other a racist
And people who fall under category E do not, of course, care about the truth of the matter in the slightest.
Kind of tempted to write a top-level post about this, now. Hmm...
I think that the fact that there is a debate, and that the “good guys” use name-calling instead of scientific arguments, also increases the number of people in group A.
It’s a bit like telling people not to think of an elephant, and then justifying it by saying that elephant-haters are the most obsessed with elephants, and that therefore thinking of an elephant is evidence of being an evil person. Well, as soon as you told everyone not to think of an elephant, this stopped being true.
Actually, it is more like not being allowed to talk about the elephant (...in the room. See what I did there?). Not talking about a subject is much easier than not thinking about it. And because everybody knows that talking about the elephant will cause you to be called an elephant-hater and nothing good whatsoever will come of it in 95% of cases, the only people who continue to talk about elephants are people who care so strongly about the subject that they are willing to be called an elephant-hater just so that they can be heard. So that leaves people who really hate elephants, and people who really can’t stand being told that they’re not allowed to say something (and super-dedicated elephant scientists, I guess, but there aren’t very many of those).
The most difficult part of not talking about the elephant is when someone suddenly says: “There is no elephant in this room, and we all know it, don’t we?”, interpreting the rule as forbidding talk about the elephant, but not about the absence of the elephant.
Specifically, if there is a rule against mentioning genetic differences—and the goal is to avoid the discussion about genetics, not to assert that there are no differences—the rule should equally forbid saying that there are genetic differences, and that there aren’t genetic differences.
The rule should make very clear whether its intent is to 1) stop both sides of the debate, or 2) stop only one side of the debate, letting the other side win. Both options make sense, but it is difficult to follow when it is not sure which of these two options was meant.
I’d say that the percentage of people interested in medicine who want to poison their neighbour is rather lower than the percentage of people talking about genetic differences between races who are racist.
When Dr Shalizi writes the rhetorical question “why it is so important to you that IQ be heritable and unchangeable?” in the context of “‘human equality’ and ‘genetic identity’” his tone is not that of scientific skepticism of an unproven claim but rather an apologetic defense of an embattled creed. Really, why is it so important to you what the truth is?
I read Shalizi differently, as asking something like, “Really, is it because you care about the truth qua truth that you find this particular alleged truth so important?” Far from apologetic, he is — cautiously, because there is a counterfactual gun to his head — going on the offensive, hinting that the people insistently disagreeing with him are motivated by more than unalloyed curiosity. It is not, of course, dispassionate scientific scepticism, but nor is it a defensive crouch.
My interpretation could be wrong. Shalizi isn’t spelling things out in explicit, objective detail there. But my interpretation rings truer to my gut, and fits better with the fact that his peroration rounds off ten thousand words of blunt and occasionally snarky statistical critique.
Yes, Shalizi was talking about something completely different, but his attitude was similar to yours. He was saying: “sure, I could imagine that it might be so (that there might be a heritable difference), but why are you so invested in believing in that? Why do you fight for it so much?”. I meant for my quotation to bolster your case.
Would you predict that the average IQ among LW census responders who self label as conservatives is lower? If so, how strong would you predict the effect to be?
Hmmm, interesting question. If you were to ask about conservatives versus progressives in general, I would say yes: The fact that bible-thumping Christians are far more likely to be conservative alone is enough to skew the average downwards. But the people who partake in the Less Wrong census and identify as conservative are a very different demographic, most likely.
All in all, I would have to say yes: I think it is much more likely that those 3% of Less Wrongers identify as conservatives because they are the kind of people who fail to apply rationality to their politics (and therefore have lower IQ) than it is that they identify as conservative because they are free and independent thinkers who are willing to go against the consensus opinion on this website (higher IQ).
See the penultimate paragraph of this comment, take a look at this, and try to guess whether US::conservatives have higher or lower Openness on average than US::liberals.
LW is a US-centric site. When I saw the option, I assumed it meant the US interpretation of the “conservative” label, which (from Europe) seems impossible to distinguish from batshit crazy.
I like to see myself as somewhat conservative, but I even more like to see myself as not batshit crazy.
The definition given in the survey was “Conservative, for example the US Republican Party and UK Tories: traditional values, low taxes, low redistribution of wealth”.
LW is a US-centric site. When I saw the option, I assumed it meant the US interpretation of the “conservative” label, which (from Europe) seems impossible to distinguish from batshit crazy.
As a US conservative, I can assure you the feeling is mutual, BTW.
Not sure what you mean by that. You feel European conservatism is crazy? You feel the interpretation of US conservatism is crazy? You feel US conservatives are functionally identical to crazy, if not actually so?
something like 80% of LWers are liberals/socialists
60%. But yes, it was funny to find out who the evil person was.
Actually, no, it was quite sad. I mean, when reading Yvain’s articles, I often feel a deep envy of the peaceful way he can write. I am more likely to jump in and say something aggressive. I would be really proud of myself if I could someday learn to write the way Yvain does. … Which still would make me just another bad guy. Holy Xenu, what’s the point of even trying?
It starts rather well—discussing an interesting study by Galton. High-brow, sophisticated style, an almost convincing impression of an upper-class liberal person, up until he gets to the issue that for some reason actually interests him—rationalizing the views of the PUA community on women. I say rationalizing because, of course, the mind projection fallacy would affect the opinions of PUAs about women just as much as it affects the opinions of women about women, but of course it is only in the latter that the fallacy is noticed.
This by the way is a great example of how cognitive fallacies are typically used here.
I’m not the least bit surprised that he would also support eugenics via sterilization. edit: or express sympathy towards it, or the like.
Could you do me a BIG FAVOR and every time you write “Yvain says...” or “Yvain believes...” in the future, follow it with ”...according to my interpretation of him, which has been consistently wrong every time I’ve tried to use it before”? I am getting really tired of having to clean up after your constant malicious misinterpretations of me.
So everyone should be aware that whenever Dmytry/private_messaging claims Yvain said something, that’s almost always wrong according to Yvain’s own view of what Yvain said.
“I suppose the difference is whether you’re doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we’re talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.”
Emphasis mine. In this original quote, in the hypothetical future, where Intel is building brain simulations that seem likely to become artificial general intelligence, he supports violence. As clear as it can be.
His subsequent re-formulation to make himself look less bad was:
“Even Yvain supports violence if AI seems imminent”. No, I might support violence if an obviously hostile unstoppable SKYNET-style AI seemed clearly imminent.
Now, the caveat here is that he would take brain simulators built in the hypothetical future by Intel to be an example of “an obviously hostile unstoppable SKYNET-style AI”, a clear contradiction (if it were so obvious, Intel wouldn’t be making those brain emulations).
Hmm. In all fairness I’m not quite sure what he means by eugenics. Historically, the term is virtually never applied to non-coercive measures (such as e.g. IQ cut-off at sperm banks).
“Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now.”
Politics is rarely discussed here to begin with and something like 65*% of LWers are liberals/socialists.
Yes, but people on the far right are disproportionately active in political discussions here, probably because it is one of the very few internet venues where they can air their views to a diverse and intelligent readership without being immediately shouted down as evil. If you actually measured political comments, I suspect you’d find that the explicitly liberal/social ones represent much less than 65%.
I think it’s worth noting that we are (yet again) having a self-criticism session because a leftist (someone so far to the left that they consider liberal egalitarian Yvain to be beyond the pale of tolerability) complained that people who disagree with them are occasionally tolerated on LW.
Come on. Politics is rarely discussed here to begin with and something like 65*% of LWers are liberals/socialists. If the occasional non-leftist thought that slips through the cracks of karma-hiding and (more importantly) self-censorship is enough to drive you away, you probably have very little to offer.
*I originally said 80%, but I checked the survey and it’s closer to 65%. I think my point still stands. Only 3% of LWers surveyed described themselves as conservatives.
Interesting. I wonder why LW has so few conservatives. Surely, just like there isn’t masculine rationality and feminine rationality, there shouldn’t be conservative rationality and liberal rationality. It also makes me wonder how valid the objections are in the linked post if the political views of LW skew vastly away from conservative topics.
Full disclosure: I’m a black male who grew up in the inner city and I don’t find anything particularly offensive about topics on LW. There goes my opposing anecdote to the one(s) presented in the linked blog.
At a guess, I’d say this is linked to religion. Once you split out the libertarian faction (as the surveys historically have), it’s quite rare for people on the conservative side of the fence (at least in the US) to be irreligious, and LW is nothing if not outspokenly secular.
People in the rationality community tend to believe that there’s a lot of low-hanging fruit to be had in thinking rationally, and that the average person and the average society is missing out on this. This is difficult to reconcile with arguments for tradition and being cautious about rapid change, which is the heart of (old school) conservatism.
I think futurism is anti-conservative.
My steelman of the conservative position is ‘empirical legislation’ : do not make new laws until you have decent evidence they achieve the stated policy goals. “Ah, but while you are gathering your proof, the bad thing X is still happening!” “Too bad.”
FAI is a conservative position.
To respond to the grandparent, I think in the US conservatives ceded all intellectual ground, and are therefore not a sexy position to adopt. (If this is true, I think one should view this as a bad thing regardless of one’s political affiliation, because ‘loyal opposition’ is needed to sharpen teeth).
There is a big difference between what sex you are and what beliefs you profess: The first should not be determined by how rational you are, while the second very much should. There should be nothing surprising about the fact that more intelligent and more rational people would have different beliefs about reality than less intelligent and less rational people.
Or to put it another way: If you believe that all political affiliations should be represented equally in the sceptic/rationalist community, you are implicitly assuming that political beliefs are merely statements of personal preference instead of seeing them as claims about reality. While personal preference plays a role, I would hope that there’s more to it than that.
Why not? Men and women are different in many ways. Why did you decide that a disposition to rationality can’t possibly depend on your sex (and so your hormones, etc.)?
It’s in reply to Quinton saying that there should be no masculine and feminine types of rationality. In other words, whether you are a man or a woman should not determine what the correct/rational answer is to a particular question (barring obvious exceptions). This is in stark contrast to asking whether or not political affiliation should be determined by how rational you are, which is another question entirely.
In other words: Just because correct answers to factual questions should not be determined by gender does not mean that political affiliation should not be determined by correct answers to factual questions.
I think political differences come down to values moreso than beliefs about facts. Rationalism doesn’t dictate terminal values.
Sometimes it is difficult to find out what is the different value and what is essentially the same value but different models.
For example two people can have a value of “it would be bad to destroy humanity”, but one of them has a model that humanity will likely destroy itself with ongoing capitalism, while the other has a model that humanity would be likely destroyed by some totalitarian movement like communism.
But instead of openly discussing their models and finding the difference, the former will accuse the latter of not caring about human suffering, and the latter will accuse the former of not caring about human suffering. Or they will focus on different applause lights, just to emphasise how different they are.
I probably underestimate the difference of values. Some people are psychopaths; and they might not be the only different group of people. But it seems to me that a lot of political mindkilling is connected with overestimating the difference, instead of admitting that our values in connection with a different model of the world would lead to different decisions. (Because our values are good, the different decisions are evil, and good cannot be evil, right?)
Just imagine that you would have a certain proof (by observing parallel universes, or by simulations done by superhuman AI) that e.g. a tolerance of homosexuality inevitably leads to a destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under rule of the communist party. If you had a good reason to believe these models, what would your values make you do?
(And more generally: If you meet a person with strange political opinions, try to imagine a least convenient world, where your values would lead to the same opinions. Even if that would be a wrong model of our world, it still may be the model the other person believes to be correct.)
I agree, though I’ll add that what facts people find plausible are shaped by their values.
Perfect information scenarios are useful in clarifying some cases, I suppose (and lets go with the non-humanity destroying option every time) but I don’t find them to map too closely to actual situations.
I’m not sure I can aptly articulate by intuition here. By differences in values, I don’t really think people will differ so much as to have much difference in terminal values should they each make a list of everything they would want in a perfect world (barring outliers). But the relative weights that people place on them, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if each is interested in using reason.
But I’ll concede that some ideologies are much more comfortable with more utilitarian analysis versus more rigid imperatives that are more likely to yield consistent results.
I’m always a little suspicious of this line of thinking. Partly because the terminal/instrumental value division isn’t very clean in humans—since more deeply ingrained values are harder to break regardless of their centrality, and we don’t have very good introspective access to value relationships, it’s remarkably difficult to unambiguously nail down any terminal values in real people. Never mind figuring out where they differ. But more importantly, it’s just too convenient: if you and your political enemies have different fundamental values, you’ve just managed to absolve yourself of any responsibility for argument. That’s not connotationally the same as saying the people you disagree with are all evil mutants or hapless dupes, but it’s functionally pretty damn close.
That doesn’t prove it wrong, of course, but I do think it’s grounds for caution.
How about different factions (landowners, truck drivers, soldiers, immigrants, etc.) all advocating their own interests? Doesn’t that count as “different values”?
Or, more simply, I value myself and my family, you value yourself and your family, so we have dufferent values. Ideologies are just a more general and complicated form.
Well, it depends what you mean by values. I was mainly discussing Randy_M’s comment that rationalism doesn’t dictate terminal values; while different perspectives probably mean the evolution of different value systems even given identical hardwiring, that doesn’t necessarily reflect different terminal values. Those don’t reflect preferences but rather the algorithm by which preferences evolve; and self-interest is one module of that, not seven billion.
No, I think people can be persuaded on terminal values, although to an extent that modifies my response above; rationality will tell you that certain values are more likely to conflict, and noticing internal contradictions—pitting two vales against each other—is one way to convince someone to alter—or just adjust the relative worth of—their terminal values. Due to the complexity of social reality I don’t think you are going to find too many with beliefs that are perfectly consistent; that is, any mainstream political affiliations is unlikely to be a shinning paragon of coherance and logical progression built upon core principles relative to its competitors. But demonstrate with examples if I’m wrong.
If you can persuade someone to alter (not merely ignore) a value they believe to have been terminal, that’s good evidence that it wasn’t a terminal value.
This is only true if you think humans actually hold coherent values that are internally designated as “terminal” or “instrumental”. Humans only ever even designate statements as terminal values once you introduce them to the concept.
I don’t think we disagree.
To clarify, I suspect most neurotypical humans may possess features of ethical development which map reasonably well to the notion of terminal values, although we don’t know their details (if we did, we’d be most of the way to solving ethics) or the extent to which they’re shared. I also believe that almost everyone who professes some particular terminal (fundamental, immutable) value is wrong, as evidenced by the fact that these not infrequently change.
If terminal values are definitionally immutable, than I used the wrong term.
“The first should not have anything to do with how rational you are, while the second very much should. ” What does should mean there, and from where do you derive it?
But it might affect how rational you are.
It’s possible.
Why are you bringing it up, though? As an aspiring rationalist, I believe it should be possible in principle to discuss whether one sex is more rational than the other, on average. However, it makes me feel uncomfortable that a considerable number of people here feel the need to inject the topic into a conversation where it’s not really relevant. If I were a woman, I can imagine I would feel more hesitant to participate on Less Wrong as a result of this, and that would be a pity.
It’s an interesting topic, the moreso because it is taboo, and not exactly tangential to the subject, I think.
Compare with Cosma Shalizi on the heritability of IQ (emphasis mine):
At this point I would have to conclude that the guy is either very deliberately blind or is lying through his teeth.
He, of course, knows very well what the consequences for his career and social life would be were he to admit the unspeakable.
You’re wrong.
First, about the consequences: the theatrics of the “unspeakable” are getting a little tiresome. Shalizi is a statistics professor at Carnegie-Mellon. The Mainstream Science on Intelligence was signed by 52 professors and included very clear statements about interracial IQ differences, lack of culture bias, and explicit heritability estimates. I would ask you to name the supposedly inescapable and grave “consequences for career and social life” these 52 professors brought on their heads.
Second, about the subject matter: this quote comes at the end of a long post in which Shalizi challenges the accepted estimates of IQ heritability, and criticizes at length the frequent but confused interpretation of heritability as lack of malleability. In his next post on the subject, he criticizes the notion of a single g factor as standing on a shaky ground, having been inferred by intelligence researchers on the basis of factor analysis that is known to statisticians to be inadequate for such a conclusion. Basically, Shalizi criticizes the statistical foundations employed by IQ researchers as being statistically unsound, and he carries out this critique on a much deeper technical level than what normally makes it into summaries, popular books and blog posts. On the face of it, this isn’t a completely ridiculous idea: we know that much of psychology and medicine routinely misuses statistics in ways that make experts wince, although we might also expect IQ researchers to have their statistical shit together much more decisively than your average soft-psychology paper.
There have been replies to Shalizi’s critique on the same technical level, and further debates. Frankly, most of this goes over my head. I know just about enough basic statistics to understand most of Shalizi’s critique but not assess it intelligently on my own, and certainly not to follow the ensuing debate. I doubt, however, that your dismissal of Shalizi’s honesty is based on a solid understanding of the arguments in this debate about statistical foundations of IQ research.
That flat and unconditional statement seems to be mismatched with your sentence a bit later:
Given that you say you lack the capability to “assess it intelligently on my own” and given that I don’t see the basis on which you decide I am statistically incompetent, I am rather curious why did you decide that I am wrong. Especially given that I was talking about my personal conclusions and not stating a falsifiable fact about reality.
P.S. Oh, and the bit about consequences for career? Try Blits, Jan H. The silenced partner: Linda Gottfredson and the University of Delaware
You’re wrong because your conclusion that Shalizi was either blind or lying rested on two premises: one, that heritability in racial IQ differences has been proven, and two, that for Shalizi to admit this fact would be uttering the “unspeakable” and would carry severe social and career-wise consequences. I wrote a detailed explanation about the way Shalizi challenges the first premise on statistical grounds, in the field where he’s an expert (and in a way that’s neither blind nor dishonest, albeit it could be wrong). I gave an example that illustrates that the second premise is wildly exaggerated, especially when applied to an academic such as Shalizi. That’s why you are wrong.
Your response was to twist my words into a claim that you are “statistically incompetent”, where in fact I emphasized that Shalizi’s critique was on a deep technical level, and that I myself lacked knowledge to assess it. That is cheap emotional manipulation. You also cited a paper about Gottfredson that wasn’t relevant to what I said. Given this unpromising situation, I’m sure you’ll understand if I neglect to address further responses of that kind.
How could you possibly do that for a subject about which you said that “most of this goes over my head”?
Short memory, too. Your words: “I doubt, however, that your dismissal of Shalizi’s honesty is based on a solid understanding of the arguments in this debate about statistical foundations of IQ research.”
Oh, I’m the understanding kind :-P
That’s a locked-up paper printed in a journal operated by a political advocacy group.
Linda Gottfredson doesn’t seem to have been “silenced”, though. (But I have a libertarian, rather than a left/right partisan, view on that concept. Someone who takes grants from wealthy ideological supporters instead of from government institutions is not thereby silenced; on the contrary, that would seem pretty darn liberating.)
The “Look Inside” button will give you the first two pages. I am not sure why the publisher of the journal is relevant unless you’re going to claim the paper is an outright lie.
It’s evidence. Are you advising to ignore it? Argument from authority is fallacious but reversed stupidity is not intelligence.
It’s evidence of what? That the paper fits well with the ideological orientation of the journal? Sure, but I’m not interested in that. Is it evidence that the paper incorrectly describes the relevant facts? I don’t think so.
Oh, I see. Thanks for the pointer.
The paper is from 1991 and seems to be about something that happened between 1988 and Gottfredson receiving a full professorship from U. Delaware in 1990? I’m not clear on the story there. But so far I’m not seeing silencing — just controversy and a question of whether the governors of an institution would choose to associate with a particular wealthy donor.
But again, I’ll admit I’m coming from a libertarian background — I see a big difference between what I’d call silencing (e.g. violence or threats of violence to get someone to stop speaking their views) and withdrawing association (e.g. choosing not to cooperate with someone on account of their views). The former is really scarily common, especially in online discourse today, so I’m kinda sensitive on that. :( That’s all complicated again by it being a government university involved, but except in really politicized cases that usually doesn’t affect the way the institution operates internally all that much.
Not quite. My reading is that Gottfredson was explicitly prohibited from accepting funding coming from the Pioneer Fund.
I agree that this is not true silencing, but I do not wish to defend the title of the article, anyway. It’s just a result of a quick Google search for “consequences” to holding, um, non-mainstream views on race and intelligence.
Here is another example.
What you & Anatoly_Vorobey have quoted is talking about heritable IQ differences between individuals (“who do not have significant developmental disorders”). Is it possible you’re conflating that with talking about heritable IQ differences between races or sexes?
That you use the word “unspeakable” suggests you are, as does the fact that your two cases of scientists suffering career consequences (Gottfredson & Cattell) are cases where they suggested genetic racial differences as well as genetic individual differences. (In fact, if I remember rightly, both went further and inferred likely policy implications of genetic racial differences.)
That’s a good point, I think the two issues got a bit conflated in the discussion here.
However I can’t but see it as a reinforcement of my scepticism. My impression is that the partial heritability of IQ in individuals is well established. At most you can talk about doubting the evidence or not believing it or something like that. Shalizi says he “has no evidence” which is not credible at all.
Yes, I think it supports your dim view of what Shalizi wrote. I also think it detracts from your implication that he’s simply evading saying the “unspeakable”, since heritable IQ differences between individuals are a much less contentious topic than heritable racial (or sexual) IQ differences.
As reasonable as that person sounds, I feel the need to point out that IQ differences between race has little or nothing to do with IQ differences between sexes (and even less with rationality, but I guess we gravitated away from that). Even if there is a “stupid gene”, to phrase it very dumbly, there is still no reason to believe that someone with 2 X chromosomes would inherit this gene while someone with the same parents but with a Y chromosome would not.
If you (or anyone) want to argue that women naturally have lower IQ than men, I would go with an argument based on hormones instead. Sounds much more plausible to me.
Where do you think the differences in hormone levels come from?
Food, genes, certain types of activity such as sports and competitiveness in general, the environment you grow up in, being in a position of authority, to name some factors that influence hormone production.
It’s certainly not just the gender divide. If you think that testosterone makes men smarter than women on average, you would also have to accept the conclusion that women with more testosterone than men will be smarter than men on average. All other things being equal, of course.
Testosterone levels in men and women are in completely different ballparks, and there is no overlap in healthy individuals of the different sexes beyond puberty. This would make me think the difference is mainly genetic.
I’m not arguing for anything beyond this point, so we don’t have to go there.
I stand corrected on the testosterone levels: The difference is indeed greater than I thought. I will accept that the difference is mainly, but certainly not solely, genetic.
You are absolutely correct on the facts, and in a saner world I could leave it at that, but you seem to have missed an unspoken part of the argument:
The common factor isn’t genetics per se but rather an appeal to inherent nature. Whether that nature is the genetic legacy of selection for vastly different ancestral environments or due to the epigenetics of sexual dimorphism is very important in a scientific sense but not in the metaphysical sense of presenting a challenge to the ideals of “equality” or the “psychic unity of mankind.”
When Dr Shalizi writes the rhetorical question “why it is so important to you that IQ be heritable and unchangeable?” in the context of “‘human equality’ and ‘genetic identity’”, his tone is not that of scientific skepticism of an unproven claim but rather of an apologetic defense of an embattled creed. Really, why is it so important to you what the truth is? After all, we don’t have any evidence to suggest that the doctrines are wrong, so why not just repeat the cant like everyone else? Who else but a heretic would feel the need to ask uncomfortable questions?
For the most part, scientists writing against the hereditarian position don’t bother debating the facts anymore; now that actual genetic evidence is starting to come out, they know it’ll just make them look foolish in a few years, and the psychometric evidence has survived four decades of concentrated attack already. It’s all about implications and responsibility now, or in other words that the lie is too big to fail. It hardly matters to them whether the truth at hand is a genetic or a hormonal inequality; they just want it to go away.
I think you misinterpret Dr Shalizi, and do him a disservice. I think his answer is perfectly reasonable from a Bayesian point of view. Basically, I see three common reasons to spend time researching differences between races:
A) People who are genuinely interested in the answer, for pragmatic or intellectual reasons
B) People who are racist and want to hear a particular answer that fits their preconceived views
C) People who are trying to be controversial/contrarian/want to provoke people
Certainly there are people who are genuinely curious about the answer, purely for intellectual reasons (A). I am somewhat interested myself. However, the fact of the matter is that many others are interested purely for racist reasons (B). Many racists aren’t open about their racism, and instead mask it as honest scientific inquiry, making B indistinguishable from A. Showing interest in the subject is therefore Bayesian evidence for B as much as it is for A. Even worse, everyone realizes this on an intuitive level, and realizes that everyone else does too, which causes most As to shut up for fear of being identified as Bs, while the Bs continue what they are doing. This compounds the effect. Meanwhile, Cs arise expressly because it is a hot-button topic. As a result it is entirely rational to conclude that someone who is constantly yelling about race and inserting the subject into other conversations is more likely than average to be a racist. And of course, it’s incredibly frustrating if you are an A and just want an honest conversation about the subject, which is now impossible (thanks, politics!).
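To make the “Bayesian evidence” step concrete, here is a minimal sketch in Python. The priors and likelihoods are made-up illustrative numbers, not measurements; they only show the direction of the update, not its actual size.

    # Toy model: why does a given person keep raising the topic?
    # A = genuine curiosity, B = motivated racism, C = contrarian provocation.
    priors = {"A": 0.50, "B": 0.30, "C": 0.20}        # assumed base rates
    likelihoods = {"A": 0.10, "B": 0.60, "C": 0.50}   # assumed P(keeps inserting the topic | type)

    # Bayes' rule: P(type | behaviour) is proportional to prior * likelihood.
    evidence = sum(priors[t] * likelihoods[t] for t in priors)
    posterior = {t: priors[t] * likelihoods[t] / evidence for t in priors}
    print(posterior)   # roughly {'A': 0.15, 'B': 0.55, 'C': 0.30}

With these made-up numbers, observing the behaviour raises P(B) from 0.30 to roughly 0.55 and drops P(A) from 0.50 to roughly 0.15, which is all the argument above needs.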
I think Shalizi deals with this messed up situation admirably: Making clear what he believes while doing everything to avoid sounding controversial or giving fuel to racists. Of course this doesn’t work very well because people who call others racist fall into two categories themselves:
D) People who are genuinely worried about the dangerous effects of racist claims.
E) People who realise they can win any argument by default by calling the other person a racist.
And people who fall under category E do not, of course, care about the truth of the matter in the slightest.
Kind of tempted to write a top-level post about this, now. Hmm...
I think that the fact that there is a debate, and that the “good guys” use name-calling instead of scientific arguments, also increases the number of people in group A.
It’s a bit like telling people not to think of an elephant, and then justifying it by saying that elephant-haters are the ones most obsessed with elephants, therefore thinking of an elephant is evidence of being an evil person. Well, as soon as you told everyone not to think of an elephant, this stopped being true.
Actually, it is more like not being allowed to talk about the elephant (...in the room. See what I did there?). Not talking about a subject is much easier than not thinking about it. And because everybody knows that talking about the elephant will get you called an elephant-hater, and that nothing good whatsoever will come of it in 95% of cases, the only people who continue to talk about elephants are people who care so strongly about the subject that they are willing to be called elephant-haters just so they can be heard. So that leaves people who really hate elephants and people who really can’t stand being told that they’re not allowed to say something (and super-dedicated elephant scientists, I guess, but there aren’t very many of those).
The most difficult part of not talking about the elephant is when someone suddenly says: “There is no elephant in this room, and we all know it, don’t we?”, interpreting the rule as forbidding talk about the elephant, but not talk about the absence of the elephant.
Specifically, if there is a rule against mentioning genetic differences—and the goal is to avoid the discussion about genetics, not to assert that there are no differences—the rule should equally forbid saying that there are genetic differences, and that there aren’t genetic differences.
The rule should make very clear whether its intent is to 1) stop both sides of the debate, or 2) stop only one side of the debate, letting the other side win. Both options make sense, but it is difficult to follow the rule when it is not clear which of these two options was meant.
In the same sense that showing interest in medicine is Bayesian evidence for me wanting to poison my neighbors.
I’d say that the percentage of people showing interest in medicine who want to poison their neighbour is rather lower than the percentage of people talking about genetic differences between races who are racist.
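For contrast, the same toy calculation as above, again with purely made-up numbers, shows why the medicine analogy barely moves the needle: the prior is tiny and the likelihood ratio is modest, so the update is technically real but practically negligible.

    p_poisoner = 1e-6        # assumed prior: a random person wants to poison their neighbours
    likelihood_ratio = 2.0   # assumed: interest in medicine is twice as common among would-be poisoners

    prior_odds = p_poisoner / (1 - p_poisoner)
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds)    # still ~2e-6: Bayesian evidence, yes, but negligible in practice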
That depends on the definition of “racist” used.
I read Shalizi differently, as asking something like, “Really, is it because you care about the truth qua truth that you find this particular alleged truth so important?” Far from apologetic, he is — cautiously, because there is a counterfactual gun to his head — going on the offensive, hinting that the people insistently disagreeing with him are motivated by more than unalloyed curiosity. It is not, of course, dispassionate scientific scepticism, but nor is it a defensive crouch.
My interpretation could be wrong. Shalizi isn’t spelling things out in explicit, objective detail there. But my interpretation rings truer to my gut, and fits better with the fact that his peroration rounds off ten thousand words of blunt and occasionally snarky statistical critique.
Yes, Shalizi was talking about something completely different, but his attitude was similar to yours. He was saying: “sure, I could imagine that it might be so (that there might be a heritable difference), but why are you so invested in believing in that? Why do you fight for it so much?”. I meant for my quotation to bolster your case.
Ahhhh, you’re right, I completely misunderstood your intent. In that case we are in agreement.
It affects your argument that there is something wrong with having a skewed gender balance here.
Would you predict that the average IQ among LW census responders who self label as conservatives is lower? If so, how strong would you predict the effect to be?
Hmmm, interesting question. If you were to ask about conservatives versus progressives in general, I would say yes: the fact that bible-thumping Christians are far more likely to be conservative is by itself enough to skew the average downwards. But the people who take part in the Less Wrong census and identify as conservative are most likely a very different demographic.
All in all, I would have to say yes: I think it is much more likely that those 3% of Less Wrongers identify as conservatives because they are the kind of people who fail to apply rationality to their politics (and therefore have lower IQ) than it is that they identify as conservative because they are free and independent thinkers who are willing to go against the consensus opinion on this website (higher IQ).
Care to give odds? There is a narrow opportunity for betting (until Yvain releases results).
See the penultimate paragraph of this comment, take a look at this, and try to guess whether US::conservatives have higher or lower Openness on average than US::liberals.
LW is a US-centric site. When I saw the option, I assumed it meant the US interpretation of the “conservative” label, which (from Europe) seems impossible to distinguish from batshit crazy.
I like to see myself as somewhat conservative, but I like even more to see myself as not batshit crazy.
The definition given in the survey was “Conservative, for example the US Republican Party and UK Tories: traditional values, low taxes, low redistribution of wealth”.
As a US conservative, I can assure you the feeling is mutual, BTW.
Not sure what you mean by that. You feel European conservativism is crazy? You feel the interpretation of US conservatism is crazy? You feel US conservatives are functionally identical to crazy, if not actually so?
I meant that all the mainstream European parties seem crazy.
-- Stephen Colbert
60%. But yes, it was funny to find out who the evil person was.
Actually, no, it was quite sad. I mean, when reading Yvain’s articles, I often feel a deep envy of the peaceful way he can write. I am more likely to jump in and say something aggressive. I would be really proud of myself if I could someday learn to write the way Yvain does. … Which would still make me just another bad guy. Holy Xenu, what’s the point of even trying?
Here’s one of his best articles:
http://lesswrong.com/lw/dr/generalizing_from_one_example/
It starts rather well—discussing an interesting study by Galton. High-brow, sophisticated style, an almost convincing impression of an upper-class liberal person, up until he gets to the issue that for some reason actually interests him—rationalizing the views of the PUA community on women. I say rationalizing because, of course, the mind projection fallacy would affect the opinions of PUAs about women just as much as it affects the opinions of women about women, but of course it is only in the latter that the fallacy is noticed.
This by the way is a great example of how cognitive fallacies are typically used here.
I’m not the least bit surprised that he would also support eugenics via sterilization. edit: or express sympathy towards it, or the like.
Yvain has told you in the past the following:
So everyone should be aware that whenever Dmytry/private_messaging claims Yvain said something, that’s almost always wrong according to Yvain’s own view of what Yvain said.
The original quote from Yvain was
Emphasis mine. In this original quote, in the hypothetical future where Intel is building brain simulations that seem likely to become artificial general intelligence, he supports violence. As clear as it can be.
His subsequent re-formulation to make himself look less bad was:
Now, the caveat here is that he would count brain simulators built in the hypothetical future by Intel as an example of “an obviously hostile unstoppable SKYNET-style AI”, which is a clear contradiction (if it were so obvious, Intel wouldn’t be making those brain emulations).
I don’t think he does...
Hmm. In all fairness I’m not quite sure what he means by eugenics. Historically, the term is virtually never applied to non-coercive measures (such as an IQ cut-off at sperm banks).
From this comment:
“Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now.”
Yes, but people on the far right are disproportionately active in political discussions here, probably because it is one of the very few internet venues where they can air their views to a diverse and intelligent readership without being immediately shouted down as evil. If you actually measured political comments, I suspect you’d find that the explicitly liberal/socialist ones represent much less than 65%.
I did not know that, thanks!
Turns out I was wrong, according to the 2012 survey only like 65% of LWers are socialist/liberals.
Ok, that sounds much more reasonable.