Reason as memetic immune disorder
A prophet is without dishonor in his hometown
I’m reading the book “The Year of Living Biblically,” by A.J. Jacobs. He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year. He quickly found that
a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays, like wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and
this didn’t seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God.
You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion. People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense—they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do.
I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time—they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases. We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying, “but they’re just believing that the Bible means what it says...”
How do we explain the blindness of people to a religion they grew up with?
Cultural immunity
Europe has lived with Christianity for nearly 2000 years. European culture has co-evolved with Christianity. Culturally, memetically, it’s developed a tolerance for Christianity. These new Christian converts, in Uganda, Papua New Guinea, and other remote parts of the world, were being exposed to Christian memes for the first time, and had no immunity to them.
The history of religions sometimes resembles the history of viruses. Judaism and Islam were both highly virulent when they first broke out, driving the first generations of their people to conquer (Islam) or just slaughter (Judaism) everyone around them for the sin of not being them. They both grew more sedate over time. (Christianity was pacifist at the start, as it arose in a conquered people. When the Romans adopted it, it didn’t make them any more militaristic than they already were.)
The mechanism isn’t the same as for diseases, which can’t be too virulent or they kill their hosts. Religions don’t generally kill their hosts. I suspect that, over time, individual selection favors those who are less zealous. The point is that a culture develops antibodies for the particular religions it co-exists with—attitudes and practices that make them less virulent.
I have a theory that “radical Islam” is not native Islam, but Westernized Islam. Over half of 75 Muslim terrorists studied by Bergen & Pandey 2005 in the New York Times had gone to a Western college. (Only 9% had attended madrassas.) A very small percentage of all Muslims have received a Western college education. When someone lives all their life in a Muslim country, they’re not likely to be hit with the urge to travel abroad and blow something up. But when someone from an Islamic nation goes to Europe for college, and comes back with Enlightenment ideas about reason and seeking logical closure over beliefs, and applies them to the Koran, then you have trouble. They have lost their cultural immunity.
I’m also reminded of a talk I attended by one of the Dalai Lama’s assistants. This was not slick, Westernized Buddhism; this was saffron-robed fresh-off-the-plane-from-Tibet Buddhism. He spoke about his beliefs, and then took questions. People began asking him about some of the implications of his belief that life, love, feelings, and the universe as a whole are inherently bad and undesirable. He had great difficulty comprehending the questions—not because of his English, I think; but because the notion of taking a belief expressed in one context, and applying it in another, seemed completely new to him. To him, knowledge came in units; each unit of knowledge was a story with a conclusion and a specific application. (No wonder they think understanding Buddhism takes decades.) He seemed not to have the idea that these units could interact; that you could take an idea from one setting, and explore its implications in completely different settings. This may have been an extreme form of cultural immunity.
We think of Buddhism as a peaceful, caring religion. A religion that teaches that striving and status are useless is probably going to be more peaceful than one that teaches that the whole world must be brought under its dominion; and religions that lack the power of the state (e.g., the early Christians) are usually gentler than those with the power of life and death. But much of Buddhism’s kind public face may be due to cultural norms that prevent Buddhists from connecting all of their dots. Today, we worry about Islamic terrorists. A hundred years from now, we’ll worry about Buddhist physicists.
Reason as immune suppression
The reason I bring this up is that intelligent people sometimes do things more stupid than stupid people are capable of. There are a variety of reasons for this; but one has to do with the fact that all cultures have dangerous memes circulating in them, and cultural antibodies to those memes. The trouble is that these antibodies are not logical. On the contrary; these antibodies are often highly illogical. They are the blind spots that let us live with a dangerous meme without being impelled to action by it. The dangerous effects of these memes are most obvious with religion; but I think there is an element of this in many social norms. We have a powerful cultural norm in America that says that all people are equal (whatever that means); originally, this powerful and ambiguous belief was counterbalanced by a set of blind spots so large that this belief did not even impel us to free slaves or let women or non-property-owners vote. We have another cultural norm that says that hard work reliably and exclusively leads to success; and another set of blind spots that prevent this belief from turning us all into Objectivists.
A little reason can be a dangerous thing. The landscape of rationality is not smooth; there is no guarantee that removing one false belief will improve your reasoning instead of degrading it. Sometimes, reason lets us see the dangerous aspects of our memes, but not the blind spots that protect us from them. Sometimes, it lets us see the blind spots, but not the dangerous memes. Either way, reason can leave an individual unbalanced, no longer adapted to their memetic environment, and free to follow previously-dormant memes through to their logical conclusions. To paraphrase Steven Weinberg: For a smart person to do something truly stupid, they need a theory.
The vaccines?
How can you tell when you have removed one set of blind spots from your reasoning without removing its counterbalances? One heuristic to counter this loss of immunity might be to be very careful when you find yourself deviating from everyone around you. But most people already do this too much.
Another heuristic is to listen to your feelings. If your conclusions seem repulsive to you, you may have stripped yourself of cognitive immunity to something dangerous.
Perhaps the most-helpful thing isn’t to try to prevent memetic immune disorder, but to know that it could happen to you.
Another reason converts are more zealous than people who grew up with a religion is that conversion is a voluntary act, whereas being born into a religious family is not. Converting to a religion late in life is a radical move, one that generally requires a certain amount of zeal and motivation to begin with, so converts are pre-selected to be zealous.
I’ve come at this from a similar angle that is, I think, different in the details: rationality as a failure of compartmentalization—the attempt to take everything you hear seriously.
Michael Vassar, again, has a similar angle which is different in the details: nerds result from failing to learn the nonverbal rules of adulthood that are different from the verbal rules.
Many people enjoy reading books and watching films where the lead characters form a small group, pitted against all the odds to try to save the world. Many people—secular people—pay lip-service to the idea that every person in the world is equally important, and that we should value the life of an African peasant farmer as equal to our own.
It seems, however, that most people don’t actually take these notions seriously, because their actions seem to have little to do with such beliefs.
One day, a bunch of nerds got together and started a project called the Singularity Institute, and they actually took seriously the notion that they should try to save the world if it really was threatened, and that the lives of others should be assigned equal weight to their own. Almost everyone else thought they were really weird when they started to try to act on these beliefs.
This is a terribly counter-productive attitude to have. I don’t think trying to save the world is what people found weird. Lots of people, especially young people, have aspirations of saving the world. People think the Singularity Institute is weird because SIAI’s chosen method of saving the world is really unconventional, not marketable, and pattern-matches to bizarre sci-fi fantasies (and some of the promoters of those fantasies are actually connected to the institute). If you think the pool of potential donors is all hypocrites, you make it really difficult to bring them in.
There is a point I am trying to make with this: the human race is a collective where the individual parts pretend to care about the whole, but actually don’t care, and we (mostly) do this the insidious way, i.e. using lots of biased thinking. In fact most people even have themselves fooled, and this is an illusion that they’re not keen on being disabused of.
The results… well, we’ll see.
Look, maybe it does sound kooky, but people who really genuinely cared might at least invest more time in finding out how good its pedigree was. On the other hand, people who just wanted an excuse to ignore it would say “it’s kooky, I’m going to ignore it”.
But one could look at other cases, for example direct donation of money to the future (Robin has done this).
Or the relative lack of attention to more scientifically respectable existential risks, or even existential risks in general. (Human extinction risk, etc).
See, e.g. Eliezer writing in 2000:
“There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I’m working to save everybody, heal the planet, solve all the problems of the world.”
Michael Vassar also has Memes and Rational Decisions, which seems very close to the original post.
Thanks for the link, excellent reading. I really must read more of Michael’s stuff.
I realize this is almost 2 years after your original posting, I’m going through Jimmy’s top 100 articles now.
I just wanted to say that failing to learn the nonverbal rules of adulthood that are different from the verbal rules pretty much describes Aspies. Yes, there is a big overlap between Aspies and nerds, but I think you can be an Aspie and not a nerd and vice versa.
Oh, hm. I wish I had started taking these hypotheses seriously back in September 2009. /sigh. We need rationalist dojos so bad. It sucks that there’s no one qualified to teach them, yet.
On the whole a very good post. But here --
-- you misunderstand the position that you’re criticizing. The claim of the geneticists is not that race does not exist, but rather that it doesn’t map to the joints at which geneticists, qua geneticists, find it particularly useful to carve reality. But when trying to understand the social world, within which your kid in Detroit is steeped, Race is certainly a useful way to carve reality. And this is all that people mean when they say that Race is a social concept, not a genetic one.
That struck me as a stunning nonsequitur. The kid in Detroit has no possible way of knowing how much of what they see is genetic versus environmental—unless they go online and read the scientific literature. Offering that sort of surface observation as evidence is on the level of “any kid in Detroit can see the Earth is flat”.
Surely they could very easily observe that people with dark skin typically have parents with dark skin.
But the child has good evidence for the social concept, if not for the genetic one.
So he can disagree with “there is no such thing as race”.
Is this another one of those blegg/rube questions?
You’re right.
Some of the people making the claim probably have a more nuanced interpretation in mind. Many people repeating the claim have the simple interpretation in mind; or may have the nuanced interpretation, but are stating it in a way that they hope will be misinterpreted, yet give them plausible deniability.
I don’t remember now what the original “respectable geneticists” said. I have seen a summary of their work in Science magazine that used the simple interpretation. Does anyone have a link to some of the original publications?
A request to see the article corrected. This post has the combination of simplicity and relevance that makes me want to show it to others outside the lesswrong community, but the race anecdote damages its perceived credibility greatly.
PS: If you agree with me thus far, I would also recommend removing the self-help section: “The vaccines: Updating and emotions.” It’s lower quality than the rest of the article in that support for your claims here is very weak; it is also much narrower in relevance (as only hardcore rationalists would be interested.)
I’m not sure if Phil got the details right—but there are definitely a whole bunch of otherwise well-educated people who happily spout politically-correct nonsense on the issues of race and equality as though it was actually scientific truth. They typically cite Lewontin—but they ignore Lewontin’s fallacy.
Thank you for the mention of Lewontin’s Fallacy. I have been stuck trying to remember the name of that fallacy for half a year (although, to be fair, I had not looked very hard to find out its name), due to a discussion on the Forums at the Richard Dawkins’ website.
I am amazed at the level of discourse that many discussions on that site fall to. There are a number of very bright people there, yet it seems that many commit all manner of fallacies in the name of either political correctness, or because they fear giving ground to irrational theists/theism. A great example is about the term belief. Many on the RDF state that they “Have no Beliefs,” yet fail to realize that this statement itself is a belief.
They have had many discussions in which the issue of race has come up, and I remembered reading about Lewontin, yet could not recall his name… Thanks, again.
Don’t be too hard on them for that!
“Belief” is an overloaded word. Some use it to mean a p=1 concept, while others use it to mean a p > 0.95 concept. Of course, p=1 ideas are crazy faith issues, but some people seem to sustain them. The “I have no beliefs” crowd just mean to say that they “have no p=1 beliefs that they hold with absolute faith”—which is a fair enough thing to observe.
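One way to make the p=1 case concrete, as a minimal sketch using nothing beyond standard probability: under Bayes’ rule, a belief held with probability exactly 1 can never be revised by any evidence, which is what makes it a matter of faith rather than an ordinary strong belief.

\[
P(H) = 1 \;\Rightarrow\; P(E \wedge \lnot H) \le P(\lnot H) = 0 \;\Rightarrow\; P(E \wedge H) = P(E),
\]
\[
\text{so}\quad P(H \mid E) \;=\; \frac{P(E \wedge H)}{P(E)} \;=\; 1 \qquad \text{for any evidence } E \text{ with } P(E) > 0.
\]

A p=0.95 belief, by contrast, gets pushed up or down by evidence in the usual way.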
I wish that were the case, but it seems to me that the “I have no beliefs” crowd that I am familiar with means that they have no beliefs for which P<1.
In other words, they either know something with absolute certainty, or they give it no credence whatsoever.
I’ve lost count of how many times I have told them that they need both to reclaim the word “Belief” and to understand that they have many things for which P≠1 (P<1, but greater than .5, or some other arbitrary number at which they will accept some information as being true).
Yet, sometimes the certainties of faith get assumed by those without faith (of the religious kind)
I removed the not-very-good example, but I don’t want to remove the final section. It is lower quality. What bothers me more is that it doesn’t have a proper conclusive feeling about it; it doesn’t tie the post together and wrap it up in a satisfying way.
Phil,
If you are looking for others’ thoughts that may be able to tie your post together, you might want to read the Douglas Adams Artificial god that I cite here: http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/14az
While looking for this info, I came across this tidbit: James Watson, who claimed blacks are intellectually inferior to whites, is 1⁄6 black. Wonder how he felt on finding that out, especially since he’s the guy who said, “I didn’t win a Nobel prize. I won the [meaning most important ever] Nobel prize.”
I know a woman who I think had negative feelings about blacks, who learned, as an adult, that her father was black. She’d always thought he was Hispanic. She was a little upset by it, but mostly thought it was a funny story to tell people.
One original publication is in the Feb. 8 2008 Nature Genetics. I don’t have it. Articles citing it here. None of the titles mention race.
I don’t think you understand Watson’s point of view.
If I understand Watson correctly, he thinks the evidence suggests that the average IQ of native Africans is below 100. He didn’t say that all Africans have IQs below 100. I don’t know why you think he’d care that he’s descended from a black person. Presumably he thinks a substantial minority of Africans still have higher IQs than 100, so if he really cares about the IQ of his black ancestor, it’s still plausible that s/he had a high IQ.
But why would anyone care about the IQ of their ancestors? Even if you do think there are racial cognitive differences, there are better ways to measure your own IQ than to guess based on the race of your ancestors.
True, we should have more faith in our own demonstrated intelligence. But humans place values on things, and then make associations, and then have feelings. Under the circumstances, I would not expect him to have the level of detachment you suggest.
Anyway, I didn’t mean to sound gleeful. (I’ll edit my original statement a bit to try to fix that.) Or, rather, what glee I had was motivated not by my liberal, forward-looking views on race, but by my impression that Watson is full of himself. I approve of scientists making politically-unpopular statements when based on evidence.
I suspect that the same people who want to say there is no such thing as race, also would enjoy saying that Watson is 1⁄6 “black”.
Are you suggesting that Watson’s statements were not based on evidence?
In the controversial comments that led to his retirement, Watson claimed of those in Africa:
“all our social policies are based on the fact that their intelligence is the same as ours – whereas all the testing says not really.”
The lower average test scores of Africans is surely an undisputed scientific fact.
Whatever you think about Watson, in this case, he had the scientific evidence firmly on his side—as far as any scientific issue was concerned.
But is it a fact that “all our social policies are based on the fact that their intelligence is the same as ours”? That is, not only is there a difference in IQ distribution, but that difference is so significant that “all our social policies” are not going to help them?
I remember reading something by Flynn explaining that people with IQs below 70 today still have problems functioning even though they might score in the average range if given an IQ test normed on a population from the same country decades ago. From this I gather that the correlation between IQ and how well someone can function breaks down when you compare different populations.
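A rough back-of-the-envelope version of that renorming point, assuming the commonly cited Flynn-effect rate of about 3 IQ points per decade (the true rate varies by country and era):

\[
\underbrace{70}_{\text{score on current norms}} \;+\; \underbrace{3\ \tfrac{\text{points}}{\text{decade}} \times 10\ \text{decades}}_{\approx\, 30\ \text{points}} \;\approx\; 100 \quad \text{against norms set a century earlier.}
\]

The same raw performance can sit two standard deviations apart depending on which population and date the test was normed against, which is why cross-population comparisons of “functioning at IQ 70” break down.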
In order to conclude that Watson’s quoted remark is scientific fact, you must not only prove that Africans have lower average IQ test scores; you must also show that:
1. this interferes with our social policies towards Africa in some way; and
2. any evidence we draw about the capabilities of Africans with a certain IQ is based on studies of that same population, not on Americans or Europeans or whatnot with the same IQ.
It’s unlikely that such a sweeping statement as “all our social policies”, applied to the whole of Africa, is correct, considering the considerable variation both in social policies and in conditions across the continent.
Additionally, I find it interesting that people see the backlash against these remarks as merely “politically correct” anti-racism. It seems clear that this is a challenge to an entrenched way of thinking about a wide range of problems including international relations and poverty. Watson is claiming (in a rather nonspecific and unsupported way from what I’ve heard, which is only second hand) that the status quo for trying to help or otherwise influence Africa isn’t working because we make bad assumptions about their intelligence. Now, I’m sure we make many, many bad assumptions about Africans that influence our social policies and that may break many or make them less efficient or keep us from hitting on something that really works. Intelligence is the most controversial candidate, of course, for historical reasons. But some of the backlash is embedded in our very lack of practice in treating any such assumptions as malleable.
Can we please not have this discussion here? Posters here are posting under their real names or lasting pseudonyms, so they can’t defend the un-PC arguments without making numerous crimethink statements that could rebound against them in real life. So those who advance the PC arguments will wind up shadowboxing with those who don’t fear retaliation or reputational costs, and we won’t get a real honest discussion.
Questions of race and intelligence will be settled decisively within 5 or 10 years when large scale whole-genome sequencing studies are done.
Oh, look honey! It’s someone who thinks zealots are willing to change their minds when presented with overwhelming evidence!
That’s nice, dear.
Is it just me misunderstanding the subtleties of a foreign language, or is this un-LW-ishly rude?
It’s complicated. Different people will probably interpret it differently. I figured the template is common enough that people would see it as a reference and not take the sarcasm personally, but still realize the argument rests on a shaky assumption. I got voted up a lot, so I figure people took it the way I intended.
While I’m not sure if avoiding the discussion altogether is an optimal solution I do share your frustration. It took me a while to realise that using my real name here was a bad idea. We aren’t all that much less wrong.
Yeah, rule numero uno of the internet is to remain ANON as much as possible.
Precisely. Especially since, while a lot of us have jobs where we either work for ourselves or our bosses just don’t care… some of us have those repressive nightmare jobs where our bosses google for us regularly outside of work hours.
But isn’t it easy to make a temporary pseudonymous account on this website?
14 years have passed. Has the issue been decisively settled?
I feel like a lot more direct genetic evidence has surfaced: 1, 2, 3, 4.
Those first 4 links, I think, are pretty unconvincing in isolation, but this one is fine.
[Disclaimer 1: I just linked things that I remembered off the top of my head.]
[Disclaimer 2: I think that the case for hereditarianism was quite overwhelming even 14 years ago, so you should consider me biased.]
It was political correctness—and transparently so—just as it was for Lawrence Summers, Chris Brand and Frank Ellis before him.
“What is ethically wrong is the hounding, by what can only be described as an illiberal and intolerant ‘thought police’, of one of the most distinguished scientists of our time, out of the Science Museum, and maybe out of the laboratory that he has devoted much of his life to, building up a world-class reputation.”
Richard Dawkins.
I have a proposed explanation for “backlash”: personal investment.
Some of us may have done well in IQ tests, and focused on intelligence (and the associated notion of rationality) as personal strengths. Accepting the notion that IQ tests don’t measure anything “real” (except in the sense that they measure “the real ability to perform well on IQ tests”), would also mean downgrading estimation of one’s personal worth.
Explaining away evidence against IQ tests as “merely politically correct anti-racism” allows retaining that sense of worth.
Check with what happened: Watson was castigated for his views on the lower intelligence of Africans—not because of his other views about social policies.
I know. I knew when I was writing that. The ideas in that paragraph were just forming as I typed them out, which is why I attributed cause where I didn’t mean to.
Something closer to what I mean: It’s fine to discuss intelligence differences between races. My intro psych textbook has a long discussion of it. People have an uproar when, instead of saying “here are the test results, and here are the results of experiments that shed some insight into the cause of the differences (i.e., environment vs. genetics)” and leaving it at that, someone says that there’s a difference in IQ and that that explains social inequity.
So, yeah, they’re objecting because it’s racist, not because it challenges institutions or policies (other than the institution of denying racial difference, which to me seems relatively rational considering all the sources of bias that would cause people to make too much of racial difference). But it’s not racist just because he says Africans have done poorly on IQ tests but because he defaults to assuming that that’s enough to be “gloomy about the prospects of Africa”.
Furthermore, his quote in this piece of the interview:
is pretty much as racist as you can get. His evidence here is the anecdotal observations of non-specific employers, which fit right into a really old stereotype. Additionally, it seems odd: employers recruit whom they employ, and you wouldn’t hire someone who had insufficient intelligence to do what you were hiring them for. The job selects for people of a certain intelligence range (which may be offset by, say, an intelligent person with a disability or who just didn’t get an education, or an average person who’s outperforming expectations of her intelligence due to hard work and a certain cultural background), so race shouldn’t matter, because you can only hire someone of a given race for a job if they have adequate intelligence for the job.
All the press I’ve read so far on the topic stresses general racism, his tendency to make claims without scientific evidence, and his intentional offensiveness, and doesn’t focus entirely on the issue of “lower intelligence of Africans”, as you seem to think. Maybe you’re talking about official reprimands or such that I haven’t read, but the public objected to a lot more than just that. So I think you’re misguided in asserting that the only controversial part of what he said was low average African IQ, and thereby claiming that he was on firm scientific ground.
Another part of the problem is intelligence = IQ. There’s evidence (from the Flynn effect and cross-cultural examination of answers given to standard IQ test type questions) that environment and culture strengthen specific cognitive abilities and predispose one to reason in certain ways or interpret questions in certain ways. So even if IQ scores show that average African IQ is whatever, that’s not indisputably the same as showing lower intelligence, because you could usefully define intelligence to include cognitive abilities/reasoning that Africans are stronger at than Westerners. And here I’ll mention that I don’t want to get in an argument over whether defining intelligence that way is good or not—I’m just saying it in response to this:
Because while that sentence can be true, it is not sufficient evidence to conclude, as Watson does, that the testing is adequate to say Africans have lower intelligence. That depends on how you define intelligence. (Although his actual words just say that their intelligence is different, which does seem clear, but from other remarks he seems to think that Africans have lower intelligence due to genes, which is not scientifically undisputed at all.)
I am bothered by the fact that I know the discussions on race and intelligence that I have read are heavily biased in the information they present—for instance, in the US, racial intelligence differences correlating better with degree of pigmentation than with amount of African genes—because this information seems like it’s picked in order to prove the politically correct point, whereas the other side likes to ignore all the evidence for the politically correct point and just simplify things because it seems obvious to them that the bigoted view is true. Point me to a transparent, relatively unbiased discussion of all available experimental evidence and I’ll thank you.
I lean toward the politically correct side because it’s the side that presents a lot of evidence and then says, “It’s kinda inconclusive and we don’t really know what causes group intelligence difference, although we do know a lot of it isn’t genetic.” Whereas the non-politically-correct side attempts to explain away a lot of the evils of the world by saying inequity is genetically based just because there are differences in the way groups perform on a psychometric instrument. But it seems like history and other social forces can greatly affect the conditions of one group: a few generations ago, when my ancestors were impoverished farmers in Europe, I have little doubt they would’ve failed modern IQ tests, but my race’s genes haven’t changed since then, and the genes weren’t responsible for our economic, social, and political problems.
It’s both reasonable and humane to assume that, given Westerners spent a century gaining IQ points due to the Flynn effect, and given that the low quality of life in the West changed radically over spans of centuries or decades, one group currently doing poorly on IQ tests and living in poverty has the potential to change just as drastically. Any pessimism about their prospects can surely be more strongly justified by citing current and historical economic, political, social, and environmental trends, as well as unprecedented possible events like existential threats.
Taboo “side.” Complex empirical issues do not have sides. Humans, for their own non-truth-tracking reasons, group into sides, but it’s not Bayesian, and it has never been Bayesian.
Or we think we group up into sides, but I’m not even sure that’s true. You write that the egalitarians are nuanced and present evidence, whereas the human biodiversity crowd (or whatever words you want to use) are just apologists for their favorite narrative, but there are a lot of people who have the exact opposite perspective: that the hbd-ers are honest and nuanced and the egalitarians are blinded by ideology. But in fact, there are no sides physically out there: rather, there are only various people who have studied various facets of the topic to various degrees and who believe and profess various things for various reasons. And this question of what various people believe is distinct from the question of what’s actually true.
I realize that this kind of aggressive reductionism isn’t very predictively useful—that indeed, I’m probably just a few steps above saying, “Well it’s all just quarks and leptons anyway.” But sometimes it is worth saying just that, if only to wrench ourselves free of this adversarial framing so that we can actually look at the data.
Humaneness is central to policy, but it should have nothing to do with our beliefs.
Upvoted, because you make the case well that we shouldn’t identify with sides when discussing issues like this.
But you’re not really using “Taboo” in the sense that Eliezer described. “Sides” do exist as social phenomena. They are a certain sort of coalition that people group into when they engage in public discourse. As you say, sides exist for non-truth-tracking reasons. However, like race, we need the concept of sides to talk about social dynamics, so, like race, sides exist.
(Of course, they exist as nothing more than certain configurations of the pieces of the stuff out of which reality is made.)
Briefly—since this is getting off topic—if anyone is interested, my views on the matter are here: http://timtyler.org/political_correctness/
“The lower average test scores of Africans is surely an undisputed scientific fact.”
Yes, but most interpreted him to be claiming that their genes prevented them from attaining equal test scores. This is definitely disputed.
http://cscs.umich.edu/~crshalizi/weblog/494.html http://cscs.umich.edu/~crshalizi/weblog/495.html http://www.cscs.umich.edu/~crshalizi/reviews/flynn-beyond/
It was Watson—not those with reading comprehension problems among his audience—who wound up out of a job.
I’m not expressing an opinion on whether Watson’s statements were based on evidence.
Why? To have a reasonable measure for the intelligence of their children.
Heard of regression to the mean? An unusually smart person from a background of dumb hick proles is not going to have kids nearly as smart as they are.
People understand this at a genetic level when picking out mates. “Meet the family” before you decide to keep the girl/guy.
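The standard formalization of this is the breeder’s equation; here is a minimal sketch, where the narrow-sense heritability h² = 0.5 is purely an illustrative assumption, not a claim about the true value:

\[
E[\text{child}] \;\approx\; \mu + h^2 \,(\text{midparent} - \mu),
\]
\[
\text{e.g.}\quad \mu = 100,\ \text{midparent} = 140,\ h^2 = 0.5 \;\Rightarrow\; E[\text{child}] \approx 100 + 0.5 \times 40 = 120.
\]

The expected child lands part of the way back toward the population mean, which is exactly the regression described above (with variance around that expectation in both directions).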
Probably not terribly affected, as he has plenty of information about his intelligence screening off (EDIT: as steven says, this is not the correct term) his ancestry, and he AFAIK never claimed black people to be ‘inferior’ in any sense other than lower average IQ.
I disagree.
Is this it? many ungated versions.
This seems more the opposite. It says,
-wtf am i reading-
that “1/6” black came from a funky genetics test.
A reasonable test for percentage of blackness is going back, say, about 5 generations and seeing how many black people are in his ancestry.
There were none, since his parents/grandparents, yada yada, came from Europe.
Wonder how he felt about it? He understood it was a bullshit media piece.
Is there a genetic funkiness test?
The first link is a 404. Maybe try here:
http://thelede.blogs.nytimes.com/2007/12/10/watsons-black-dna-ultimate-irony/
That is what some people mean. Others truly believe there are literally no differences between human populations apart from skin color and bone structure, and of course culture.
Yes, there are no doubt some people who believe that.
The point I’ve heard, back when I pressed a professor on the rather absurd statement “race is not genetic,” is that if you were simply to look at a bunch of human genomes, and compare them to one another, and rank them by their similarities and differences, the genes controlling race probably wouldn’t occur to you. This seems quite plausible.
Rather than refusing to try to be consistent in my own beliefs, I find it far more useful to notice what kinds of beliefs most people don’t really take seriously enough to be clear about what they mean, to bother to follow through with the most simple sorts of implications, and so on.
Another relevant fact is that, for most of Islam’s history, Islamic nations were militarily equal or superior to anyone that they were likely to come into contact with. Islam was a religion founded by conquerors, not by the conquered, and being in a position of profound weakness compared to Western (Christian/Jewish/secular) civilization is something that’s simply never happened to them before. Radical Islam could very well be simply the Islam of the fourteenth century faithfully reproduced in the modern era, and the fact that it tends to involve suicide bombings instead of conquering armies is a matter of circumstance rather than ideology. I suspect that, if the Christianity of the fourteenth century, or the Judaism of the first century, were to be faithfully reproduced today, it would be equally horrifying.
I’m not so sure. One point Sam Harris has made (can’t find the source atm) is that the Lebanese are in roughly the same position with respect to Israel as the Palestinians, but the Lebanese are predominately Christian rather than Muslim, and commit almost no terrorist acts. Harris argues that it’s like a lab experiment where you put two oppressed peoples next to each other, but with different religions and watch what happens.
Sam Harris actually specifically cites Palestinian Christians. (Who do exist.)
Well, maybe not equally horrifying, but still horrifying. You might not be aware of this, but the infamous “Spanish Inquisition” was a relatively humane law enforcement organization when compared to standard practices in the rest of Europe.
Is there any reason to think Lebanese Christians are more similar to 14th century christians than to modern western christians?
Yes. The original poster’s statements about the benign nature and gentleness of early Christianity do not reflect its history: e.g., the death penalty for those who refused to convert, the burning of all the books they could find, and, later on, the slaughter of tens of thousands in Jerusalem during the Crusades.
On the other hand, Judaism of the first century was hardly at the peak of its power or horror. I seem to recall a penalty of death being declared for those who didn’t follow the command to genocide those they conquered: elderly, women, children, and babies alike. Come to think of it, livestock may well have been included as well. I don’t think they had the power or inclination for that sort of thing in the first century.
IIRC, the position of the Catholic Church is that the death and resurrection of Jesus fulfilled the Covenant and freed humans from the obligation to live according to the Jewish law of the Old Testament. In other words, sometimes the blind spots are explicitly acknowledged and handwaved away instead of being overlooked.
Good point. Protestants also say that. Although note that Christians sometimes cite Old Testament commandments as if they still applied today. Even “Be fruitful and multiply”, which was just for Adam & Eve. Also note that for many years the Catholic Church demanded obedience to the commandment not to charge interest on loans, which is an Old Testament commandment. Ironically, primarily (only?) Jews charged interest on loans.
Well, the command not to charge interest on loans in the Old Testament applied only within your own people: e.g., a Jew shouldn’t charge interest from a fellow Jew, but he could charge interest from non-Jews as much as he liked.
Now, the Christians view themselves as the “new chosen people”, so they couldn’t charge interest from each other, so the banking system had to be performed by Jews, who could—in clean conscience and following their religious beliefs—loan money and charge interest from non-Jews (Christians).
In short, the whole “irony” is lost once you actually study the specific commandments and the historical context of the described situation.
Is this an example of Lampshade Hanging?
http://tvtropes.org/pmwiki/pmwiki.php/Main/LampshadeHanging
Maybe closer to an attempted conversion from Fridge Logic into Fridge Brilliance.
So, there is a hidden component in levels of belief: together with stated level of certainty, bland “truthiness” of a statement, there is also a procedural perspective, with the statement applying with different power in different contexts. This more nuanced level of belief is harder to see and harder to influence: take “belief in belief” as a special case; on one hand there is certainty, on the other it refuses to speak of the real world.
Compartmentalization seems to be the default method for managing “quoted” beliefs: instead of keeping track of what evidence there is for what, just start directly believing everything, but in narrow contexts. If the facts check out, collections of new pieces of knowledge pass coherence checks and gain influence. Insanity remains in the quarantine indefinitely, and even if within its crib it calls the shots, it is a mistake to interpret it as accepted by the person as a whole. When an aspect of most people is insane, it is so by design, part of the never-ending process of reevaluation.
This mechanism is also probably what’s responsible for people not even caring to distinguish positive assertions from the negative ones. The natural mode is to just amass impressions of facts, by adherence to each other rather than in their original forms, with levels of certainty simply reporting how well the new statement fits in.
Related: Nerds Are Nuts
Thanks for that link. Razib is very impressive as usual.
Phil,
Well written and thought provoking. Reading this, I was reminded of a Douglas Adams essay/speech about Balinese rice farmers and the way their religion is highly suitable to growing rice. The gods that they cite as reasons for this or that aren’t necessarily real and some of the practices may actually be useless, but the end product is a very successful harvest. You might ask a rice farmer why he decided to plant this plant here. His answer could involve some custom that if the moon does this and the chickens do that, I need to put a plant here. That’s obviously silly, but it doesn’t mean the plant shouldn’t be there. The customs and beliefs are the basis for how they do things, and how they do things is good for growing rice.
I went back and reread the essay and noticed that I remembered it a little wrong. I also noticed that this isn’t just some interesting overlap between what you and he are thinking about. What you call the memetic immune system he calls an “artificial god”. Actually, I think that your concept is a subset of the artificial god. You seem to assume his position of the artificial god and use it to construct this immune system idea. I think that you would enjoy the piece: http://www.biota.org/people/douglasadams/
“The conservatism of a religion—its orthodoxy—is the inert coagulum of a once highly reactive sap.” -Eric Hoffer, The True Believer
Love your post: religion as virulent nam-shub. See also Snow Crash by Stephenson.
Quick tip: HTML doesn’t work in the comments, but you can make italics by putting asterisks (*) around the thing to be italicized. There should be a “Help” link below the comment window that will unfold a list of markups.
So there is. You know I had never noticed that!
It’s not a complete guide—longer tutorials are available elsewhere—but it has the things you usually need.
I take it, the author doesn’t know many Orthodox Jews..?
Those aren’t actually how orthodox Jews interpret the rules, or apply them nowadays. Tassels are only on very specific articles of clothing, which are hidden under people’s shirts, I’m not even sure what “tying money to yourself” is about, adulterers are only stoned if the temple stands and only under nearly-impossible to satisfy conditions, trees less than 5 years old are only considered a biblical problem in Israel, and if you’re unsure, the fruit is allowed in the rest of the world, and the ritual purity laws don’t apply in general because everyone is assumed to be contaminated anyways.
This also has another side. If individual or cultural selection favors those who don’t try to actually do what a religion or cultural norm tells them to do, what happens next depends on which variables are held constant. If the culture is constrained to hold constant the religion or cultural norms, then the resulting selection will cause the culture to develop blind spots, and also develop an unspoken (because unspeakable) but viciously enforced meta-norm of not seeing the blind spots. But if the culture is constrained to hold opposite meta-norms constant, such as a norm of seeing the blind spots or a norm of actually doing what one’s religion or cultural norms tell one to do, then the resulting selection will act against the dangerous memes instead. This would make the culture safer for truth-seeking, and make the dividends of truth-seeking easier to pursue.
(Sometimes I worry about the problem of how to extend the principle of charity to memes that cannot be safely taken literally.)
My answer is to judge them by the success of the actions they lead their practitioners to take, not by the falsifiable (or deliberately unfalsifiable) claims about reality they espouse.
This sounds like Burke for the 21st Century,
“prejudices and prescriptions and presumptions are the instruments which the wisdom of the species employs to safeguard man against his own passions and appetites.”
I suppose this can also explain why new cults, from Born-again Christians to the Scientologists and extreme environmentalists, seem so much more harmful than the boring old Church of England and the like. How fast do we think these counter-beliefs can arise?
I don’t know; but we should factor in historical context when examining the record. I mentioned that religions of subjugated people are kinder; on the flip side, religions that have political power are often harmful, even if old. You can’t look at Catholicism in the middle ages as being just a memetic system. Politics produces results that memetic theory wouldn’t predict.
Great post, thanks, upvoted.
So most any value-core will go evil if allowed to unfold to its logical conclusions. This sounds correct to me, and also it sounds just like the motivation for FAI. Now your argument that humans solve this problem by balanced deterrence among value-cores (as opposed to weighing them together in one utility function) sounds to me like a novel intuition applicable to FAI. We have some researchers on the topic here, maybe they could speak up?
When you make every part of a balanced system more powerful without an overseeing process maintaining balance you don’t get a more powerful balanced system, you get an algae bloom.
Why without? We can put an overseeing process in. It probably doesn’t have to be very smart—after all, the overseeing process for humans is pretty stupid compared to a human.
An interesting observation! An objection to it is that this approach would require your AI to have inconsistent beliefs.
Personally, I believe that fast AI systems with inconsistencies, heuristics, and habits will beat verifiably correct logic systems in most applications, and will achieve general AI long before any pure-logic systems. (This is one reason why I’m skeptical that coming up with the right decision logic is a workable approach to FAI. I wish that Eliezer had been at Ben Goertzel’s last AGI conference, just to see what he would have said to Selmer Bringsjord’s presentation claiming that the only safe AI would be a logic system using a consistent logic, so that we could verify that certain undesirable statements were false in that system. The AI practitioners present found the idea not just laughable, but insulting. I said that he was telling us to turn the clock back to 1960 and try again the things we spent decades failing at. Richard Loosemore gave a long, rude, and devastating reply to Bringsjord, who remained blissfully ignorant of the drubbing he’d just received.)
That fellow Bringsjord seems to me an obvious kook, e.g. he claims to have proven that P=NP.
He claims to have an argument that P=NP. He’s a philosopher, so “argument” != proof. Although approaching P=NP as a philosophical argument does strike me as kooky.
Better proof of kookhood is that he was at AGI mainly to present his work on hypercomputing, which he claimed was a computational system with more power than a Turing machine. One element of his argument was that proofs using hyperset logic (which he said is an entire field of logic nowadays; I wouldn’t know) use a notation that cannot even theoretically be represented by a Turing machine. These proofs were published in two-dimensional journal articles, in black-and-white print. I did not notice any fractal fonts in the proofs.
If it’s this argument, it’s wrong. It is based on the claim that soap films solve the Steiner problem, which they don’t. I tried this myself for four pins; here is a report of six-pin soap-film configurations. The soap film, obviously, only finds a local minimum, not a global one. But finding a local minimum is computationally easy.
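To make the local-versus-global point concrete, here is a minimal numeric sketch in Haskell. (The closed-form length w + √3·h for the “double-Y” Steiner topology over the corners of a w-by-h rectangle is a standard result; the particular rectangle is my own choice for illustration.) A soap film on four pins at the corners can settle into either bar orientation, and both are stable, but only one is the global minimum:

```haskell
-- Length of the "double-Y" Steiner tree over the corners of a w-by-h
-- rectangle, with the central bar parallel to the sides of length w.
-- The formula is valid whenever w > h / sqrt 3, so for a 1.2-by-1.0
-- rectangle both bar orientations exist as stable configurations.
doubleY :: Double -> Double -> Double
doubleY w h = w + sqrt 3 * h

main :: IO ()
main = do
  let (w, h) = (1.2, 1.0)
  putStrLn $ "bar along the long side:  " ++ show (doubleY w h) -- ~2.93, global minimum
  putStrLn $ "bar along the short side: " ++ show (doubleY h w) -- ~3.08, merely local
```

A film caught in the second configuration has “solved” nothing global; and finding a local minimum like that is computationally easy.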
Elsewhere, in a paper that detracts from the credibility of the journal it appears in, he argues that people can perform hypercomputation, on the grounds that we can imagine people performing hypercomputation. (Yes, I read all 24 pages, and that’s what it comes down to.)
Judging by Google, the only wide use of the word “hyperset” in mathematics is in non-well-founded set theory. If that is what he was talking about, it’s equiconsistent with the usual sort of set theory and has no more significance for AI than the choice of programming language (which, in my view, has no significance for AI).
What is it with AI? Does it attract the insane, or does it drive them insane? ETA: Or attract the people that it can drive insane?
Oh… This is sad work (Bringsjord). His argument for hypercomputation by people seems remarkably similar to Alvin Plantinga’s Modal Ontological Argument for God.
I am also suspicious of much of what Penrose has to say about computationalism, although I am not yet sufficiently knowledgeable to directly confront his work in any meaningful way. (I am working to rectify that problem. I seem to have a knack for formal logic, and I am hoping that when I get to upper-division logic classes I will be able to confront arguments like Penrose’s and Bringsjord’s more directly.)
I came across a Wikipedia article on hypercomputation a while back (http://en.wikipedia.org/wiki/Hypercomputation); the whole theory doesn’t seem at all well supported to me.
It is a field with an imaginary object of study.
It would be nice, though, if outsiders could show some respect by demonstrating, as is probably demonstrable but difficult, that its object of study is incoherent, not just imaginary.
I’m not really sure it makes sense to talk about mathematical objects as being imaginary but not incoherent.
I’d be very surprised if this Universe was super-Turing, but you think it’s actually incoherent? I can definitely conceive of a hypercomputational cellular automaton; what is it about the idea of our Universe being hypercomputational that seems incoherent to you?
I think that it is very common for things that we casually think we can definitely conceive of to actually be incoherent. I also think that almost everyone else underestimates how common it is.
I think I’m correcting for that. Do you agree that the halting oracle function itself is well-defined? If so, what seems inconceivable about a cellular automaton whose rules depend on the output of that oracle? OK, you have to stretch the definition of a cellular automaton to allow it, perhaps by allowing cells to have unbounded state, but the result is a wholly defined and therefore surely in-principle-conceivable Universe which is super-Turing. No?
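To spell out what I have in mind (my own notation; this is just a sketch of the kind of object in question, not anything from the hypercomputation literature):

$$
h(\langle M, x \rangle) =
\begin{cases}
1 & \text{if Turing machine } M \text{ halts on input } x,\\
0 & \text{otherwise,}
\end{cases}
\qquad
s_i(t+1) = f\bigl(s_{i-1}(t),\; s_i(t),\; s_{i+1}(t),\; h(s_i(t))\bigr),
$$

where each cell’s (unbounded) state $s_i(t)$ encodes a machine–input pair. The function $h$ is a perfectly well-defined set-theoretic object; it just isn’t computable, so the automaton is fully specified even though no Turing machine can simulate it.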
Respectful outsiders?
Is that a reference to the inner sanctum of the Hypercomputation sect? ;-)
It’s not incoherent. There could be such a thing as Hypercomputation.
However, nobody has found any evidence that it exists so far—and maybe they never will.
Hypercomputation enthusiasts claim that its existence doesn’t matter too much—and that it’s a valuable concept regardless of whether it exists or not. Maybe.
I don’t disagree (i.e., I don’t see any positive reason to doubt the coherence of hypercomputation – though Michael sounds like he has one), but remember not to confuse subjective conceivability and actual coherence.
And, now I see why I am skeptical of hypercomputation. It seems to all necessitate some form of computation over an infinite number of steps. This would require some severe bending of the rules or constraints of physics, wouldn’t it?
timtyler’s comment below mine seems to be appropriate:
Doesn’t Newtonian gravity require computation over an infinite number of steps?
Hah! I just came across your comment, Phil :-) I was “Rude”?
Hey, you were sitting next to me, and egging me on by saying “No it isn’t” quietly to yourself every time Bringsjord tried to assert his (nonsensical) claim.
But anyway. I’d claim that I was not rude, really. Bringsjord kept interrupting my attempts to ask my question with loud, almost shouted comments like “If you really think that, I feel sorry for you: you really need to go back and try to get a grasp of elementary logic before you ask me questions like this!!”
So I got a little … testy. :-) :-)
I really wish someone had recorded that exchange.
An AI doesn’t have to have a purely logical structure (let alone a stupid one, e.g. structureless predicates for tables and chairs) in order to be able to logically prove important things about it. It seems to me that criticism of formally proving FAI by analogy to failed logical AI equivocates between these things.
Does “will beat” mean “will be developed first” or “will be more capable than”?
Selmer doesn’t understand LOTS of things that Eliezer understood at age 12. He’s superficially similar, but it’s a very superficial similarity.
Could be correct or wildly incorrect, depending on exactly what he meant by it. Of course you have to delete “the only”, but I’d be pretty doubtful of any humans trying to do recursive self-modification in a way that didn’t involve logical proof of correctness to start with.
One of the big problems is that he was trying to talk about the logical correctness of human-level symbolic statements about the world. Even if the logic is correct, there is no correct, consistent mapping from the analog world to symbolic descriptions and back. A mapping that’s close enough to work 99.99% of the time isn’t good enough when you’re talking about proof.
Companies are the self-improving systems of today—e.g. see Google.
They don’t hack the human brain much—but they don’t need to. Brains are not perfect—but they can have their inputs preprocessed, their outputs post-processed, and they can be replaced entirely by computers—via the well-known process of automation.
Do the folk at Google proceed without logical proofs? Of course they do! Only the slowest and most tentative programmer tries to prove the correctness of their programs before they deploy them. Instead most programmers extensively employ testing methodologies. Testing is the mantra of modern programmers. Test, test, test! That way they get their products to the market before the sun explodes.
As Eliezer has already shown, “test, test, test”ing AIs that aren’t provably Friendly (i.e., whose recursive self-modification provably leads to Friendly results) can have disastrous consequences.
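To illustrate why the two approaches come apart, here is a toy sketch (assuming only the standard QuickCheck library; the function and the magic constant are invented for the example). Random testing samples the input space, so a property can pass every test while still being false. That is tolerable for a web product; it is catastrophic for a recursively self-modifying AI:

```haskell
import Test.QuickCheck

-- A deliberately buggy function: wrong on exactly one rare input.
absish :: Int -> Int
absish n = if n == 123456789 then -1 else abs n

-- The property we would like to hold for every input.
prop_nonNegative :: Int -> Bool
prop_nonNegative n = absish n >= 0

-- quickCheck tries 100 random inputs and will almost certainly pass,
-- so the bug ships; a proof of the property would fail, exposing it.
main :: IO ()
main = quickCheck prop_nonNegative
```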
I’d rather wait until the sun explodes than deploy an unFriendly AI by accident.
The consequences of failing to adopt rapid development technologies when it comes to the development of intelligent machines should be pretty obvious—the effect is to pass the baton to another team with a different development philosophy.
Waiting until the sun explodes is not one of the realistic options.
The box experiments seem irrelevant to the case of testing machine intelligence. When testing prototypes in a harness, you would use powerful restraints—not human gatekeepers.
What powerful restraints would you suggest that would not require human judgment or human-designed decision algorithms to remove?
Turn it off, encase it in nanofabricated diamond, and bury it in a deep pit. Destroy the experimental records, retaining only enough information to help future, wiser generations to one day take up again the challenge of building a Friendly AI. Scatter the knowledge in fragments, hidden in durable artifacts, scatter even the knowledge of how to find the knowledge likewise, and arrange a secret brotherhood to pass down through the centuries the ultimate keys to the Book That Does Not Permit Itself To Be Read.
Tens of thousands of years later, when civilisation has (alas) fallen and risen several times over, a collect-all-the-plot-coupons fantasy novel takes place.
Want to restrain a man?
Use a facility designed by the government with multiple guards and built with vastly more resources than the imprisoned man can muster.
Want to restrain a machine?
You use the same strategy. Or you could use drugs, or build in a test harness. Whatever—but however you look at it, it doesn’t seem like a problem.
We can restrain individuals pretty securely today—and there is no indication that future developments are going to change that.
What’s with the question about removing restraints? That isn’t a problem either. You are suggesting that the imprisoned agent contacts and manipulates humans “on the outside”—and they attempt a jail-break? That is a strategy available to other prisoners as well. It has a low success rate. Those few that do escape are typically hunted down and then imprisoned again.
If you are particularly paranoid about escaped prisoners, then build a higher security prison. Typically, you can have whatever security level you are prepared to pay for.
The hypothetical AI is assumed to be able to talk normal humans assigned to guard it into taking its side.
In other words, the safest way to restrain it is to simply not turn it on.
And not just by persuading the guards: the kind of AIs we are talking about, transhuman-level AIs, could potentially do all kinds of mind-hacking things of which we haven’t yet even conceived. Hell, they could do things that we will never be able to conceive of unaided.
If we ever set up a system that relies on humans restraining a self-modifying AI, we had better be sure beforehand that the AI is Friendly. The only restraints I can think of that would provably work involve limiting the AI’s access to resources so that it never achieves a level of intelligence equal to or higher than human; but then we haven’t quite made an AI, have we? There’s not much benefit to a glorified expert system.
If you haven’t read the AI Box experiment reports I linked above, I recommend them—apparently, it doesn’t quite take a transhuman-level AI to get out of a “test harness.”
You don’t use a few humans to restrain an advanced machine intelligence. That would be really stupid.
Safest, but maybe not the only safe way?
Why not make a recursively improving AI, in some strongly typed language, that provably can only interact with the world through printing names of stocks to buy?
How about one that can only make blueprints for star ships?
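Here is a minimal sketch of the type-discipline idea in Haskell (module and function names are mine; this is an illustration, not a serious containment proposal). Code written against this interface can affect the world only by emitting stock names, because the Trade constructor is hidden:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Trade (Trade, buy, runTrade) where

-- The constructor is not exported, so client code cannot wrap
-- arbitrary IO actions; 'buy' is the only effectful primitive
-- available to anything importing this module.
newtype Trade a = Trade (IO a)
  deriving (Functor, Applicative, Monad)

-- The single permitted interaction with the world.
buy :: String -> Trade ()
buy name = Trade (putStrLn ("BUY " ++ name))

-- Only the trusted harness runs a strategy. Any value of type
-- 'Trade a' built outside this module can only ever print stock names.
runTrade :: Trade a -> IO a
runTrade (Trade io) = io

-- Example strategy, typable only because it sticks to 'buy':
--   strategy :: Trade ()
--   strategy = buy "AAPL" >> buy "MSFT"
```

Of course, this constrains the channel, not what comes through it: the interesting question is whether printing well-chosen stock names (or starship blueprints) is already enough leverage to be dangerous.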
We might say that humans as individuals do recursive self-modification when they practice at a skilled task such as playing football or riding a bike. Coaches and parents might or might not be conscious of logical proofs of correctness when teaching those tasks. Arguably a logical proof of (their definition of) correctness could be derived. But I am not sure that is what you mean.
Humans as a species do recursive self-modification through evolution. Correctness in that context is survival and the part under human control is selecting mates. I would like to have access to those proofs. They might come in handy when dating.
Those are first-order self-modification, not recursive. Learning better ways to modify yourself, or better things to modify yourself towards doing, would be second-order self-modification. ISTM that it would be very difficult to do anything more than a third-order self-modification on our current wetware.
Although our current platform for self-modification is extremely flexible, and almost anything stored in it can be changed or deleted, we can’t make modifications to the platform itself … which is where the “recursive” bit would really come into play.
(That having been said, most people have barely scratched the surface of their options for 2nd and 3rd order self-modification, recursive modification be damned.)
Your examples are all missing either the ‘self’ aspect or the ‘recursive’ aspect. See Intelligence Explosion for an actual example of recursive self-modification, or for a longer explanation of recursive self-improvement, this post.
I found those links posted above interesting.
I concede that the human learning process is not at all as explosive as the self-modifying AI processes of the future will be, but I was speaking to a different point:
Eliezer said: “I’d be pretty doubtful of any humans trying to do recursive self-modification in a way that didn’t involve logical proof of correctness to start with.”
I am arguing that humans do recursive self-modification all the time, without “proofs of correctness to start with” - even to the extent of developing gene therapies that modify our own hardware.
I fail to see how human learning is not recursive self-modification. All human intelligence can be thought of as deeply recursive. A playFootBall() function certainly calls itself repeatedly until the game is over. A football player certainly improves skill at football by repeatedly playing football. As skill sets develop, human software (and its instantiation) is being self-modified through the development of new neural networks and muscles (e.g., marathon runners have physically larger hearts). Arguably, hardware is being modified via epigenetics (phenotypes changing within narrow ranges of potential expression). As a species, we are definitely exploring genetic self-modification. A scientist who injects himself with a gene-based therapy is self-modifying hardware.
We do all these things without prior proofs of correctness, and yet we still make improvements. I don’t think we should ignore the possibility of an AI that destroys the world; I am very happy that some people are pursuing a guarantee that it won’t happen. But it is worth noting that the process that will lead to provably Friendly AI seems very different from the one that leads to not-necessarily-so-friendly humans and human society.
You will be right about it being genuine recursive self-modification when genetics advances sufficiently that a scientist discovers a gene therapy that confers a significant intelligence advantage, and she takes it herself so that she can more effectively discover even more powerful gene therapies. We’re not there yet, not even remotely close, and we’re even further away when it comes to epigenetics.
Your football example is not recursive self-modification, but the genetics examples would be if they actually come to pass. You’re right that if it happened, it would happen without a proof of correctness. The point is not that it’s impossible without a proof of correctness, but that it’s irresponsibly dangerous. If a single individual recursively self-improved his intelligence to the point that he was then easily able to thoroughly dominate the entire world economy, how much more dangerous would it be for a radically different kind of intelligence to reach that level at a rate of increase that is orders of magnitude greater? It depends on the kind of intelligence; in particular, unless we want to just “hope for the best” and see what happens, it depends on what we can prove about the particular kind of intelligence. Wanting a proof is just a way of saying that we want to really know how it will turn out, rather than hope and pray, or rely on vague, gap-filled arguments that may or may not turn out to be correct. That’s the point.
Steve Omohundro has given several talks about the consequences of a purely logical or rationally exact AI system.
His talk at the 2007 Singularity Summit, “The Nature of Self-Improving AI,” discussed what would happen if such an agent were to have the wrong rules constraining its behavior. I saw a purely logical system as one possible agent type to which he referred.
Not only does this work when someone is trying to follow his religion rationally, it also works when someone is trying to follow rationality rationally.
In other words, not only can becoming more rational lead you to discard the cultural antibodies to religion without discarding the religion, becoming more rational can lead you to discard the cultural antibodies to all sorts of crazy ideas without discarding the crazy ideas. It works for non-religious crazy ideas as well as for religious ones. It even works when the crazy ideas are themselves “rational”.
(Of course, if you’re perfectly rational you would have no crazy ideas, unlike being perfectly religious, where you’d have lots of them, but human beings aren’t perfect. And imperfectly-but-more than-you-were-before rational can lead to crazy ideas.)
I’ll leave it as an exercise to the reader to decide which ideas that are popular here and are considered crazy by average people fall into this category—I’m sure everyone has a different list.
But conversely, Christianity became a lot more militaristic when it became the state religion. Listen e.g. to Dan Carlin’s Hardcore History podcast Thor’s Angels (free as of 01/2014; 4h long).
What about e.g. the fatwa over Salman Rushdie’s Satanic Verses, or blasphemy laws, or whatever? This theory doesn’t seem consistent with already known facts.
My guess is that his name is “A. J. Jacobs” and not “A. J. Acobs”
Really?! That’s fascinating.
Does anyone have any cite-able examples of the above?
As a Christian who is pretty familiar with the history of Christianity (less so with Islam, and embarrassingly ignorant as to Buddhist thought), I would suggest that perhaps the point on adult converts being radical needs some nuance.
From a Christian perspective, the A.J. Jacobs experiment is intended to make any religion look idiotic, due to a very woodenly literal interpretation of what it means to follow the commands of the Old and New Testaments.
Although there may be some adult converts who do such actions, this seems pretty abnormal, and although adult converts may be marked by more sincerity, claiming they are marked by being more radical in a viral sense seems entirely unsubstantiated.
Examples:
Francis Collins, current NIH director & sequencer of the human genome: he converted to Christianity as an adult doctor, and neatly integrated religious metaphysics with an impeccably scientific worldview. I’m not aware of him attempting to wear tassels or stone people.
Augustine of Hippo: Adult convert from Manichaeism (a classical-era dualistic religion) to Christianity. He went from leading a sexual-pleasure-driven life to being one of the best-known philosophers in human history (regardless of how you view his work). Certainly a drastic change in life direction, but his ideas were anything but extreme, except perhaps for being unusually egalitarian for his era.
Feedback aside, your point about cultural blind spots is a good reminder. :) Thank you.
Interesting. So I guess the idea is that we have immunities that ‘wrap’ memes so toxic that our incomplete rationality might be subverted by them, and ensure that those ideas simply don’t interact with the rest of it? So rational thinking is not allowed to attack them, but neither are they ‘allowed’ to extend their influence?
I’ve found “be very careful when you find yourself deviating from everyone around you” to be excellent advice, and I apply it whenever I can: the first time I deviate in a certain way, that is, after which the deviation gets filed under confirmed ways in which I act differently. That seems to keep the cost of deviating manageable, even for the quite abnormal.
I agree with everything you say, but you vacillate between two somewhat contradictory positions: that the default is to have disconnected beliefs, or that the default is to have particular “antibodies” preventing action on particular “beliefs.” Could you elaborate on this?
I do agree that both are important phenomena. I think the default is disconnected beliefs. I’m not clear on the prevalence and role of “antibodies.” Maybe they’re just for over-verbal nerds infected with the Enlightenment. But I think they’re more general.
“Antibodies” is a vague metaphor, by which I meant any aspect of your decision process that blocks or sidetracks a dangerous chain of reasoning. I didn’t think about whether these blocks were active responses, or passive omission of a justified inference (e.g., disconnected beliefs).
It operates as a metaphor by suggesting co-evolutionary dynamics as a way of looking at the problem. It’s not a valid metaphor for trying to figure out the exact mechanism.
voted up for backing away from the details of the metaphor rather than trying to justify them. Not always an easy choice.
As it stands now, it’s all omitted inference. But I think the monk is the default—almost all inferences are omitted. If that’s the default, I think drawing attention to them and calling them “antibodies” is a figure-ground error. (But maybe you don’t think it’s the default.)
I might talk about co-evolution, not between beliefs and blind spots, but between actions and excuses. The excuses can’t be too incoherent, because some people pay some attention to them. What I took to be “antibodies” were elaborate excuses, excuses for not drawing inferences between the first-order excuses, but I think the race example was the only example you gave of this. Maybe these are rare and most people just use first-order excuses for what they do, not excuses for why they don’t actually follow the first-order excuses.
Maybe the default is disconnected beliefs and actions driven by imitation. New religions tell people that they shouldn’t base their actions on imitation of their local authorities, forcing them back on nominal beliefs and forcing them to make inferences.
Why don’t they just imitate the missionary? Surely, the missionary communicates “be like me,” not “be different from them”? I guess it could be only the over-verbal converts who notice that menstruating women have cooties. They might make good stories without being representative. But there is the general principle that converts are more observant; are they radically more observant, or do they merely find more observant people to imitate? (if the latter, why?)
Or maybe (just speculating) “I too am a sinner; I am merely a bringer of good news; look to God, not to me”.
A better metaphor than antibodies is probably vectors. The degree of compartmentalization in a person’s belief network is a feature of the memetic environment, equivalent to the concentration of population or the prevalence of vermin in the context of microbes. When people have a low degree of compartmentalization, memetic schemas take over many domains of thought, just as urban living (pre-sanitation) increased the spread of disease. I don’t think there is an obvious sense in which the degree of vectorization has a ‘default’, though, unless you just want to make it zero because that is convenient.
Don’t drink the Kool-Aid ;)
Prevalent communicable diseases usually don’t kill their hosts, because those that do (like Ebola) tend not to spread, and thus are not prevalent.
There have been multiple recorded instances of cults ending in mass suicide, as well as a gamut of other harms to adherents. Those that don’t implode from their virulence may spread and survive, eventually becoming old enough to graduate and be called “religions” (i.e. prevalent cults).
Sounds like exactly the same mechanism to me.
This article seems relevant: “Clever sillies: Why high IQ people tend to be deficient in common sense.”
The author argues that high IQ people solve problems by using abstract reasoning instead of evolved common sense. Moreover, general intelligence is mainly useful for solving evolutionarily novel problems, and can actually be a hindrance for problems which were a regular part of the evolutionary environment (for example, social situations). Hence, when facing problems where humans have evolved behavioral responses, smart people who apply abstract reasoning and override common sense often end up doing silly things.
Unfortunately, it’s by Bruce Charlton. I’ve noticed that whenever this hypothesis comes up, it seems to be solely used as a political cudgel to attack liberals—which means I trust the paper as far as I can throw it.
(Why is the ‘clever silly’ idea always used to attack things like diversity, and not equally abstract and historically unprecedented shibboleths of the right like untrammeled free markets?)
I also don’t understand why politics isn’t considered evolutionarily novel. There is a difference between 1) social organization of a small tribe and 2) management of a polity containing large institutions and thousands to millions of people.
As far as I can tell, no one considers tribal political affiliation desirable.
I would recommend skipping the section on political correctness. I do think the first two sections give a good lesson on how a little reason can be a dangerous thing.
Looks like he got hoist by his own petard.
I have read something very similar to this someplace else before reading this article. (On a side note: this is the very first article I ever read completely on Less Wrong, and had I not contracted H1N1 at the end of September, I would have joined Less Wrong at that time.)
I too have read The Year of Living Biblically, mostly so that I would have ready material should I ever have to talk to my Evangelical aunt and uncle, who are busy preparing a huge number of people in Texas for the Rapture and Second Coming of Christ (hoo boy?!?).
I seem to recall that the article I read about toxic memes in second- and third-world countries (I am almost positive “Toxic Memes” is the title) also mentioned that we in the West have developed memetic antibodies to things such as pornography and free speech, whereas people living in, say, Muslim countries lack the cultural antibodies necessary for defense against the memes in question. This causes a pathological relationship with the meme, resulting in things like blaming a woman for why a man raped her: to him, it was the pornography’s fault for suggesting to him that women are all sex objects.
Obviously, this is wrong, yet those who have been infected by a toxic meme are not going to understand why their understanding of the situation is wrong, any more than we would understand why an extraterrestrial culture might think it wrong for humanity to play with their hyperdimensional fecal matter (from another post I made, where I suggested it might be the case that some alien race has feces capable of opening up wormholes in space-time). The simple fact of the matter is that the memetic resistance is nonexistent.
Duh! It was an article by Jamais Cascio called “Toxic Memes.” (Strangely, it is also a supplement for a game published by a friend of mine in Texas.)
Your point about educated fundamentalists has been made elsewhere.
It’s possible that some of the geneticists merely think it is good to perform (to steal Hopefully Anonymous’s favorite word) such a belief, perhaps for Straussian reasons. Hence the popularity among many scientists of the replacement term “population.”
I am many years late to the conversation, but this is fascinating. I think this is a good framework for explaining why taking moral theories to their logical extremes so often yields unpalatable conclusions at odds with our moral intuitions: because the theories are overpowering our illogical cultural antibodies. It’s still a good read after 10+ years :)
I am skeptical of any epistemics that conflate memetic survival with truth, even weakly. Mostly because acting on certain beliefs can destroy evidence for other beliefs; partly because I can think of no reason that all truths should intersect with anthropocentrism. An example of the former might be the destruction of Native American agricultural and hunting techniques by the destruction of the environment. An example of the latter might be, more contentiously, natalism vs. antinatalism: if antinatalism is true, it still loses memetically, simply because it self-selects itself away. So I don’t think your heuristics are robust enough, and probably we need people to probe the boundaries of reason from time to time. It would be good to reduce the social costs of this, but that’s different from epistemically consolidating around a small bundle of high-equity beliefs and never risking anything on outlandish but potentially important beliefs. That sort of epistemic conservatism seems both exploitable in the long run and also a kind of death spiral.
That’s the point. In (protestant) Christianity, the old law was a standard that humans could never follow. But the old law had to be paid in blood. So God became human himself, so that he could pay in blood a law that only he could live up to.
It sounds like the author accidentally LARPed as an orthodox Jew.
That is a brilliant point. I also loved your description of the Buddhist monk taking questions from a Western audience. The image of incompatible knowledge blocks is a great one, and it actually makes a lot of sense of how various ideologically conditioned people are able to operate functionally.
The example that comes up for me is animal suffering. I believe that torturing animals in factory farms will one day be regarded as a moral evil on par with war, slavery, etc. While I refrain from meat, I have a blind spot for eggs and milk, and a still bigger blind spot for others’ eating meat. If I didn’t have the latter blind spot, I wouldn’t be able to function in society. I don’t go around consciously thinking of factory carnivores as moral monsters. Maybe if I were more rationally driven I would think this way, and that might be a very bad thing.
Maybe the true judo move is to learn how to include the practical rationality of when to compartmentalize in the rational calculus. Of course, we might not have such fine grained control over these unconscious aspects of our cognition.
Depends on what do you call “radical Islam”, but I think that a bit of study of Islam’s early history should disabuse you of that notion.
The OP did write:
Which I think acknowledges some of that early history. I assume what is said about Judaism has to do with the slaughter of the Canaanites, which is possibly more than half legendary, unlike the exploits of Islam, which happened in a much better-documented time.
In different times and places, Islam has sometimes been extremely fanatical, and at other times has received Jews who were driven out of Spain by the Inquisition and shown toleration of other ideas. The ups and downs have probably been due to many causes, but there really has been an awakening of Islamic fanaticism in recent decades, and in this case, at least, I think the OP’s thesis might account for some of it; the thinking was just a bit too loose and brainstormy. It is something of a puzzle to see so many Muslims combine Western education and the ability to function in a modern metropolitan setting with extreme fanaticism.
A peak of Christian holy-warring, torturing, and witch-burning came when Protestants set out to rid Catholicism of its many “irrational” (having no basis in the Bible) false sacraments, and between the Reformation and the Counter-Reformation, both Protestants and Catholics were studying the gospels more rigorously and attempting to weave them into a more logically justifiable structure.
I grew up in the 50s and 60s in a very loose sort of Methodist protestantism that was very unconcerned with issues like the literal 7 days of creation, with hell for sinners, with Satan going around tempting people, or with contesting evolution. It seemed to me only a few fossils insisted on all those sorts of things. For most, a general sort of largely “good Samaritan” morality seemed the most salient thing, and there was not much in society to challenge the general sexual mores of moderate protestantism.
I think it’s true generally that religions tend to “mellow” most of the time, especially when not particularly challenged, and to evolve into largely going through the motions; this is especially so if they lack a strong central structure like that of Catholicism or Mormonism, or like Islam in times when the idea of a caliph seemed remote. But many different things can contribute to a stirring of popular zeal and fanaticism. Both close contact with challenges from other belief systems and schooling that trains people to be more logical can contribute, and the result can be that a lot of ideas people had half-forgotten, and certainly didn’t apply rigorously, start to come to the fore.
I have listened on audio to both the Koran and the Bible, and admit that the Koran has a stronger, more consistent version of the meme that God loathes unbelievers and intends to torture them for eternity; but it is a part of Christianity too, and in Islam it has frequently faded into the background. Again, Islam, having been established last, is intensely aware of Judaism and Christianity, and rants against them specifically; but at times (especially the 16th and 17th centuries) there were more islands of toleration for Jews in the Muslim world than in Christendom. Constant assertions that Islam is uniquely inhumane are just the sort of thing that strengthens fanaticism. In criticizing religion, I’ve come to the conclusion that we should tell believers that to the extent they believe in God’s loathing and wanting to torture “infidels”, their beliefs present a real problem to others.
Does it work?
I am imagining a believer retorting that of course their beliefs present a real problem to others, and the bigger the problem the better. As indeed might a Christian accused of presenting a real problem to the rich, or a vegan accused of presenting a real problem to the meat industry, or an effective altruist accused of presenting a real problem to people buying $5000 bespoke suits.
The whole point of being an activist for any cause is to be a real problem to their opponents. To be told that they are a real problem is to tell them that they are succeeding.
I don’t think it’s puzzling. Examine your implicit assumptions: exactly which part of your worldview says that Western education and living in a city should be incompatible with religious fanaticism?
The issue with Islam is not that it’s “inhumane”, the issue is that it is naturally a totalitarian religion. Christianity says “render unto Caesar the things that are Caesar’s”, but Islam says no such thing. From an Islamic point of view there is absolutely no reason why politics should not be subservient to faith and indeed the Christian approach is routinely called schizophrenic.
Cultural development seems not to follow such orderly laws that we can use the word “incompatible” very often, if ever. But going to a Western university tends to promote individual thought over blind acceptance of whatever you were taught in childhood, and while someone who spent their life in some valley in Afghanistan or northern Pakistan, never exposed to different people, might imagine Westerners as cloven-hoofed devils, it is at least a reasonable point of view to suppose that going to school with Westerners could lessen that kind of visceral revulsion.
I’m a little taken aback, as “render unto Caesar the things that are Caesar’s” is an admonition to obey established political authority. “Render unto God what is God’s” is the possibly subversive part, though it’s only recommending obedience to a competing authority. Also, Christian Russia, especially around the time of Ivan the Terrible, was arguably the most totalitarian major state of its time (the main argument would be over China, I think). I believe Paul’s writings give ample admonitions to obey authority, and for slaves to obey their masters.
What would you say in the doctrines of Islam makes it “naturally a totalitarian religion”? I assume you have some analysis that leads you to that conclusion.
I think your main point—that selective application of rationality could be dangerous—is true. But the question then is how often is it dangerous? And in what way should we apply rationality? Should we not apply rationality because it could be dangerous? I think the article would have been much better if these questions were brought up, and addressed.
I get the sense that applying rationality is usually more good than bad. Although I don’t really know enough about radical religions to say if it’s true for them too.
You said, “The history of religions sometimes resembles the history of viruses. Judaism and Islam were both highly virulent when they first broke out, driving the first generations of their people to conquer (Islam) or just slaughter (Judaism) everyone around them for the sin of not being them. ”
I am not familiar with that history of early Judaism. Can you cite any references I can read about it? (I do admit I have not read the entire old testament, perhaps it’s in there?) By the way, I have heard that Roman Catholics are actively discouraged from reading either testament directly.
Have you read even the early books? The constant warfare and near-genocides engaged in until they built up an empire? Then you have, even much later, all the rebellions which prompted the Romans to raze the Temple and exile most of the Jews.
(An oddity I always found to be an example is that one of the lost books of the bible is titled ‘The Book of the Wars of the Lord’.)
In order for the Old Testament to be evidence of Jews acting genocidal, the Old Testament would have to be true to a sufficient extent. If it’s not, you don’t have Jews being genocidal, you have Jews telling stories about their ancestors being genocidal.
It was my impression that non-religious historians do not believe that the genocides described in the Old Testament as being done by Jews actually happened.
It was my impression that non-religious historians do not believe all the little stories and miracles. But I had not noticed that they entirely disbelieved accounts like the conquest of Canaan, or that they believed there was evidence indicating the Jews were pacifists who did not exterminate any local populations or engage in warfare. The archaeological evidence, to the extent that it can speak on the matter (it is going to be very difficult to investigate genocides from millennia ago when you exclude all available written evidence as possibly untrue), supports the accounts; the example that comes to mind is the burning of Jericho, although it’s disputed how well the observed destruction layer fits into the chronology.
Those stories most likely didn’t happen. Still, the fact that their religion is entirely dependent on those bloody stories says a lot about the ancient Hebrews’ priorities.
It seems fundamentally unfair to compare cases of religions whose people actually committed genocide to religions whose people tell stories about committing genocide.
This is especially so considering the original post here, which points out that people don’t actually follow all the commands of their religions and have blind spots about the religions not saying what they say. That applies to stories about genocide just as much as it applies to direct commands—you can reason all you want that someone who believes that fictional genocides were real and justified is as vicious as someone who actually commits genocide, but people’s minds don’t work that way. It’s entirely possible to think Biblical genocides are justified and have blind spots which would lead you not to commit genocide in any real-life situation.
(In fact, I’m not even sure I could call all the possibilities blind spots. If you believe genocide is only justified when commanded in person by God, is it really a blind spot to say “God doesn’t directly speak to anyone nowadays, so I won’t commit any genocide”?)
Current Israeli treatment of Palestinians is what we’d expect from a civilization not above committing genocide, even with no previous experience at the giving end of it.
If true believers don’t commit genocide because God isn’t telling them to do so, then their minds have a huge backdoor entrance for memetic infection. I once had an extended email debate with a pastor who told me that he didn’t believe God approved of slavery, but if it were proven to him that God did, he’d jump to support it.
If Israel wasn’t above committing genocide, there wouldn’t be any Palestinians.
This site is named “less wrong”. Equating people who believe stories of committing genocide with people who actually commit genocide is more wrong.
“Might someday commit genocide if they get messed up” is still incomparable to “has committed genocide”.
A nation doesn’t need to be completely wiped out to be a victim of genocide (see Armenians, Jews, Native Americans, all of which still exist).
I do get that actual genocide is qualitatively different from hypothetical, and even from potential genocide. What I was trying to say (and what I think the discussion was originally about) is that you are shaped by your heroes. Knowing that you admire Old Testament hero Joshua, mass murderer by divine command, provides a window into some of your values. Knowing that your civilization intentionally made up someone like Joshua in order to glorify him is even more cause for worry.
But you’d expect the perpetrator to try a little harder than Israel actually has.
Who’s to say anyone made up the story? I imagine it more like a game of telephone, where whatever incident started the story was unrecognizable after a few hundred years, until someone finally wrote it down.
And even if someone did directly make up parts of it, so what? Telling your child that if he stays up late the monster under the bed will eat him might not be the best of things to say to your kid, but it certainly doesn’t mean you think that being eaten is justified as a punishment for staying up late.
Agreed that making up the story only tells so much about the narrator. But so far we’re only considering the narrator.
Believing the story is what will change you. Until the day the kid knows any better, he will seriously believe that the universe has a punishment system that kills people for staying up late. If he’s upset at his little brother, he may try to trick him into not sleeping, hoping for him to be eaten. This may or may not poison their future relationship.
In a world where the kid grows up without being told the monster wasn’t real, he will tell the same story to his own kids. The first scenario will repeat itself, but this time the adult will mean it.
Edit: I just noticed this scenario contradicts my previous argument, where I suggested the narrator was more to blame than his gullible listeners. I feel more inclined to think otherwise now.
Upvoted for explicitly noticing and noting your confusion. One of the best things about Less Wrong is that noticing the flaws in one’s own argument is respected and rewarded. (As it should be, in a community of truth-seekers.)
Good for you!
If Omega told you that CEV approved of slavery—or a Friendly superintelligence—would you jump to support it?
I would.
ETA: of course, it is admittedly harder to fake a superintelligence …
Thanks. Whatever reading I did of the Old Testament was back when I was a teenager, which was long ago. I don’t remember how far I got; not very far. I was reading the commentary along with it, and it was tedious. Perhaps I’ll get back to it when I get a chance. That’s certainly not the spin that was put on the history we were taught in Hebrew school.
It is because of the potential in posts like this that I wish that Less Wrong had an edit queue, or that the wiki were used as an edit queue. Do you have plans to write a longer version?
I wasn’t planning on it. Perhaps, if I knew an appropriate place to publish it.
Regarding religious belief as viral suggests a larger pattern expressed in the physical world as a whole, does it not? Do our beliefs reflect that pattern, or is our perception of the pattern innate to our humanity?
“I have decided to not believe your comments about doubting the God of creation” for a few reasons 1) your not perfect ! oh you try to seem perfect with your smooth deceptive vocabulary! that is why you are bothered by “Americans who claim the Bible is the word of God.” that part about “impossible, illegal, or embarassing to follow nowadays”.lol. those laws were made to keep people from sin and death, “lets say you have a wife, would she commit adultery if she knew she would be stoned to death? hopefully not! So God wanted to stop that sin to keep her from sin because sin = death remember? 2) you said ” you were bothered by Americans who claim the Bible is the word of God. a handful of the people like you who are quick to throw stone`s at others for believing in something rather then being like you. Oh they would be soooooo much better of to be like you and hating on others.lol. so we would rather have a hope of salvation everlasting with God when we die. we are such awful people.lol. we deserve your punishment oh great imperfect one. why are you so bothered by that because that is what Jesus Christ gives to those who live for him, or would i like to see life you way, bro your filled with hate confess your sin and live the eternal life through faith “Now faith is the substance of things hoped for. hope for eternal life in Jesus Christ , if anyone can answer me this one question, would you rather have a blessed hope when you die or not. if not then fine have it your way , just keep it to yourself , i dont run around starting topics to bash you for not having a blessed hope when you die. i pray this right now that for you that Hell will not be a truth found out to late! Jesus was completely against religion, just confess with you mouth your sins and believe Jesus Christ died for you , he is faithful to forgive your sin and cleanse you of all unrighteousness. that is the will of the father not religion!
I have never so thoroughly enjoyed and had my mood brightened by something that I then logged a “downvote” upon. Am I being irrational in this vote? The proper criteria for making these votes seem largely implied but never actually explained.
It is definitely considered proper to log in to downvote comments that do not have acceptable paragraph formatting.
The guideline is generally: vote up what you want to see more of, vote down what you want to see less of.
Not the same as agreeing or disagreeing. I’d vote up something that was well-argued and forced me to think hard about why I thought it completely wrong before posting a refutation of it.
The grandparent is made of fail on all counts.
One comment like this is briefly entertaining. Ten would be frustrating. Downvoting helps disincentivize the next nine.
I’ve just remembered why I like LessWrong. Thanks.