Reversed Stupidity Is Not Intelligence
“. . . then our people on that time-line went to work with corrective action. Here.”
He wiped the screen and then began punching combinations. Page after page appeared, bearing accounts of people who had claimed to have seen the mysterious disks, and each report was more fantastic than the last.
“The standard smother-out technique,” Verkan Vall grinned. “I only heard a little talk about the ‘flying saucers,’ and all of that was in joke. In that order of culture, you can always discredit one true story by setting up ten others, palpably false, parallel to it.”
—H. Beam Piper, Police Operation
Piper had a point. Pers’nally, I don’t believe there are any poorly hidden aliens infesting these parts. But my disbelief has nothing to do with the awful embarrassing irrationality of flying saucer cults—at least, I hope not.
You and I believe that flying saucer cults arose in the total absence of any flying saucers. Cults can arise around almost any idea, thanks to human silliness. This silliness operates orthogonally to alien intervention: We would expect to see flying saucer cults whether or not there were flying saucers. Even if there were poorly hidden aliens, it would not be any less likely for flying saucer cults to arise. The conditional probability P(cults|aliens) isn’t less than P(cults|¬aliens), unless you suppose that poorly hidden aliens would deliberately suppress flying saucer cults.1 By the Bayesian definition of evidence, the observation “flying saucer cults exist” is not evidence against the existence of flying saucers. It’s not much evidence one way or the other.
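The odds form of Bayes' theorem makes this concrete. A minimal sketch (the probabilities below are made-up illustrative numbers, not estimates): if cults arise at the same rate with or without aliens, the likelihood ratio is 1, and observing a cult leaves the posterior odds exactly where the prior odds were.

```python
def posterior_odds(prior_odds, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_obs_given_h / p_obs_given_not_h)

# Illustrative numbers only: human silliness makes cults equally likely either way.
p_cults_given_aliens = 0.9
p_cults_given_no_aliens = 0.9

prior = 0.01  # prior odds on "poorly hidden aliens"
posterior = posterior_odds(prior, p_cults_given_aliens, p_cults_given_no_aliens)
print(posterior)  # 0.01 -- the likelihood ratio is 1, so the observation moves nothing
```

Only when P(cults|aliens) differs from P(cults|¬aliens) does the observation shift belief at all, in either direction.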
This is an application of the general principle that, as Robert Pirsig puts it, “The world’s greatest fool may say the Sun is shining, but that doesn’t make it dark out.”2
If you knew someone who was wrong 99.99% of the time on yes-or-no questions, you could obtain 99.99% accuracy just by reversing their answers. They would need to do all the work of obtaining good evidence entangled with reality, and processing that evidence coherently, just to anticorrelate that reliably. They would have to be superintelligent to be that stupid.
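As a toy simulation (the "reliably wrong" oracle here is a hypothetical construct, assumed purely for illustration): reversing the answers of an agent that anticorrelates with the truth 99.99% of the time recovers roughly 99.99% accuracy, which is exactly why no actually stupid agent can manage it.

```python
import random

random.seed(0)

def reliably_wrong(truth):
    """A hypothetical predictor that is wrong 99.99% of the time on yes/no questions."""
    return (not truth) if random.random() < 0.9999 else truth

# Random yes/no ground truths, the predictor's answers, and the reversed answers.
truths = [random.random() < 0.5 for _ in range(100_000)]
reversed_answers = [not reliably_wrong(t) for t in truths]
accuracy = sum(r == t for r, t in zip(reversed_answers, truths)) / len(truths)
print(accuracy)  # roughly 0.9999: reversing a reliable anticorrelator recovers the truth
```

The simulation cheats, of course: `reliably_wrong` is handed the truth and negates it. That is the point of the passage; anticorrelating that dependably requires already knowing the answer.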
A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken.
If stupidity does not reliably anticorrelate with truth, how much less should human evil anticorrelate with truth? The converse of the halo effect is the horns effect: All perceived negative qualities correlate. If Stalin is evil, then everything he says should be false. You wouldn’t want to agree with Stalin, would you?
Stalin also believed that 2 + 2 = 4. Yet if you defend any statement made by Stalin, even “2 + 2 = 4,” people will see only that you are “agreeing with Stalin”; you must be on his side.
Corollaries of this principle:
To argue against an idea honestly, you should argue against the best arguments of the strongest advocates. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates. If you want to argue against transhumanism or the intelligence explosion, you have to directly challenge the arguments of Nick Bostrom or Eliezer Yudkowsky post-2003. The least convenient path is the only valid one.3
Exhibiting sad, pathetic lunatics, driven to madness by their apprehension of an Idea, is no evidence against that Idea. Many New Agers have been made crazier by their personal apprehension of quantum mechanics.
Someone once said, “Not all conservatives are stupid, but most stupid people are conservatives.” If you cannot place yourself in a state of mind where this statement, true or false, seems completely irrelevant as a critique of conservatism, you are not ready to think rationally about politics.
Ad hominem argument is not valid.
You need to be able to argue against genocide without saying “Hitler wanted to exterminate the Jews.” If Hitler hadn’t advocated genocide, would it thereby become okay?
In Hansonian terms: Your instinctive willingness to believe something will change along with your willingness to affiliate with people who are known for believing it—quite apart from whether the belief is actually true. Some people may be reluctant to believe that God does not exist, not because there is evidence that God does exist, but rather because they are reluctant to affiliate with Richard Dawkins or those darned “strident” atheists who go around publicly saying “God does not exist.”
If your current computer stops working, you can’t conclude that everything about the current system is wrong and that you need a new system without an AMD processor, an ATI video card, a Maxtor hard drive, or case fans—even though your current system has all these things and it doesn’t work. Maybe you just need a new power cord.
If a hundred inventors fail to build flying machines using metal and wood and canvas, it doesn’t imply that what you really need is a flying machine of bone and flesh. If a thousand projects fail to build Artificial Intelligence using electricity-based computing, this doesn’t mean that electricity is the source of the problem. Until you understand the problem, hopeful reversals are exceedingly unlikely to hit the solution.4
1Read “P(cults|aliens)” as “the probability of UFO cults given that aliens have visited Earth,” and read “P(cults|¬aliens)” as “the probability of UFO cults given that aliens have not visited Earth.”
2Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values, 1st ed. (New York: Morrow, 1974).
3See Scott Alexander, “The Least Convenient Possible World,” Less Wrong (blog), December 2, 2018, http://lesswrong.com/lw/2k/the_least_convenient_possible_world/.
4See also “Selling Nonapples.” http://lesswrong.com/lw/vs/selling_nonapples.
dammit, you could have told me that before I spent so much time building this flying machine made of bone and flesh...
… he shouted down, soaring through the sky.
It’s amazing how many supposedly rationalist movements fall into the trap of crippling “reverse stupidity.” Many in the atheist movement would not have you make positive pronouncements, not have you form organizations, not have you advocate, not have you adopt symbols or give the movement a name, not have you educate children on atheism, and so on, all because “religion does it.” I think in the case of atheism the source is unique: every (modern) atheist knows his or her atheism is a product of scientific understanding but few atheists are willing to admit it (having taken up also the false belief that some things are “outside science”), so they go looking for other reasons, and “reverse stupidity” offers such reasons in abundance.
“I think in the case of atheism the source is unique: every (modern) atheist knows his or her atheism is a product of scientific understanding...”
We are already “stronger” by far than most of the “pagan” gods. This century, we may well create our own worlds (“virtual”, yea—but theology doesn’t hold our own world as the “real” one for its creator...s). It all comes down to terminology.
I think atheists would do well to encourage agnosticism; it seems like an easier sell to me (training wheels?). Much of the atheist movement reeks of fundamentalism. By definition, atheism is closed-minded. So much of science is unknown. I don’t discount the possibility that collective consciousness, or any number of other things viewed as supernatural and therefore dismissed, exist. Read some theoretical physics; we don’t understand a lot of stuff. That stuff could be the basis of completely different ways of thinking about reality. It may very well be that what we perceive as reality is a small part, or an expression, of something that no one has begun to understand. It’s cliché, but what if we are programs running on some ultra-advanced computer? Would the operator of that computer not be a “god”? Dismissing that idea is silly; creating computers of that complexity is science fiction, but it certainly isn’t out of the realm of possibility. Who’s to say we’d be the first ones to do it?
A few months have passed since that comment, but maybe you should consider reading: http://lesswrong.com/lw/nz/arguing_by_definition/ and http://lesswrong.com/lw/ny/sneaking_in_connotations/
Is it, really? I find more open mindedness in “there is no evidence for this, so I have no reason to believe it” than any theism. Someone telling you to be open minded usually means they want you to agree with them: Accepting a solution instead of considering others as well. It’s happened to me, when people talked about ghosts, which have been disproven regardless. But then, it’s just accepting one seemingly possible solution.
If all unlikely explanations seem possible, how is it open minded to select just one?
I think there are several problems with your statements; I’ll try to address a few. In the interests of full disclosure, I’m an atheist myself, but I obviously can’t speak for anyone other than myself.
I don’t know about “much”, though some atheists are undeniably fundamentalist—and some theists are, as well. However, this doesn’t tell us anything about whether atheism (or theism) is actually true or not.
I think this depends on which definition you’re using; but something tells me it’s different from mine.
Neither do I, and neither do most atheists. In fact, most atheists don’t discount the possibility of lots of other things existing, as well: Zeus, unicorns, a teapot in orbit of Saturn, leprechauns, FTL neutrinos, etc. But a possibility is not the same thing as probability; and we humans simply don’t have the luxury in believing everything we can think of. We’d never get anywhere if we did that. So, atheists make the conscious choice to live their lives and think their thoughts as though that orbiting teapot did not, in fact, exist. Of course, once someone presents some evidence of its existence, we’d change our minds, and re-evaluate all of our beliefs to include the teapot (or gods, or leprechauns, or what have you).
I suspect we understand more than you think—there are whole books written on the subject, after all. But more importantly, a lack of understanding doesn’t automatically make any alternative hypothesis any more likely. For example, I don’t know with certainty how that suspicious puddle under my car got there, but “aliens!” or “demons!” are not the kinds of answers that instantly spring to mind.
Sure, it could be. But is it? If it is, then I’d like to see some evidence. Note that the scientific method has a whole mountain of evidence behind it; your computer, for example, is merely a tiny piece of it.
I don’t know, which god did you have in mind? And do you have any evidence that we’re all programs running on a giant computer, or dreams in the mind of a butterfly, or astral manifestations of Krishna’s vibrations, or whatever else one can come up with?
I’m afraid I don’t quite understand what “fundamentalist” atheism is. Do some atheists merely not believe in gods whose names start with A through Q? Do some atheists attend mass once every eighth Thursday?
I would suggest you read the following two posts:
http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/ http://lesswrong.com/lw/mm/the_fallacy_of_gray/
For most part they would also do well to eschew evangelism.
Meh… In a way, this entire site is dedicated to evangelism of skepticism (and, therefore, atheism). I’m ok with that.
No, it isn’t. It’s a rationality site… which actually puts it at odds with skepticism when it comes to approach and some major conclusions. It is that same rationality which mandates that even if people on the site go ahead and evangelize atheism on the side, they do so while acknowledging that most people would be better off getting off the pulpit and living their lives.
How so? The site promotes (one might say, “evangelizes”) rational thinking, especially the Bayes Rule, and evidence-based reasoning in general. These are the core values of skepticism; disbelief in fairies/UFOs/gods/etc. is merely a consequence.
Number theory and numerology both have number as a core principle. This doesn’t make them the same thing. Look for where they differ, and you might spot why they differ.
″… you have to directly challenge the arguments of Nick Bostrom or Eliezer Yudkowsky post-2003.”
Just what the heck happened in 2003? In any experimental field, particularly this one, having new insights and using them to correct old mistakes is just part of the normal flow of events. Was there a super-super-insight which corrected a super-super-old mistake?
He’s referring to his coming of age as a rationalist (which he hadn’t written yet then); his transhumanist ideas before 2003 were pretty heavily infected with biases (like the Mind Projection Fallacy) that he harps on about now.
If conservatives make up the same majority of smart people as of stupid people, then the statement “Not all conservatives are stupid, but most stupid people are conservatives” is indeed completely irrelevant; but I don’t think anyone believes otherwise. If there is a positive correlation between intelligence and the truth of one’s beliefs (a claim most people probably assume to be true for any definition of intelligence they care about), then the average intelligence of the people who hold a given belief is entangled with the truth of that belief and can be used as Bayesian evidence. Evidence is not proof, of course, and this heuristic will not be perfectly reliable.
Why would the number of stupid people who believe something anticorrelate with the number of smart people who believe it? Most stupid people and most smart people believe the sky is blue. A shift in the fraction of stupid people who do X can take place without any corresponding shift in the fraction of smart people who do X one way or another. Some smart people actively prefer not to affiliate themselves with stupid people and will try to believe something different, but they are committing the error of the OP and should not be listened to anyway.
The statistical evidence is that liberalism, especially social liberalism, is positively correlated with intelligence. This does not prove that liberalism is correct; but it does provide some mild evidence in that direction.
As an interesting phenomenon, I’ve noticed that when I question people in-depth about their beliefs on specific issues what they actually want is often seriously at odds with the political group to which they claim to adhere.
It’s almost like political affiliations are tribal memberships and people engage in double-think to not risk those memberships even when having that membership doesn’t form a coherent whole with the rest of their ideology.
To the extent which IQ actually matters, I’ve noticed two patterns:
Firstly, to a certain extent, those with higher IQ tend to spend more years of their life in school, and most schools have a very definite liberal or conservative culture and actively punish “wrongthink” to a certain degree. So IQ correlation with political faction may be more indicative of the ratio between schools than anything else.
Secondly, once a person’s IQ gets into the 130+ range you seem to start finding a higher fraction of people who really despise the stupidity and waste of primate social politics and so prefer consistency of internal logic over maintaining good tribal standing. These people are actually interesting to talk to about politics because they’re actually interested in what the facts are and in whether or not policy actually meets its goals. Even when you disagree with their conclusions, you don’t have to spend all your time pointing out the same contradictions again and again.
Declaration of bias: I am a liberal, I am intelligent, but I’m not a Democrat or Republican.
It’s hard to measure liberalism. For example, half the black people say they are conservative and half say they are liberal. But most outsiders would say most black people are liberal (and it’s common for 100% of black people in an area to vote for Obama). People judge their liberalism against people like themselves, so it’s hard to compare groups.
If you count most black people as liberals, then that intelligence difference between liberals and conservatives might disappear (if it exists, I haven’t checked). For example, it’s a proven fact that Republicans are smarter than Democrats (because of black people with an average IQ of 85 voting Democrat), although just between white people there is no real difference.
You also need to consider that intelligence comes with biases, even though it also improves your thinking. Intelligent people are biased towards things that benefit intelligent people, eg. complexity, even if they hurt other people.
Intelligent people are biased towards letting people do whatever they want, because intelligent people like themselves will do sensible things when given the choice. They aren’t used to stupid people, who do stupid things when allowed to do whatever they want. Intelligent people need freedom, while stupid people need strong inviolable guidelines about acceptable behaviour.
Could you give a citation for this? I’ve heard other studies claiming the opposite, and I’m not inclined to accept either at face value without knowing what actually went into the studies.
This article has a lot of bell-curve verbal IQ graphs from GSS (General Social Survey) data for the years 2000-2012, using the wordsum score as a measure of intelligence:
http://blogs.discovermagazine.com/gnxp/2012/04/verbal-intelligence-by-demographic/
It shows Republicans as smarter than Democrats, but Liberals smarter than Conservatives, and White people smarter than Black people, and some other comparisons.
Kind of; the great thing about those distributions is that you can talk about more of the distribution than one summary statistic. There’s a clump of high IQ democrats, a clump of low IQ democrats, and then a clump of medium IQ democrats, whereas the Republicans look like one clump of medium IQ republicans. There are more Democrats from 0 to 5, more Republicans from about 6 to 8, and a tiny few more Democrats from 9 to 10.
This matches with the prediction that there is a significant group of low-vocabulary people who vote predominantly Democratic, the middles voting somewhat more Republican, and the highs about evenly split.
I’d expect the correlation between IQ and WORDSUM to be much weaker when controlling for educational attainment, so some of those graphs will have to be taken with a grain of salt.
What would this statement predict about the WORDSUM distributions by educational level? Is that what that graph shows? (If the graph doesn’t have enough data to answer that question, how else could you answer it?)
So… I think the correlation between IQ and WORDSUM is mostly mediated by education (i.e., in terms of Stuff That Makes Stuff Happen, there’s an arrow from IQ to education and one from education to WORDSUM—there’s also one directly from IQ to WORDSUM, but it’s thinner). So I’d expect the third graph in that article to show an effect more extreme than if you used IQ instead.
But educational attainment is directly caused by IQ, so that wouldn’t make any sense.
Not exclusively IQ—parents’ socio-economic status also matters.
Parents’ socio-economic status is directly caused by parents’ IQ, which is passed on genetically (and a tiny bit environmentally) to their children.
What I mean is, someone with IQ 115 from an upper-class family will be more likely to go to college than someone with IQ 115 from a lower-class family.
I can’t find anything right now on what effect parents’ class (what does that mean? SES?) has on educational attainment for people of the same IQs. Someone else may want to look it up if they’re better at googling than me.
But it doesn’t matter. We already know that wordsum, IQ, and educational attainment are measuring similar things. Wordsum seems like a good proxy for IQ. It gives sensible answers in all the graphs, and it is said to correlate .71 with adult IQ.
Do you have a point, or some sort of theory about what I was saying? Do you disagree with the idea that Republicans are smarter (except at the top end) than Democrats, or that “liberals” are smarter than “conservatives”?
I don’t.
My point was that using a test that heavily relies on ‘learned’ knowledge such as Wordsum may have exaggerated the effect (compared to what one would see if one used a more culture-neutral test such as Raven’s progressive matrices) when some of the groups have historically been educated more than others for additional reasons besides IQ (even if said reasons correlate with IQ, so long as the correlation isn’t close to 1).
Explain that claim, please.
Environmentally in this context just means anything that’s not directly genetic or inherited epigenetic. It doesn’t mean plants and animals or anything like that.
IQ is mostly genetic (in rich egalitarian countries like the USA), but everyone seems to agree that there are still some environmental things smart parents can do to make their children a tiny bit smarter. I don’t know exactly what those factors are, though. Probably any kind of practice with thinking and studying would help a tiny bit, and perhaps other aspects of better care, such as nutrition. But I know there’s not a lot parents can do that helps with IQ long-term, especially when society as a whole is already trying to do everything it can to boost IQ environmentally.
IQ is significantly genetic, but there’s considerably more than a little bit of variance in intelligence between people given the same DNA, and that’s without bringing in the effect of raising people in widely divergent cultures.
It would provide significantly useful evidence, if we had no other information to determine the truth of the tenets of conservatism. Given that we do, and that the ‘evidence’ provided by who believes liberalism vs conservatism is not strong, I suggest it is better to ignore it.
Why? Because these sorts of arguments are very dangerous: they readily degenerate into overvaluing social proof.
How are values true or false? You seem to be arguing for objectivist morality.
Consider: all the greatest minds in philosophy, specifically ethics, believed in consequentialism. This provides no weight towards or against that particular ethical system. No one has value expertise. People can value one thing (security) or another (liberty). Insert whatever values as necessary.
The same is true with progressives and conservatives generally.
That fact provides no weight towards what we should value.
No, he’s saying that liberalism and conservatism also come with sets of beliefs about the nature of reality and sets of predictions about the consequences of their actions. Some of which are wrong (for both groups). And he’s saying we should be able to guess which group has a better understanding of the world by comparing their IQs. Which I think is a valid point, except that the example he chose is one where IQ clearly creates a bias of its own, and one where black people probably miscategorise themselves.
I believe it was John Stuart Mill who said that.
Nice move using Stalin instead of Hitler, since I get tired of hearing the latter brought up. I myself have endorsed some of Stalin’s ideas like “[ideology] in one country” since even if his policies were bad he was at least fairly successful in getting them implemented and lasting for a good while.
I’m with McCabe—what was the epiphany?
This is wrong.
Even presuming that you’re speaking very informally, and your statement shouldn’t be interpreted literally, it’s STILL wrong.
“The least convenient path is the only valid one.”
When arguing against an idea honestly with its strongest advocates, is it always true that what is right is not always what is easy? Does choosing not to argue make someone wrong outright, or does not entering the argument in the first place make the point of view non-existent in some way?
It makes your argument wrong by default.
This is in the context of arguing against someone else’s opinion. If you are entering such an argument, the only correct choice is the least convenient—that is, arguing against the strongest proponent of the idea you are opposing.
Clarification: Just Yudkowsky after 2003, or Yudkowsky and Bostrom together, perhaps sharing the same mistake? It would be useful to know so I don’t make the same mistake, et al.
Matthew: Just me after 2003, not Bostrom.
I call the experience my “Bayesian enlightenment” but that doesn’t really say anything, does it? Guess you’ll have to keep reading Overcoming Bias until I get there.
michael vassar: You’re right when you say a correlation of intelligence with liberalism is evidence for liberalism, but that’s not because the stupid people are conservative, it’s because the smart people are liberal. At least I think that’s what Eliezer meant.
Though you could see the conservativeness of stupid people as strengthening the evidence provided by smart liberal people because it points at there being more of a conservative human baseline to deviate from.
“A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken.”
Wrong!
“When the player’s truck is put into reverse, the truck will accelerate infinitely; however, the truck will halt instantly when the reverse key is released.”
“I call the experience my “Bayesian enlightenment” but that doesn’t really say anything, does it?”
Note to readers: Eli discovered Bayesian probability theory (in general) much earlier than 2003, see http://www.singinst.org/upload/CFAI//design/clean.html#programmer_bayesbinding.
“You’re right when you say a correlation of intelligence with liberalism is evidence for liberalism, but that’s not because the stupid people are conservative, it’s because the smart people are liberal.”
If you assume the population is partitioned into liberals and conservatives, a high percentage of stupid conservatives implies a high percentage of smart liberals, and vice-versa. If smart liberals are Bayesian evidence for B, then smart conservatives must be Bayesian evidence against B (note that ‘smart’ here is relative to the average, not some absolute level of smartness).
Can we agree on the following: if you pick a random stupid person and ask for an opinion on B, and the stupid person says B is false, this cannot be evidence against B unless you have background knowledge on the fraction of people who think B, in which case all the work is really being done by the indirect inference about the opinions of smarter people, so calling the stupid person’s opinion negative evidence is misleading even if strictly speaking correct?
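steven’s partition point (if smart liberals are Bayesian evidence for B, smart conservatives must be Bayesian evidence against B) can be checked with toy numbers. This is a minimal sketch; the 60/40 split is a pure illustration I made up, not data from the thread:

```python
import math

def log_lr(p_obs_given_b, p_obs_given_not_b):
    """Log-likelihood ratio a single observation contributes for hypothesis B."""
    return math.log(p_obs_given_b / p_obs_given_not_b)

# Toy assumption: if B (liberalism) is correct, a randomly sampled smart person
# is liberal with probability 0.6; if B is wrong, with probability 0.4.
p_lib_given_b, p_lib_given_not_b = 0.6, 0.4

lr_liberal = log_lr(p_lib_given_b, p_lib_given_not_b)
# Because liberal/conservative partition the sample, observing the complement
# necessarily points the other way:
lr_conservative = log_lr(1 - p_lib_given_b, 1 - p_lib_given_not_b)
```

Here lr_liberal is log 1.5 (positive) and lr_conservative is log(2/3) (negative): once "smart person is liberal" counts for B, "smart person is conservative" must count against it.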
I’m not sure I’d agree with that, especially when it comes to political topics; stupid people with heavy exposure to mass media tend to perform significantly worse than random. Thus taking the opposite of what such a person supports seems to have at least a mildly higher chance of being true on a T/F question.
Isn’t the truth of a thing (such as a sentence or artwork) determined by how closely it matches reality? And the match-level is a function of the identity of reality and of the thing. So there is no mention of smart or dumb people anywhere in that.
Good post, and good job putting this into a common language framework. If you convince only one or two more people to think clearly, it was worth it! B
Steven: Yes we can, with the caveat you mentioned earlier about the human baseline. Of course, that point is plausibly precisely what Mill or whoever was pointing to with his comment.
No. The “unless” clause is still incorrect. We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B.
There is an ongoing confusion here about the difference between evidence and meta-evidence. It is as obvious and important as the difference between experimental analysis and meta-analysis, and it is NOT being acknowledged.
“No. The “unless” clause is still incorrect. We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B.”
This can’t be right. I have a hundred measuring devices. Ninety are broken and give a random answer with an unknown distribution, while ten give an answer that strongly correlates with the truth. Ninety say A and ten say B. If I examine a random meter that says B and find that it is broken, then surely that has to count as strong evidence against B.
This is probably an unnecessarily subtle point, of course; the overall thrust of the argument is of course correct.
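The meter scenario can be made concrete with a quick Bayes calculation. The numbers below (broken meters say B half the time, working meters are 95% accurate) are my own illustrative assumptions filling in the "unknown distribution":

```python
def p_broken_given_says_b(p_working_says_b, p_broken_says_b=0.5,
                          n_broken=90, n_working=10):
    """P(meter is broken | a randomly picked meter says B), as a function of
    how likely a *working* meter is to say B."""
    broken_mass = n_broken * p_broken_says_b
    working_mass = n_working * p_working_says_b
    return broken_mass / (broken_mass + working_mass)

# Working meters track the truth: they say B with prob 0.95 if B is true, 0.05 if not.
p_broken_if_b_true = p_broken_given_says_b(0.95)
p_broken_if_b_false = p_broken_given_says_b(0.05)
# Finding a B-meter broken is likelier when B is false, so the discovery
# shifts belief away from B, as the comment claims.
```

With these numbers, a meter that says B is broken about 83% of the time if B is true and about 99% of the time if B is false, so "this B-meter is broken" is indeed evidence against B.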
We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B. There is an ongoing confusion here about the difference between evidence and meta-evidence.
No. From a Bayesian perspective, there is no difference other than strength. This is, of course, different from saying that the truth is what the authorities say it is, but I think that’s what you’re hearing it as.
Actually, if I’m not wrong (and it still confuses me), arguments from authority have a different conditional probability structure than “normal” arguments.
“You’re right when you say a correlation of intelligence with liberalism is evidence for liberalism, but that’s not because the stupid people are conservative, it’s because the smart people are liberal.”
That seems to me exactly wrong. A proposition’s truth or falseness is not entangled in the intelligence of the people who profess the proposition. Alien cultists do not change the probability of poorly hidden aliens. Dumb people who argue for evolution over creationism do not raise the probability that Genesis is natural history, no matter how dumb they are. Conservative Proposition X will be true or not true regardless of whether it is supported by a very intelligent conservative or by a very dumb conservative.
That’s precisely why Bayes’ Theorem isn’t all you need to know in order to reason. It’s an immensely powerful tool, but a grossly inadequate methodology.
Again: there is a great deal of confusion about the difference between evidence and meta-evidence here.
If I do find someone whose statement seem to reliably anti-correlate with reality, am I justified in taking their making a statement as evidence that the statement is false?
Caledonian: please define meta-evidence, then, since I think Eliezer has adequately defined evidence. Clear up our confusion!
Eliezer has NOT adequately defined evidence. There is no data that isn’t tied to every event through the operations of causality.
And you haven’t tried to define meta-evidence at all.
Yes Doug. Furthermore, if you can find a pair of people the difference of whose opinions seems to correlate with reality, you can use that as evidence, which is the pattern pointed to by the original quote.
The definition Eliezer offered, and the way in which he used the term later, are not connected in any meaningful way. His definition is wrong.
Do you know what a meta-analysis study is?
Beware of feeding trolls. If the one can offer naught but flat assertions, you may be better off saying, “Let the audience decide.” If you engage and offer defense to each repeated flat assertion, you encourage them to do even less work in the future, since it offers the same attention-reward.
@yudkowsky I would be happy if I could judge the merit of Bayes for myself versus the frequentist approach. I doubt UTD faculty have seen the light, but who knows, they might. I wonder even more deeply whether a thorough understanding of Bayes gives any insight into epistemology. If you can answer that Bayes does offer insight into epistemology, I know for sure I will be around for many more months. If I remember correctly, we both have the same IQ (140), yet I am much worse at mathematics. Of course, my dad is an a/c technician, not a physicist.
I enjoy your hard work and insights, Eliezer. Also Caledonian’s comments, mainly for their mystery.
Likewise, if you attempt to engage people who make foolish proclamations and ambiguous definitions, it can reward them with attention and conversation. The benefits to puncturing shoddy arguments are often greater than the prices that need to be paid to do so.
Eliezer has repeatedly offered a definition for a term, gone on to mention that this definition is incomplete, and then failed to explicitly refine the definition or provide a process for the reader to update it. Despite recognizing the fallacious nature of conclusions or arguments supported with such behavior (what he has called the “hidden advisor fallacy”), he doesn’t seem to have a problem with it when he’s the one using the fallacy.
Precision is an absolute requirement for deriving valid conclusions, and when using natural languages extreme care has to be taken to compensate for their ambiguity. You’re not taking that care—you are in fact being extraordinarily careless.
Caledonian,
What do you mean by meta-evidence? How is Mr. Yudkowsky’s definition of evidence not adequate for use in this post?
How about this for a precise definition: A is evidence about B if p(A | B) != p(A | ~B).
Of course, by this definition, almost everything is evidence about almost everything else. So we’d like to talk about the strength of evidence. A good candidate is log p(A | B) - log p(A | ~B). This is the number that gets added to your log odds for B when you observe A.
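Spelled out as code, this is just the two formulas above; the 0.6/0.3 likelihoods are toy numbers for illustration, not anything from the thread:

```python
import math

def evidence_strength(p_a_given_b, p_a_given_not_b):
    """log p(A|B) - log p(A|~B): the amount added to your log odds for B on observing A."""
    return math.log(p_a_given_b) - math.log(p_a_given_not_b)

def update_log_odds(prior_log_odds, p_a_given_b, p_a_given_not_b):
    """Bayes' rule in log-odds form: posterior log odds = prior + evidence strength."""
    return prior_log_odds + evidence_strength(p_a_given_b, p_a_given_not_b)

# A is twice as likely under B as under ~B, so observing A adds log 2 to the log odds.
strength = evidence_strength(0.6, 0.3)
posterior = update_log_odds(0.0, 0.6, 0.3)  # starting from even (1:1) odds
```

By this measure, anything with a likelihood ratio of exactly 1 contributes strength zero, which recovers the "not evidence one way or the other" case from the post.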
Ding ding ding!
It may even be the case that, by that definition, everything is evidence about everything else. And clearly that doesn’t match our everyday understanding and use of the term—it doesn’t even match our formal understanding and use of the term.
What’s missing from the definition that we need, in order to make the definition match our understanding?
But everything is evidence about everything else. I don’t see the problem at all.
Given the circumference of Jupiter around its equator, the height of the Statue of Liberty, and the price of tea in China, can you tell me what’s sitting atop my computer monitor right now?
If so, what is it?
If not, why not? I gave you plenty of evidence.
I know with 99% probability that the item on top of your computer monitor is not Jupiter or the Statue of Liberty. And a major piece of information that leads me to that conclusion is… you guessed it, the circumference of Jupiter and the height of the Statue of Liberty. So there you go, this “irrelevant” information actually does narrow my probability estimates just a little bit.
Not a lot. But we didn’t say it was good evidence, just that it was, in fact, evidence.
(Pedantic: You could have a model of Jupiter or Liberty on top of your computer, but that’s not the same thing as having the actual thing.)
Steven, I reduxified your argument as Argument Screens Off Authority.
If not, why not? I gave you plenty of evidence.
Caledonian, you gave evidence, but you certainly didn’t give plenty of it. I see you ignored the part of my post where I talked about how to quantify evidence. The important question isn’t whether or not we have evidence; it’s how much evidence we have.
Let me make an analogy. I can define sugar as sucrose; a specific carbohydrate whose molecular structure you can view on wikipedia. I might say that a substance is “sugary” if it contains some sugar. But by this definition, almost everything is sugary, so I hasten to point out that the important question is how sugary it is, and we might define this as the fraction of its mass which consists of sugar.
If, after I have pointed this out, you offer me some sugar cookies containing 1 molecule of sucrose, and then defend yourself by saying that according to my definition, they are indeed sugary, you are being obnoxious. I already told you how to quantify sugariness, and you ignored it for rhetorical reasons.
Evidence is like gravity. Everything is pulling on everything else, but in most cases the pull is weak enough that we can pretty much ignore it. What you have done, Caledonian, is akin to telling me the position of three one-gram weights, and then asking me to calculate the motion of Charon based on that.
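The gravity analogy can even be made literal. A minimal sketch comparing the pull of one of those one-gram weights (at a metre) with Pluto's pull on Charon, using approximate published values for Pluto's mass and Charon's orbital radius:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav_accel(mass_kg, distance_m):
    """Gravitational acceleration produced by a point mass at a given distance."""
    return G * mass_kg / distance_m ** 2

# A one-gram weight one metre away: a real pull, but a vanishingly small one.
a_weight = grav_accel(1e-3, 1.0)
# Pluto acting on Charon (mass ~1.3e22 kg, orbital radius ~1.96e7 m):
a_pluto = grav_accel(1.303e22, 1.96e7)
# Both are "gravity", just as weak and strong observations are both "evidence";
# the two pulls differ by more than ten orders of magnitude.
```

The one-gram weight's pull is real but around 7e-14 m/s², which is why ignoring it when computing Charon's orbit is the right call, exactly as with negligible evidence.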
No, I’m not being obnoxious. I’m pointing out that your definition is bad by showing that it leads directly to common and absurd conclusions.
By Eliezer’s definition, even the thing he offers as an example of a thing that isn’t evidence IS STILL EVIDENCE. And instead of you recognizing that this means something is deeply wrong with the definition, you try to exploit the ambiguity of language to defend the utterly absurd result.
So close… and yet, so far.
I agree with you that, even if I gave you absolute, complete, and utterly precise data on the three weights, there is no way you could derive the motion of Charon from that.
So: are the three weights evidence of Charon’s movement?
For any that may be genuinely confused: If you read What is Evidence?, An Intuitive Explanation of Bayesian Reasoning, and A Technical Explanation of Technical Explanation, you will understand how to define evidence both qualitatively and quantitatively.
For the rest of you: Stop feeding the troll.
Caledonian is just trying to point out that the keys to rationalism are family values and a literal interpretation of the Bible. I don’t know why you all can’t see something so obvious.
Observe:
“It may even be the case that, by that definition, everything is evidence about everything else. And clearly that doesn’t match our everyday understanding and use of the term—it doesn’t even match our formal understanding and use of the term.
What’s missing from the definition that we need, in order to make the definition match our understanding?”
Jesus.
“Given the circumference of Jupiter around its equator, the height of the Statue of Liberty, and the price of tea in China, can you tell me what’s sitting atop my computer monitor right now?”
Jesus.
“Do you know what a meta-analysis study is?”
Jesus.
The Bible has the answers, people. This is just further proof that until the ‘rationalist’ community incorporates insights from the Intelligent Design movement and other members of the irrational community, no further progress can be made in understanding the movement of Charon or whatever. Keep the faith Caledonian. You’re a warrior of God.
If this is the same Caledonian who used to post to the Pharyngula blog, he’s barred from there now with good reason.
Is there a cognitive bias at work that makes it hard for people not to feed trolls?
Is there a mathematical expression in probability for the notion that unless someone is making a special effort (concerted or otherwise) they can’t be any ‘wronger’ than 50% accuracy? Subsequently betting the other way would be generating evidence from nothing—creating information. Why no mention of thermodynamics in this post & thread?
Not to feed the troll or anything, but yes, the masses and positions of the three weights are evidence about Charon’s movement. Why? Because if you calculated Charon’s orbit without knowing their masses, positions etc, you’d be less accurate than if you did. Fact! (Note evidence ABOUT. Evidence OF Charon’s movement is taken care of with a decent telescope!)
Eliezer, in your opinion, do the historical prevalence of organised religion, and the human tendency to faith in the unknowable/unprovable, have any bearing at all on the likelihood of the existence of a supreme being of some description?
That’s exactly what I was wondering. A perfect score presumably means either an amazing coincidence or perfect intelligence within the context of the decisions made. (Or is it just perfect information?) And a perfectly incorrect score would then mean the same thing. And a score that exactly matches randomness would seem to involve no intelligence or information at all, although it, too, could presumably also result from perfect information, if that was the objective.
Calculating Charon’s orbit without knowing what direction Charon moves in, or even whether it moves at all, is an impossible task. You are substituting “Charon’s orbit” for “Charon’s movement” in your argument, then acting as though you have made a statement about Charon’s movement.
If you all do not grasp why precision in the use of words is absolutely necessary to eliminate bias in natural language, I will drop the point; nevertheless, it remains.
Ben Jones, I don’t see the human existence of religion as having any evidential bearing on the existence of a Super Happy Agent sufficiently like a person and unlike evolution that theists would actually notice its existence. Pretty much the same probability as an object one foot across and composed of chocolate cake existing in the asteroid belt. For interventionist Super Happy Agents, same probability as elves stealing your socks.
Incidentally, with sufficiently precise measurements it’s perfectly possible to get a gravitational map of the entire Solar System off a random household object.
Ben Jones, I don’t see the human existence of religion as having any evidential bearing on the existence of a Super Happy Agent sufficiently like a person and unlike evolution that theists would actually notice its existence.
Any evidential bearing? Surely P(religion X exists|religion X is true) is higher than P(religion X exists|religion X is false).
Nick, I don’t see how that follows for the supermajority of religions that are logically self-contradictory, except in the sense that if 1=2 then the probability of the Sun rising tomorrow is nearly 200%. Furthermore, Ben Jones asked about religion in general rather than any specific religion, and religion in general most certainly cannot be true.
In general, any claim maintained by even a single human being to be true will be more probable, simply based on the authority of that human being, than some random claim such as the chocolate cake claim, which is not believed by anyone.
There are possibly some exceptions to this (and possibly not), but in general there is no particular reason to include religions as exceptions.
Also incorrect. More than one configuration of masses can have exactly the same effect on the object. No matter how precisely you measure the properties of the object, you can never distinguish between those configurations.
I should add that this is true about self-contradictory religions as well. For the probability that I mistakenly interpret the religion to be self-contradictory is greater than the probability that the chocolate cake is out there.
“If God did not exist, it would be necessary to invent him.”
Nick: Why should an atheistic rationalist have any more faith in a religion that exists than in a religion that doesn’t? I don’t believe in God; the testimony of a man who claims he spoke to God in a burning bush doesn’t sway me to update my probability. I Defy The Data!
My ‘lack of faith’ stems from a probability-based judgment that there is no Super Agent. With this as my starting point, I have as much reason to worship Yoda as I do God.
Ben Jones, I don’t see the human existence of religion as having any evidential bearing on the existence of a Super Happy Agent sufficiently like a person and unlike evolution that theists would actually notice its existence. Pretty much the same probability as an object one foot across and composed of chocolate cake existing in the asteroid belt. For interventionist Super Happy Agents, same probability as elves stealing your socks.
Eli, you’re just saying that you don’t believe in the existence of a SHASLAPAUETTWANIE. But since you labeled it with: ”...that theists would actually notice its existence,” then clearly the existence of religion has some evidential bearing on the existence of a SHASLAPAUETTWANIE.
(Blink.)
Um, I concede to your crushing logic, I guess… what exactly am I conceding again?
Flying saucer cultism was helped along by secret Cold War technological advances that were accidentally witnessed by civilians.
For example, the famous 1947 Roswell incident was the crashing of an American strategic reconnaissance super-balloon that was supposed to float over the Soviet Union and snap pictures, which would then be recovered many thousands of miles away. That’s why it was made out of the latest high-tech materials that were unfamiliar to people in small town New Mexico in 1947.
The KGB used to generate flying saucer stories in Latin America to discredit actual sightings of the re-entry of a Soviet “partial-orbit” missile that was being tested in order to allow a surprise attack on the U.S. from the South (the NORAD radar assumed a Soviet attack would come over the Arctic). KGB agents in Latin America would phone in flying saucer reports to newspapers to make honest witnesses of the Soviet missile test look like lunatics.
Steve, maybe this was your point anyway, but the incidents you mention indicate that the existence of flying saucer cults is evidence for the existence of aliens (namely by showing that the cults were based on seeing something in the real world.) No doubt they aren’t much evidence, especially given the prior improbability, but they are certainly evidence.
“Not all Conservatives are stupid, but most stupid people are Conservatives.” (The British Conservative Party was the brunt of this quip by J. S. Mill.) It helps to Venn diagram this. I find that many stupid conservatives assume that conservatives are the majority, which leaves few stupid people to be liberals or anything else (although a majority of Liberals are assumed by stupid conservatives to be stupid people). But if conservatives are not a majority, there are many stupid people who MIGHT or MIGHT NOT be liberals. I assume there are plenty of stupid people to go around between the conservatives, liberals and other groups. If conservatives ARE the majority and most stupid people are conservatives, but liberals are a very sizeable minority, you would expect a lot of Smart People Who Are liberals. To quote Karl Rove: “As people do better, they start voting like Republicans—unless they have too much education and vote Democratic, which proves there can be too much of a good thing.” Presumably the Republicans have a lock on the rich and stupid vote, or at least the rich and uneducated.
pokes head in and looks around Okay, I’m new here, and maybe I shouldn’t open by poking a sleeping dragon, but I can’t help but try and take a small crack at this. firm nod
As I understand it, the crux of the article is concern about irrational arguments which imply that valid points and rational arguments should be discarded if they are somehow associated with irrational, unsuccessful, or commonly disliked people. Many of the comments on the conservatives quote seem to ignore that context.
Also, debating the semantics of the article rather than its core meaning (Whether or not the word “evidence” should be used to describe such, for example) is counterproductive, distracting and not really assisting people in learning rational thought.
That said, I feel that even out of context the point is still valid. One major and somewhat flawed assumption in many of the comments here as well as consistently elsewhere is that smart people are more likely to arrive at the “correct” answer than stupid people.
Stupid people often arrive at the correct conclusion, either by luck, or because a question isn’t actually very difficult, or because they may listen to the advice of others.
Smart people often arrive at the incorrect conclusion, as they do not (and can not) always gather enough evidence to make a sufficiently informed decision, and sometimes take false advice or evidence under consideration.
Both smart and stupid people can and will often lie about their findings for many reasons, or they may disagree over what defines the correct conclusion.
Such associative statements are therefore so weakly weighted as evidence that they are only really worth mentioning if no other evidence can be found, and an argument which relies solely on such weak data as this is not to be trusted as truth.
At best, it may serve as a cautious guess until more reliable evidence can be found, which would quickly render the opinions of unreliable people (be they stupid or smart) obsolete.
In the Charon’s orbit example, this would be less like using small weights in a room (Which is very fine, very precise evidence), and more like trying to track Charon’s movement using blurry pictures taken at random across the entire night sky. If some of them had inaccurate timestamps.
Even though stupid people sometimes get things right, and smart people sometimes get things wrong, that doesn’t say anything about how often they do so (comparatively). You can’t use those rare cases to negate the ‘assumption’ that intelligence aids correct judgements. It just means that intelligence is not a 100% guarantee of correctness—but we knew that anyway. As it stands, the usefulness of different aspects of intelligence—reasoning, analytical ability and so on—in assessing probabilities and making judgements is fairly obvious.
Also, even if the personal beliefs of one individual don’t serve as very strong evidence, a large-scale trend towards more intelligent people favouring one side of the argument should be taken into account. It’s not so much evidence in itself as meta-evidence that a) other people who may know things you don’t, tend to favour one option; and b) other people with the same knowledge as you, but better processing capabilities, tend to favour that option. With more complex issues which you may not have much personal experience of, this could be a rather substantial factor in your probability assessment.
I should also point out that it’s intelligence, not stupidity, that is important. Intelligent supporters of a view can be taken as reasonably strong evidence, as seen above. Stupid people have less intelligence, therefore their view should be weaker evidence—but even a stupid person supporting something INCREASES the probability that that view is correct, albeit by such a small amount that it can almost be ignored in favour of assessing what smart people think.
Of course, then there’s the worldview difference to consider, and the fact that even if they can make a better decision than you, their “better” option may not lead to a more desirable world from your perspective.
And that’s why politics is more about identity than predictive truth.
Many people are unsatisfied with their monogamous relationships, therefore polyamory must be great?
-Heartland Institute billboard
That quote from J. S. Mill would be perfect for a pasted-on “billboard improvement.”
Again, I disagree. Cults can’t form around just anything. They can only form around issues that would make their members social or intellectual outcasts. And in a world in which there were poorly hidden aliens, too many intelligent people would be of the opinion that there are poorly hidden aliens, and no such cult could arise.
But the more important point is this: if I start to think that there are poorly hidden aliens, that could be for one of two reasons: either I have reasonable evidence for their existence, or I’m being influenced by some sort of bias.
The existence of cults around the issue shows that those biases exist and are reasonably common, and thus are a more likely reason for my belief than the alternative of actual aliens.
I’d guess P(cults|aliens) would be noticeably bigger than P(cults|~aliens).
It’s just that the prior P(aliens) is so tiny that even the posterior P(aliens|cults) is negligible.
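Carrying out that update explicitly makes the point. In this sketch the `posterior` helper and all three probabilities are made up; the only claim is the shape of the result:

```python
# Bayes' rule with illustrative (made-up) numbers: even if
# P(cults | aliens) is twice P(cults | ~aliens), a tiny prior
# P(aliens) keeps the posterior P(aliens | cults) negligible.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

prior_aliens = 1e-6              # tiny prior on poorly hidden aliens
p_cults_given_aliens = 0.9       # assumed: cults very likely if aliens exist
p_cults_given_no_aliens = 0.45   # assumed: cults quite likely regardless

p = posterior(prior_aliens, p_cults_given_aliens, p_cults_given_no_aliens)
print(p)  # ~2e-6: the prior roughly doubles, but stays negligible
```

A likelihood ratio of 2 shifts the odds by a factor of 2, which is invisible next to a prior of one in a million.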
That’s one way to put it, I guess.
Please try to keep your comments informative. Note that your comments show up in recent comments.
I think this has to do with bounded rationality. Perfect knowledge would require an endless amount of time, and every human has only a limited lifetime, so the time available for each decision is even more limited. Therefore we cannot explore every argument. I think it is a good strategy to throw some arguments away right at the beginning and not waste time on them, and instead pay more attention to the more plausible ones. This gives you the opportunity to build a relatively accurate model of the world in a relatively short time. If you disagree, consider this: how can you argue against communism if you haven’t read all the works of Marx/Engels/Lenin/Mao/Trotsky/Rosa Luxemburg/Bukharin/Zinoviev/Stalin/Kautsky/Saint-Simon... and so on; the list can be endless. I think such arguments are actually used as a demand that you slavishly agree with the person making them. Clearly this is unacceptable, and we have the right to disagree even without reading all those volumes, using only the limited amount of data we already possess. So if we throw away some idea after a first glance, the stupidity of its followers is not the worst criterion for doing so.
“The conditional probability P(cults|aliens) isn’t less than P(cults|aliens) ”
Shouldn’t this be ” The conditional probability P(cults|~aliens) isn’t less than P(cults|aliens) ?” It seems trivial that a probability is not less than itself, and the preceding text seems to propose the modified version included in this comment.
I’ve just realized that there’s a footnote addressing this. My apologies.
I have reservations about the corollary that only winning against the strongest advocate of an idea holds ANY meaning toward disproving the idea.
For one, there could be a better arguer. If there is a better advocate of the intelligence explosion than Eliezer, unlikely as that may seem, who just won’t go public and keeps to private circles, would winning against Eliezer then mean nothing? Taken a step further, if such a proponent is ever likely to exist, does that invalidate all present and past efforts?
For another, the quality of an arguer can only be judged after the fact. So to have any standing on any idea, one would have to win against every single advocate of the opposing view. Has anyone here tried that on, say, theism?
I think it’s more accurate to say that winning an argument against sub-optimal advocates of an idea doesn’t give enough basis to discredit the idea reliably. Indeed, since on complicated issues there is often no advocate who can present all the arguments favoring a position, one cannot completely discredit the idea even after defeating the champion of its advocates. This framing seems more in keeping with Bayesian rationalism, too, since it does not deal in probabilities of 0 or 1.
One point Hans Rosling tried to convey constantly in Factfulness is that most stupid people perform significantly worse than random (worse than “the chimps,” as he portrayed it). He argued that this results from biases. If this generalises beyond people’s perception of how the world is doing, and grows stronger for less intelligent people or for people with more exposure to mass media, then silliness and alien intervention might not be orthogonal after all, but significantly correlated. Similarly, “most stupid people vote Republican” would then be at least mild evidence that the party actually is more “wrong”: if a party’s actions correlate with the preferences of stupid people, and those people perform worse than random, that is some evidence that the party is wrong, and thus at least mildly relevant to a political debate. (I think this is weaker for a two-party system like the US, and stronger for European multiparty systems.)
I actually think it is possible for someone’s beliefs to anti-correlate with reality without them being smart enough to know what is really true and reverse it. I can think of at least three ways this could happen, beyond extremely unlikely coincidences. The first is that a person could be systematically deceived by someone else until they hold more false beliefs than true ones. The second is that systematic cognitive biases could reliably distort their beliefs. The third is the most interesting: if someone has a belief that many of their other beliefs depend on, and that belief is wrong, all of those other beliefs could be wrong as well. There are plenty of people who base a large portion of their beliefs on a single belief or cluster of beliefs, the most obvious example being the devoutly religious, especially if they belong to a cult or fundamentalist group. Basically, since beliefs are not independent, people can have large sets of connected beliefs that stand or fall together. Of course, this still wouldn’t affect the probability that any of their beliefs outside those clusters are true, so it doesn’t change the conclusion of this essay by much, but I think it is interesting nonetheless. At the very least, it is a warning against having too many beliefs that all depend on a single idea.
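The load-bearing-belief mechanism can be made concrete with a toy simulation; the whole setup (`cluster_accuracy`, the world where every derived claim happens to be true) is invented purely for illustration:

```python
# Toy model: an agent derives n conclusions from a single root premise
# by valid inference, in a world where those conclusions all happen to
# be true. If the root premise is false, valid inference makes every
# derived conclusion false too, so the agent scores far below the 50%
# a coin-flipper would get on that cluster -- no superintelligence
# required, just one load-bearing mistake.

def cluster_accuracy(root_is_true, n_derived=100):
    truths = [True] * n_derived               # the actual facts
    conclusions = [root_is_true] * n_derived  # valid inference preserves the premise's truth value
    return sum(c == t for c, t in zip(conclusions, truths)) / n_derived

print(cluster_accuracy(root_is_true=True))   # 1.0
print(cluster_accuracy(root_is_true=False))  # 0.0: perfectly anti-correlated on this cluster
```

The anti-correlation is confined to the cluster: on beliefs independent of the root premise, the agent is no worse than anyone else, which is why this doesn’t change the essay’s conclusion much.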
At least for me, and therefore possibly for others, it would be useful to put a hyperlink on each important name, leading to a review of that person’s primary argument: for instance, “Hanson.” Wonderful writeup.
Thanks.
“If a hundred inventors fail to build flying machines using metal and wood and canvas, it doesn’t imply that what you really need is a flying machine of bone and flesh.”
Although we do, because that would be awesome.