The Proper Use of Humility
It is widely recognized that good science requires some kind of humility. What sort of humility is more controversial.
Consider the creationist who says: “But who can really know whether evolution is correct? It is just a theory. You should be more humble and open-minded.” Is this humility? The creationist practices a very selective underconfidence, refusing to integrate massive weights of evidence in favor of a conclusion they find uncomfortable. I would say that whether you call this “humility” or not, it is the wrong step in the dance.
What about the engineer who humbly designs fail-safe mechanisms into machinery, even though they’re damn sure the machinery won’t fail? This seems like a good kind of humility to me. Historically, it’s not unheard-of for an engineer to be damn sure a new machine won’t fail, and then it fails anyway.
What about the student who humbly double-checks the answers on their math test? Again I’d categorize that as good humility. The student who double-checks their answers wants to become stronger; they react to a possible inner flaw by doing what they can to repair the flaw.
What about a student who says, “Well, no matter how many times I check, I can’t ever be certain my test answers are correct,” and therefore doesn’t check even once? Even if this choice stems from an emotion similar to the emotion felt by the previous student, it is less wise.
You suggest studying harder, and the student replies: “No, it wouldn’t work for me; I’m not one of the smart kids like you; nay, one so lowly as myself can hope for no better lot.” This is social modesty, not humility. It has to do with regulating status in the tribe, rather than scientific process. If you ask someone to “be more humble,” by default they’ll associate the words to social modesty—which is an intuitive, everyday, ancestrally relevant concept. Scientific humility is a more recent and rarefied invention, and it is not inherently social. Scientific humility is something you would practice even if you were alone in a spacesuit, light years from Earth with no one watching. Or even if you received an absolute guarantee that no one would ever criticize you again, no matter what you said or thought of yourself. You’d still double-check your calculations if you were wise.
The student says: “But I’ve seen other students double-check their answers and then they still turned out to be wrong. Or what if, by the problem of induction, 2 + 2 = 5 this time around? No matter what I do, I won’t be sure of myself.” It sounds very profound, and very modest. But it is not coincidence that the student wants to hand in the test quickly, and go home and play video games.
The end of an era in physics does not always announce itself with thunder and trumpets; more often it begins with what seems like a small, small flaw . . . But because physicists have this arrogant idea that their models should work all the time, not just most of the time, they follow up on small flaws. Usually, the small flaw goes away under closer inspection. Rarely, the flaw widens to the point where it blows up the whole theory. Therefore it is written: “If you do not seek perfection you will halt before taking your first steps.”
But think of the social audacity of trying to be right all the time! I seriously suspect that if Science claimed that evolutionary theory is true most of the time but not all of the time—or if Science conceded that maybe on some days the Earth is flat, but who really knows—then scientists would have better social reputations. Science would be viewed as less confrontational, because we wouldn’t have to argue with people who say the Earth is flat—there would be room for compromise. When you argue a lot, people look upon you as confrontational. If you repeatedly refuse to compromise, it’s even worse. Consider it as a question of tribal status: scientists have certainly earned some extra status in exchange for such socially useful tools as medicine and cellphones. But this social status does not justify their insistence that only scientific ideas on evolution be taught in public schools. Priests also have high social status, after all. Scientists are getting above themselves—they won a little status, and now they think they’re chiefs of the whole tribe! They ought to be more humble, and compromise a little.
Many people seem to possess rather hazy views of “rationalist humility.” It is dangerous to have a prescriptive principle which you only vaguely comprehend; your mental picture may have so many degrees of freedom that it can adapt to justify almost any deed. Where people have vague mental models that can be used to argue anything, they usually end up believing whatever they started out wanting to believe. This is so convenient that people are often reluctant to give up vagueness. But the purpose of our ethics is to move us, not be moved by us.
“Humility” is a virtue that is often misunderstood. This doesn’t mean we should discard the concept of humility, but we should be careful using it. It may help to look at the actions recommended by a “humble” line of thinking, and ask: “Does acting this way make you stronger, or weaker?” If you think about the problem of induction as applied to a bridge that needs to stay up, it may sound reasonable to conclude that nothing is certain no matter what precautions are employed; but if you consider the real-world difference between adding a few extra cables, and shrugging, it seems clear enough what makes the stronger bridge.
The vast majority of appeals that I witness to “rationalist’s humility” are excuses to shrug. The one who buys a lottery ticket, saying, “But you can’t know that I’ll lose.” The one who disbelieves in evolution, saying, “But you can’t prove to me that it’s true.” The one who refuses to confront a difficult-looking problem, saying, “It’s probably too hard to solve.” The problem is motivated skepticism a.k.a. disconfirmation bias—more heavily scrutinizing assertions that we don’t want to believe.[1] Humility, in its most commonly misunderstood form, is a fully general excuse not to believe something; since, after all, you can’t be sure. Beware of fully general excuses!
A further problem is that humility is all too easy to profess. Dennett, in Breaking the Spell: Religion as a Natural Phenomenon, points out that while many religious assertions are very hard to believe, it is easy for people to believe that they ought to believe them. Dennett terms this “belief in belief.” What would it mean to really assume, to really believe, that three is equal to one? It’s a lot easier to believe that you should, somehow, believe that three equals one, and to make this response at the appropriate points in church. Dennett suggests that much “religious belief” should be studied as “religious profession”—what people think they should believe and what they know they ought to say.
It is all too easy to meet every counterargument by saying, “Well, of course I could be wrong.” Then, having dutifully genuflected in the direction of Modesty, having made the required obeisance, you can go on about your way without changing a thing.
The temptation is always to claim the most points with the least effort. The temptation is to carefully integrate all incoming news in a way that lets us change our beliefs, and above all our actions, as little as possible. John Kenneth Galbraith said: “Faced with the choice of changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.”[2] And the greater the inconvenience of changing one’s mind, the more effort people will expend on the proof.
But y’know, if you’re gonna do the same thing anyway, there’s no point in going to such incredible lengths to rationalize it. Often I have witnessed people encountering new information, apparently accepting it, and then carefully explaining why they are going to do exactly the same thing they planned to do previously, but with a different justification. The point of thinking is to shape our plans; if you’re going to keep the same plans anyway, why bother going to all that work to justify it? When you encounter new information, the hard part is to update, to react, rather than just letting the information disappear down a black hole. And humility, properly misunderstood, makes a wonderful black hole—all you have to do is admit you could be wrong. Therefore it is written: “To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.”
Most abstract beliefs most people have make pretty much no difference to their actions. They hold those beliefs not to advise action but to help them think and talk about interesting topics, so they can win friends (and mates and employers) and influence people. For these purposes, changing their minds may well not usually be a good deal.
Do you have some evidence that backs up this statement? I understand if this is just something you believe. I don’t, and if you actually have evidence that could update my belief, I would be thankful.
“Most abstract beliefs most people have make pretty much no difference to their actions.”
I’m pretty sure I understand well enough what you’re trying to say. But this statement is literally false, since abstract beliefs include many general knowledge claims. If I were informed that my car had just been run into in the parking lot, that would certainly influence my actions.
Perhaps you mean to restrict “beliefs” to “moral beliefs”? Or maybe you mean “abstract” as in “related to one’s daily life only tenuously, if at all”?
pdf, yes, by “abstract” I mean about large abstractions, rather than the specifics of daily life. Some abstractions are useful of course, but most of them are only tenuously related to daily life.
Robin, I’m not sure why you think the difference between “abstract” (?) and non-abstract beliefs is germane to the proper use of humility. It does seem germane to Dennett’s distinction between professing and believing, but that is not the main topic of the essay.
Eliezer, I just meant to point out that while your advice is great for someone who really cares about reducing belief error, it may understandably not be of much use for the usual purposes of most not-directly-practical conversations. Unfortunately this may well be the case for most of the advice we offer here at Overcoming Bias.
Over at http://edge.org/discourse/bb.html (an Edge discussion of Beyond Belief) there seems to be a discussion slightly pertaining to the issue at hand. Anyone care to comment on what Scott Atran is putting forward?
Either I’m missing something, or all of these comments pertain to the general question of why one wants to be rational, with no specialization for the particular question of how to use humility in the service of rationality (assuming from the start that you want to be rational, on which the essay is obviously premised).
Eliezer, perhaps we find your argument so clear and persuasive that we don’t have much to say about it directly, but we want to comment on something so all will see we are paying attention. Perhaps blogs comments need some sort of smiley nodding icon option, letting us indicate our pleasure with your post without needing words. :)
Reading this comment 4 years after it was posted caused one of those “aha” moments for why we have the karma system.
I think it cuts down on the trolls significantly as well.
More significantly, it provided a method of allowing the community as a whole to condemn or reward patterns of thought/expression.
The sort of humility required can be inculcated by an open-minded and continuous study of the human propensity to develop systems of thought that are often sealed from the admission of evidence which might contradict them.
cf.:
http://amethodnotaposition.blogspot.com/2005/10/how-to-become-crackpot.html
and:
http://michaelprescott.typepad.com/michael_prescotts_blog/2006/12/hypnotized_by_s.html
My own personal view is that this needed form of humility is even more lacking in self-proclaimed rationalists than the population at large, probably for selection reasons.
I discuss some very interesting fMRI research bearing on this question here:
http://amethodnotaposition.blogspot.com/2006/10/confirmation-bias.html
To avoid this gaping pitfall to progress in our search for what is real, we ought to consider deeply these words of Oliver Cromwell:
“I beseech you, in the bowels of Christ, think it possible you may be mistaken”
I’d suggest that there is a relatively straightforward and unproblematic place to apply humility: to overcome the universal overconfidence bias. Many studies have found that when people are asked to give estimates with a confidence interval, error rates are far higher than would be expected if the confidence intervals were accurate. Many of these studies find error rates an order of magnitude or more higher than subjects expected.
You could take self-tests and find out what your overconfidence level is, then develop a calibration scale to correct your estimates. You could then use this to modify your confidence levels on future guesses and approach an unbiased estimate.
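To make that procedure concrete, here is a minimal sketch in Python, assuming you have recorded your stated 90% intervals alongside the values that turned out to be true. All names, numbers, and the simple interval-widening rule are illustrative assumptions, not from the comment or any actual calibration study:

```python
# Toy calibration check: how often do your "90% confidence" intervals
# actually contain the true value, and how much wider should they be?

def hit_rate(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

def widen(interval, factor):
    """Stretch an interval about its midpoint by `factor`."""
    low, high = interval
    mid, half = (low + high) / 2, (high - low) / 2
    return (mid - factor * half, mid + factor * half)

# Hypothetical self-test data: stated 90% intervals vs. the true answers.
intervals = [(10, 20), (5, 8), (100, 120), (0, 3), (40, 45)]
truths = [22, 6, 131, 2, 47]

print(f"claimed coverage: 0.90, actual: {hit_rate(intervals, truths):.2f}")

# Find the smallest widening factor (in steps of 0.1) that restores ~90%
# coverage; this becomes your personal overconfidence correction.
factor = 1.0
while hit_rate([widen(iv, factor) for iv in intervals], truths) < 0.9:
    factor += 0.1
print(f"widen future intervals by roughly {factor:.1f}x")
```

In this toy data the stated 90% intervals contain the truth only 40% of the time, and the loop finds that roughly doubling their width restores the claimed coverage.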
One risk is that knowing that you are going to modify your intuitive or even logically-deduced confidence level may interfere with your initial guess. This might go in either direction, depending on your personality. It could be that knowing you are going to increase your error estimate will motivate you to subconsciously decrease your initial error estimate, so as to neutralize the anticipated adjustment. Or in the other direction, it could be that knowing that you always guess too low an error will cause you to raise your error guesses, so that your correction factor is too high.
However, both of these could be dealt with in time by re-taking tests while applying your error calibration, adjusting it as needed.
An appeal to humility might just be an eloquent concession to difficulty. It may not achieve anything when there is something tangible to achieve (for example, in scientific applications). But on the profoundly abstract and inherently human questions it may have a place. In many cases I need to accept that I do not have an answer, and will probably never have an answer, if I am going to get any sleep at night. But that is a different thing from the ‘good’ humility which says (a) I am human and capable of making errors and, in fact, it is inevitable that I will err, and so accordingly (b) I will implement safeguards against such error in the systems I create and administer. Differing shades of humility appropriate for differing applications?
It is not only me who posts unannounced in the hope of demonstrating the efficacy I hold myself to but can’t bring myself to test on a real and threatening medium.
This internet shears the communication from each of us and puts those ideas out on their own; more or less free from prejudice or the risk of reflecting unflatteringly back on any of us.
I don’t hold out any possibility of my meek few lines attracting attention of anyone but me. As for influence?
It’s the power of invisibility, only none of us are seen. I think it kind of takes a bit away from it all.
It’s all about me.
I’d just note that if you believe in a deity, it actually isn’t particularly less rational to believe that it can be three and one at the same time. How would you prove the invisible, incorporeal, floating dragon who spits heatless fire isn’t simultaneously one and three?
http://lesswrong.com/lw/jp/occams_razor/
http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/
Hmm. To clarify my meaning:
Since anyone who applies Occam’s Razor in the correct form will reject theism to start with, I strongly doubt that any such person has, in fact, wasted the time to actually work out whether the vast convolutions necessary to “rationalize” theism are ultimately made more or less simple by the introduction of a variant of multiple personality disorder into the theistic godhead.
So, I doubt anybody is actually in a position to say that unitarian theism is, in fact, simpler than trinitarian theism. A rational person would never spend the time and effort to work out which ridiculously convoluted theory is actually simpler, because he’s already discarded both of them, and there’s no point in debating which is more ridiculous. The irrational can’t be trusted to do the reasoning correctly, and thus the rational can’t leverage their results.
Therefore, it’s optimal when making the case for rationality to avoid comment on trinitarianism. A rationalist is unlikely to actually be able to demonstrate it is actually inferior to unitarian theism, and he wouldn’t get any benefit from bolstering the relative case for unitarian theism anyway.
Hm… this doesn’t seem right. Let me take a stab at this.
What you’re saying assumes that rationality—or such specific tools of it as Occam’s Razor—gets applied equally to everything. Theists are making this big salient mistake, and so we assume they make this mistake everywhere. Which is not how people work. Like you have overall successful people who happen to also be, say, creationists.
To say that everything in theism is equally worthless is the outside view: we can see the whole field is based on an undeservedly privileged hypothesis, so to us everything in that volume of theory space is not worth distinguishing between. Like distinguishing between two conditional probabilities where the condition itself is extremely unlikely; not practically useful. But from the inside, where the condition is already granted—there’s still bound to be some things that make considerably more sense than others. To deny that is to just say that you’re not interested in the distinction (which is reasonable), not that it couldn’t be made for good reasons.
I haven’t studied it, but I wouldn’t be surprised to learn that theology, past the fact that it takes its theistic assumptions as a given, contained quite a lot of good thinking and that historically it contributed to our understanding of logic and valid reasoning. The reason I think so is that for long periods of time in history, becoming a clergyman was the main way of getting an education and getting to work on anything science-like, so at least some of the greatest minds in history were clergymen. Like Thomas Bayes. ;)
So I see a warning sign whenever aspiring rationalists dismiss theists as idiots.
(I’m probably failing to signal my allegiance to the tribe here ;) )
Taken out of context, my statement is too general, yes, and does look like the dismissing-theists-as-idiots thing, yes.
What I was saying was intended to be understood as “Those who accept theism can’t be trusted to have correctly reasoned about the specific nature of the theos, because the very same influences that caused them to be theists are going to be inducing them to defend a specific theos whether it makes more or less sense than the alternative.”
Given the tendency of people to put things in domains, I will, in fact, (reasonably) trust what a Vatican astronomer says about the Andromeda Galaxy, or a Creationist nuclear engineer says about Three Mile Island, et cetera. But the existence of a theistic deity and the nature of a theistic deity seem closely-enough related, domain-wise, that I won’t trust a theist to tell me he’s rationally evaluated whether God is One or Three, rather than rationalized it.
And, from my outsider perspective, I’m just not going to guess whether trinitarianism is more complicated, or if it just seems more complicated when you don’t know what problems it solves. In physics, I trust that if the more-complicated-seeming answer of relativity didn’t give better answers than the simpler-seeming Newton, physicists wouldn’t use relativity. In theistic theology, I can’t trust either proponents or opponents of trinitarianism to be giving me a rational evaluation as to whether the Trinity is an overcomplication or, overall, simplifies things.
Wouldn’t having three deities instead of one be more complex because of their interactions with one another? Even if they existed on separate planes of existence, they would all have to be exerting some kind of influence for them to be gods, no? And in their shared application of influence, would they not be interacting?
The interactions of three people is more complex than the interactions of one person with himself. But the theory that my house contains three different residents still explains observations of my house much more simply than if you start with the assumption there’s only one resident. You accordingly cannot actually use Occam’s Razor to disfavor the theory that my house has three residents simply because the interactions of three people with each other are more complex than the interactions of one person with himself. Similarly, adding a cat to the three persons hypothesis actually improves the explanatory power of the model, even though you now have three sets of human-cat interactions added to the model; rejecting the cat on the basis of Occam’s Razor is also a failure.
Is a trinity more complex than a unitary godhead? In itself, sure. But if you’re trying to do something as notoriously convoluted as, say, theodicy, the question is, does the trinity provide extra explanatory power that reduces the overall complications?
And I strongly doubt anyone is both knowledgeable enough about theodicy and sufficiently rational and unbiased on the unity/trinity question to give a trustworthy answer on the question of which is the actual lesser hypothesis there. Especially since the obvious least hypothesis in theodicy is that there is no God at all and thus nothing to explain.
If you’re going to claim that a unitary godhead is favored by Occam’s Razor over a trinity, you actually need, among other things, a whole unitary-godhead theodicy. But if you actually worked one out, in order to have a rational opinion on the relative viability of the unitary and trinity theories, I’m going to wonder about your underlying rationality, given you wasted so much time on theodicy.
As defined in some places—for example, the Occam’s Razor essay that Eliezer linked for you many comments ago—simplicity is not the same as fitting the evidence.
The official doctrine of the Trinity has probability zero because the Catholic Church has systematically ruled out any self-consistent interpretation (though if you ask, they’ll probably tell you one or more of the heresies is right after all). So discussing its complexity does seem like a waste of time to me as well. But that’s not true for all details of Catholicism or Christianity (if for some reason you want to talk religion). Perhaps some intelligent Christians could see that we reject the details of their beliefs for the same reason they reject the lyrics of “I Believe” from The Book of Mormon.
Of course simplicity is not the same thing as fitting the evidence. You only even start comparing simplicity after you have multiple hypotheses that actually fit the evidence. Then, and only then, can you properly apply Occam’s Razor. The hypotheses “Always comes up heads” and “always comes up tails” and “always lands on the edge” are all already on the reject pile when you’re trying to figure out the best theory for the existence of the “HTTHHT” sequence, and thus none of them get any points at all for being simple.
Indeed, if you’ve only got one hypothesis that fits, it’s still too soon to apply Occam’s Razor, except informally as a heuristic to encourage you to invent another hypothesis because your existing one looks excessively complicated. Only after you’ve got more than one hypothesis that fits the “HTTHHT” sequence can you actually use any formalization of Occam’s Razor to judge between those hypotheses.
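As a toy illustration of that ordering, here is a minimal sketch using the coin example from the comment above. The hypothesis list and the complexity scores are invented stand-ins for a real description-length measure:

```python
# Occam's Razor applied in the right order: first keep only hypotheses
# that fit the evidence, and only then prefer the simplest survivor.

observed = "HTTHHT"

# (name, predicted sequence or None for "makes no fixed prediction",
#  complexity) -- complexity scores are illustrative stand-ins for a
#  real description-length measure.
hypotheses = [
    ("always heads", "HHHHHH", 1),
    ("always tails", "TTTTTT", 1),
    ("fair coin, independent flips", None, 3),
    ("demon who chose HTTHHT specifically", None, 50),
]

def fits(prediction, data):
    """A hypothesis survives if it doesn't flatly contradict the data."""
    return prediction is None or prediction == data

# Step 1: reject everything inconsistent with the evidence.
survivors = [h for h in hypotheses if fits(h[1], observed)]

# Step 2: only now does simplicity get a vote.
name, _, _ = min(survivors, key=lambda h: h[2])
print(name)  # -> fair coin, independent flips
```

The point is that “always heads”, despite being the simplest entry on the list, never gets to compete: it is off the table before simplicity is ever consulted.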
It occurs to me that Trinitarianism and similar are likely best explained as the theological equivalent of wave-particle duality.
Does light really sometimes behave like a particle and sometimes behave like a wave? Probably not. More likely there is some underlying, unified behaviour that we simply haven’t figured out yet due to limited data and limited processing power.
Similarly, when trying to comprehend and describe an infinite… something-that-has-intent, with a finite human mind and viewpoint as your only tool, there are likely going to be some similar bits of weirdness. God in three persons? More likely you have a “blind men and the elephant” situation. Only this elephant is too big to ever see more than a tiny piece of it at a time, and too mobile to know for certain that you’ve found the same part of it to look at twice in a row.
So you could easily have a case where the Unitarians are technically more correct about the overall nature, but the Trinitarians have a better working description.
This says nothing about whether Theism as a whole is the most correct explanation for the observed phenomenon. Just note that the “practical explanation that mistakenly comes to be thought of as the way things really are” is hardly limited to Theology, and I highly doubt theologians are measurably more likely to commit this error than anyone else. The very reason that you have to use placeholder tokens for thinking about concepts that can’t fit in your brain all at once leaves you susceptible to occasionally forgetting that they’re just placeholders.
If “humility” can be used to justify both activities and their opposites so easily, perhaps it’s a useless concept and should be tabooed.
It seems to me that when confronting rationalists, those who have a belief they’re motivated to keep holding will attempt to manipulate the rationalists into withdrawing their skepticism or else risking social disapproval. For example, when creationists ask something like “how can you be sure you’re absolutely right about evolution?”, I believe the actual intention is not to induce humility in the evolutionist, but to warn the evolutionist not to risk the creationist’s disapproval.
So, it’s crucial to identify the difference between when someone else wants you to be humble, and when someone wants you to be socially modest so you don’t frustrate them by challenging their beliefs.
There’s better discussion than I can produce, on when humility is and isn’t useful, in the comments of the SEQ RERUN of this post.
NOTE: edited for simplicity and grammar.
Matthew 6:16-18:
Sorry to spread my Christian-flavored ideas around, but it reminded me. :3 The old joke among me and my siblings, when I was growing up, was that we would proclaim ourselves to be “the humblest one” of us all. I thought it was a joke, until I grew up and interacted with people who actually adhered to a similar philosophy...
Very well-written post, sir. I greatly appreciate the ones where you take a common word or phrase, and reduce it to its proper and true state.
Actually, what this really reminds me of is a recent altercation between me and a roommate. The word at the heart of this altercation was “selfishness”… my erstwhile roommate (subleaser, really) said that my wife’s and my decision not to renew their lease was “selfish”, because, apparently, in our religion we are supposed to give everything we have to anyone who asks it of us. Logically, it can be well demonstrated that this does not follow; if we were to be “charitable” under this definition, we would give all our shelter and money to the starving and homeless, and die of starvation and exposure.
How strange the unflinching hypocrisy of mankind.
That is possible, but you didn’t show it. Who knows what would happen if we gave all our shelter and money to the starving and homeless? Perhaps they’d listen if we asked for it back, or a miracle would produce more? And how do we know we aren’t supposed to die of starvation and exposure?
There are certainly biblical statements implying one shouldn’t. There may even be two or three pages’ worth of such excerpts for every one page implying the opposite, but once the principle of explosion explodes you, there’s really no putting the pieces back together.
If the logical demonstration depends on assuming something at all like biblical consistency, you can say so, but biblical quotes are worthless for some purposes because it may be assumed there is one supporting P and one supporting ~P for a great many things. This is true for the Old Testament alone; the New Testament makes it exponentially worse, which is like having a fatal wound or disease be exponentially more fatal than fatal… I can’t even imagine adding the Book of Mormon to the mix.
For this reason biblical quotes are not ideal, unless there is doubt that any passage supports a particular position, or there is some other good reason. But the default assumption is that if there is a debate, biblical quotes can be found to support any side.
In any case, one should be careful to not accept a false dichotomy that arose from a clash of two opinions, but to seek better alternatives, particularly those similar to the opposing position, and to throw away fake justifications that worked against the real interlocutor, but not the idealized one.
I thank you for your caution, but my argument was actually non-Biblical in nature, and it was a proof by contradiction. Ran something like this:
1. So, you think that I should give away everything to those who ask for it, without exception?
2. Every resource I consume is a resource that is then unavailable for others who ask for it.
3. Therefore, in order to give away every resource I might have otherwise consumed, I must not consume any resources, and therefore die.
4. Your moral system prohibits suicide.
5. Therefore, your original proposition is inconsistent with your professed morality, QED.
6. Also therefore, get out of my house before I call the cops.
I apologize for the ambiguity; I did not mean to explicitly ascribe any moral valuation to committing suicide, though I should hope it could be inferred that I do not, in fact, advocate suicide. :P
As for “the homeless giving it back”, why, to even ask would be selfish!
I hadn’t myself understood why I disliked one style of biblical quotations until I had to explain it to you.
Other reasons for biblical quotes are fine, such as showing how telling a story several times and differently has an effect, or showing something about how people then likely thought, or having an old source for “Nothing new under the sun”, etc. There’s nothing about the books that makes quoting them magically a bad thing to do; it’s just that there’s enough contradictory stuff (probably in Exodus or Numbers or Deuteronomy alone, much less the Pentateuch, much less the Old Testament, much less…) that saying there is Biblical warrant for something similar to one’s position is the most unspectacular thing one can say. A quantity of quotes from among sources, showing a preponderant and/or broad and consistent position, would be something else, and as valuable as perhaps a small quote from a dissimilar source, but by definition that’s not something that fits in a reasonable amount of space; it’s more of a thesis paper.
The first sentence of this comment is the important one, we can probably constructively generalize from it.
As an atheist in hiding, though, knowing the Bible well can be extremely useful. Since you can support nearly any position using biblical quotes, it becomes a lot easier to deal with strongly religious people when you disagree with them if you can argue based on their own priors. Telling someone about a logical fallacy, information collected using carbon dating, etc. only works when they actually assign weight to your sources.
Another bonus: when people find out I am an atheist and that I have been liberally trolling them for years, it might shake up their faith in the community if I am lucky, but I am not sure how I would test this.
A big problem with trying to pull wisdom out of the Bible and similar texts is that there is a whole pile of cultural context that is either gone, or requires large amounts of study to discover.
Like someone a thousand years from now who has somehow dug up an old blog post that strongly asserts that “The Cake is a Lie!” you’re missing a massive portion of the story. And you can justify almost anything you want to just by filling in the missing bits differently.
And this is before you even get into the biblical religions having all gone through historical phases where they deliberately filled in the cultural bits incorrectly for political reasons.
The best thing I’ve found to do with it is set God = Truth, and remember that someone’s story being included isn’t an assertion that they had everything right. There’s plenty of satire in there too. Most of it exceedingly subtle. Something about criticizing the powerful being a potential death sentence so they had to make it look like praise. But if you actually lay out the statements and evaluate them as a whole instead of individually it paints a different picture.
Like when you suddenly realise that they’re praising Solomon as being a great king by describing the grand temple and palace he built, but if you pay attention to the descriptions of each it seems that he not only built the palace out of grander, more expensive materials, he built it as a mirror of the temple with his throne room in place of the holy of holies… And suddenly the description of the man’s character takes on an entirely different tone if you know anything about what the relationship between God and the King was supposed to be.
And yet various branches of bible-based religions spent hundreds of years using Solomon as part of their description of a “Godly King”. Because it fit their political narrative and kept the peasants in line.
In short, Biblical stories are like any other repository of folk wisdom. The only way to find the truth in there is if truth is what you’re actually looking for and you don’t stop until it makes coherent sense. And this whole site is dedicated to showing all the ways in which human beings generally aren’t actually looking for the truth… So… Good luck?
There is a difference between not consuming anything and giving away anything if asked.
So apparently in his religion one is supposed to give away everything if asked, but nothing is implied if one is not asked.
That is a good point, but the error comes in my statement of the problem, not in the argument. Otherwise, why would we ever give to charity, unless explicitly asked to? What would constitute “asking”, anyway? Could we pass by a homeless man on the street and, as long as he didn’t actually say anything to us, safely ignore his sign?
I don’t understand, mostly because your argument is along the lines of “A, because if not A, then why B? And B,” and I can think of many other reasons for B, not merely A, or just one besides A. How is this not an argument from incredulity? You’re accusing the roommate of unflinching hypocrisy, but I don’t see it.
Then perhaps I was incorrect in my accusation. I apologize that I’m not able to present my side more clearly; this happened a while ago, and the data is muddled.
I don’t know why EY was taking grief for this. It’s a good distinction, well phrased.
On the other side of the pancake, I’d say that intellectual arrogance is often similarly misconstrued.
People often take open disagreement as a sign of intellectual arrogance, while it is a display of respect and humility: showing respect with the honest acknowledgment of your disagreement, and showing humility in affording the other person a chance to defend themselves and prove you wrong. To say nothing is to treat that person’s beliefs dismissively, as if they don’t matter, to assume that discussion would be futile because they’re incapable of understanding the truth, and, of course, that they couldn’t possibly have anything to teach you.
A majority of people openly disagreeing with others are doing so out of pride, not a desire to learn. The exact flavour of pride varies. Some feel that they are righteously doing their duty to defend their opinion and remain true to themselves and/or their tribe, some want to feel like they are doing a favour to humanity by enlightening others, some disagree to humiliate a person with a contradictory opinion because they dislike the person, some disagree to challenge a person’s social status rather than his opinion, some take pride in being edgy or non-conformist, and some just want to flaunt their opinion and superior knowledge. That people interpret open disagreement as arrogance is quite a reasonable assumption, since the probability of a person openly disagreeing with them for reasons other than pride is negligibly low, at least outside the rationalist community. (Even within the rationalist community, it is still relatively unlikely that a person disagrees for an opportunity to refine their model of the universe. Even rationalists regularly fall prey to emotions such as pride.)
It still does happen, though. I’ve only gotten this far in the Recommended Sequences, but I’ve been reading the comments whenever I finish a sub-sequence; and they (a) definitely add to the understanding, and (b) expose occasional comment threads where two people arrive at mutual understanding (clear up lexical miscommunication, etc.). “Oops” moments are rare, but the whole karma system seems great for occasional productive discourse.
That is obviously not an analog for the face-to-face experience, but isn’t the “take a chance on it” approach still better than a general prohibitive “not worth it” attitude? You can be polite (self-skeptical, etc.) while probing your metaphorical opponent. Non-confrontational discussions are kind of essential to furthering one’s understanding about what’s going on and why.
If someone could convince people at large that this is true, it would make intelligent discussion much easier. Trying to convince people to abandon the treasured perks of high status might prove difficult, however.
A good way to define humility, I think, is as the inverse of your willingness to argue with future you. Imagine that yourself from a few weeks in the future (or 5 years, in Matthew McConaughey’s case) steps out of a time machine. Would you be willing to concede that he knows more?
Examples:
The student who is certain of his answer will expect that it will not change, so he is not humble at all about it.
The student who is resigned to the fact that the answer is unknowable expects that future her doesn’t know any better, so she’s not humble either.
The student who rechecks her answer anticipates that future her may have found a mistake; otherwise she wouldn’t bother checking. That’s how you know she’s humble.
I’m humble about my assessments of the probability of creating an AGI. I would immediately take future-me’s word on it because he will surely know more.
I’m not humble about my belief in MWI, because I don’t expect that future me will know more about it. The only thing that could change my mind is an experiment disproving superposition for cat-sized objects, which I don’t expect me-in-5-years to see. If future-me doesn’t believe in MWI, I would need to hear all of his arguments; I wouldn’t agree with him on the spot (maybe I’m going to get hit on the head in two years?).
I believe that people systematically underestimate the amount that the world, themselves and their opinions will change in 5 years. That would amount to a bias for under-humility.
Hm. Looking in the mirror, I am entirely willing to defer to future-me, but at the same time I wouldn’t describe myself as humble. What you are describing seems to be more along the lines of the well-known quote usually but erroneously attributed to Churchill: “When the facts change, I change my mind. What do you do, sir?”
Humility demands an appreciation of what we do NOT know, and the PURSUIT of counter-evidence.
To take up the first example in the article, this applies to creationism versus simple evolution (random chance plus natural selection being sufficient).
People should learn the case for evolution before adamantly deciding that Genesis is literally true. Many do not, despite the rather insurmountable case against it.
BY THE SAME TOKEN, people should learn the case against random chance and natural selection being sufficient to explain everything we know, and an appreciation of how much we still don’t know, before adamantly deciding it is true. Many do not, despite the strength of the case against it, as well.
“The case against X” is a vague term because it could mean “the arguments against X” or “the good arguments against X”. Which of these do you mean when you suggest that people should learn the case against evolution?
I think that this meme is appropriate and that it can help readers understand the idea. But I’m not sure and would appreciate input from others.
I also just had the thought that memes in general might be a good way to communicate a lot of the ideas on LessWrong (by contrasting proper reasoning with flawed reasoning). What do you guys think?
I actually don’t quite agree (this is the first time I found something new to criticize on one of the sequence posts).
To me, it seems like humility as discussed here is inherently a distortion that, when applied, shifts a conclusion in some way. The reason why it can be a good thing is simply that, if a conclusion is flawed, it can shift it into a better place, sort of a counter-measure to existing biases. It is as if I do a bunch of physical measurements and realize that the value I observe is usually a bit too small, so I just add a certain value to my number every time, hoping to move it closer to the correct one.
However, once I fix my measurement tools, that distortion then becomes negative. Similarly, once I actually get my rationality correct, humility will become negative. In this case, there also seems to be a general tool to get your conclusion fixed, which is to use the outside view rather than the inside view. Applying that to the engineer example:
If the engineer used the outside view, he should know that humans are fallible and already conclude that he should spend an appropriate amount of time on fail-safe mechanisms. If he then applied humility on top of that, thus downplaying his efforts despite having used the outside view, it would lead him to worry/work on it more than necessary.
Of course, you could reason that in my example, applying the outside view is itself a form of applying humility. My point is simply that even proper humility doesn’t seem to cover any new ground. It’s not “part of rationality,” so to speak. It’s simply a useful tool, practically speaking, to apply when you haven’t conquered your biases yet. In that sense, I would argue that, ultimately, the correct way to use humility is not at all / automatically without doing anything.
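The measurement analogy above lends itself to a small sketch (the values, names, and offset are invented for illustration): a fixed “humility offset” learned on a biased instrument helps, and the very same offset distorts the reading once the instrument is repaired.

```python
# Toy version of the measurement analogy: a constant correction
# improves a biased instrument but distorts a repaired one.

TRUE_VALUE = 10.0
OFFSET = 0.5  # learned back when the instrument read about 0.5 low

def corrected(reading):
    """Apply the habitual correction, whether or not it is still needed."""
    return reading + OFFSET

biased_reading = TRUE_VALUE - 0.5  # old, flawed instrument
fixed_reading = TRUE_VALUE         # instrument after repair

print(abs(corrected(biased_reading) - TRUE_VALUE))  # 0.0 -- correction helps
print(abs(corrected(fixed_reading) - TRUE_VALUE))   # 0.5 -- correction now hurts
```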
Do you, or anyone, have good examples of such specific actions?
Manned spaceships have dozens of fallback plans to keep astronauts safe, even though they don’t anticipate things going wrong.
Backing up… everything. Deploying changes to test environment before deploying to production. Accepting Murphy’s Law unto yourself. Looking twice before crossing the street. Developing a blanket policy of general paranoia. Promoting a blanket policy of general paranoia. Developing alcoholism. Promoting alcoholism. Etc...
Edit: I forgot arguably the most important one: admitting you cannot reliably do better than the market by picking individual stocks (nobody can!) and buying market ETFs instead.
What?
This was meant as a joke. Sorry if the intent is not obvious.
In some contexts, this is exactly right. It is right and proper to see major, real-time belief updates in the climax of a rational fic. And one hopes that executives in a high-stakes meeting will be properly incentivized to do the same. But in many ordinary cases, the most extreme concession one should hope to hear is, “okay, you’ve given me something to think about,” followed by a change of subject. (If this seems unambitious, consider how rarely people make even such small concessions.)
I think it’s important to mind the costs—both psychological and social—of abruptly changing one’s plans or attitudes. “Why bother going to all that work to justify [staying the course]?” Indeed, I wish it were more normal for people to say, “well, that’s a good point but it’s probably not worth the switching costs” or even just, “I don’t feel like thinking that hard about it.”