[SEQ RERUN] A Prodigy of Refutation
Today’s post, A Prodigy of Refutation, was originally published on 18 September 2008. A summary (taken from the LW wiki):
Eliezer’s skills at defeating other people’s ideas led him to believe that his own (mistaken) ideas must have been correct.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Raised in Technophilia, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
I still rely on this crutch, and I’d be pretty curious what other people do to get around it. Trying to turn arguments into near-term predictions is the best technique I’ve got for not relying on other people to do my error checking. But this doesn’t work very well for normative questions, which is where I’ve found arguments with friends to be really helpful. They’re good at coming up with edge cases I haven’t thought of, or at noticing where I’m not actually implementing the preferences I’ve expressed.
I don’t get around it; I rely heavily on others to correct my errors. It works quite well, I find, as long as you invite and listen to the arguments. It’s an augmentation of, not a substitute for, one’s own error checking. Self-correction and correction-by-others are mutually beneficial (as EY often advises, it helps to perfect the other person’s argument before evaluating it).
More important than coming up with a correct grand narrative is coming up with a conception of the world that allows a high degree of functionality and adaptability. I doubt that even the strongest rationalist has correct ideas about everything, or would have time in her lifetime to reason through all the things most important to her.
In uncertain circumstances, it can be very useful to take principles, tempered by material results, as goals, rather than relying on pure technical reasoning skill alone.
For example: you are in a classroom debating whether the development of Iranian nuclear weapons will stabilize or destabilize the Middle East. There is no way to empirically test your hypothesis, but your reasoning should still be based on sound principles.
In principle, people who have specialized knowledge of a subject are better equipped to make judgments about it: you might notice that you don’t know a lot about the history of the Middle East, like how Israel made itself a state by kicking the Palestinians off their land, or the history of Western imperialism in Iran. So you read a bunch of media sources on current events.
Then, again in principle, you might look at the motivations different people have for claiming different things about world events. Though this isn’t a situation-specific methodology, the principles are useful in any situation for giving you the right background from which to rationally make and refute claims.
After striving to exhaust your principle-informed objectives, temper your reasoning with material reality. Ideally you should argue in a manner that allows everyone to have the most fulfilling dialogue. Socrates would never tell people the answers to problems; instead he was always questioning. Even if you can reason your way around people, it’s important to take into account how they will react to your methods.
By maintaining good principles, abstracting lessons from other situations, and letting reality guide you, you should be able to steadily hone your reasoning skills.
I’m not sure what you mean by this. Can you explain and/or give some examples?
The "checking that beliefs pay rent" kind of stuff. For example: I believe that a new acquaintance is treating me coldly. I predict that when she interacts with other people I’ll see some of the things I’m missing (smiles, extended responses, etc.), but when I don’t observe this, I realize that my original belief was wrong and she probably just has an unusual baseline response.
Basically I’m trying to get into the habit of turning descriptions of the present world into predictions about the future world. Then I don’t need someone else to catch me out; reality will.