Find yourself a Worthy Opponent: a Chavruta
You’ve been on Less Wrong for a while. You’ve become very good at a lot of stuff. Specifically, arguing. You win arguments. All the time. Effortlessly. And the worst part is, you often win for the wrong reasons. Perhaps there were counters to your propositions. Perhaps you failed to mention a very important, non-trivial premise, and your audience accepted your shaky proposition with as much enthusiasm as if it had been rock-solid, if not more.
They have failed you. You now know that, if you want to remain objective, to keep your grip on reality, to keep your mind sharp and your guard high, you need a Worthy Opponent: someone who’s on the same level as you, who’s as different in ideology and character from you as possible, who will not hesitate to point out any and every flaw in your propositions, and who will in fact go out of their way to contradict you, just for fun. This intellectual sparring will strengthen you both, and make you more careful in actual debate, in the public arena, whether you choose to use the Dark Arts or not.
Quoth JoshuaZ: In many forms of Judaism one often studies with a chavruta, with whom one will debate and engage the same texts. Such individuals are generally chosen to be of about the same background level and intelligence, often for precisely the sort of reason you touch upon [I paraphrased that in the first two paragraphs] (as well as it helping encourage them to each try their hardest).
A couple of interesting excerpts from the Wikipedia article:
Unlike conventional classroom learning, in which a teacher lectures to the student and the student memorizes and repeats the information back in tests, and unlike an academic academy, where students do individual research,[5] chavruta learning challenges the student to analyze and explain the material, point out the errors in his partner’s reasoning, and question and sharpen each other’s ideas, often arriving at entirely new insights into the meaning of the text.[1][6]
A chavruta helps a student stay awake, keep his mind focused on the learning, sharpen his reasoning powers, develop his thoughts into words, and organize his thoughts into logical arguments.[7] This type of learning also imparts precision and clarity into ideas that would otherwise remain vague.[8] Having to listen to, analyze and respond to another’s opinion also inculcates respect for others. It is considered poor manners to interrupt one’s chavruta.[9]
In the yeshiva setting, students prepare for and review the shiur (lecture) with their chavrutas during morning, afternoon, and evening study sessions known as sedarim.[2] On average, a yeshiva student spends ten hours per day learning in chavruta.[11] Since having the right chavruta makes all the difference between having a good year and a bad year, class rebbis may switch chavrutas eight or nine times in a class of 20 boys until the partnerships work for both sides.[12] If a chavruta gets stuck on a difficult point or needs further clarification, they can turn to the rabbis, lecturers, or a sho’el u’mashiv (literally, “ask and answer”, a rabbi who is intimately familiar with the Talmudic text being studied) who are available to them in the study hall during sedarim. In women’s yeshiva programs, teachers are on hand to guide the chavrutas.[13]
Chavruta learning tends to be loud and animated, as the study partners read the Talmudic text and the commentaries aloud to each other and then analyze, question, debate, and defend their points of view to arrive at a mutual understanding of the text. In the heat of discussion, they may wave their hands or even shout at one another.[14] Depending on the size of the yeshiva, dozens or even hundreds of chavrutas can be heard discussing and debating each other’s opinions.[15][16] One of the skills of chavruta learning is the ability to block out all other discussions in the study hall and focus on one’s study partner alone.[2]
In the yeshiva world, the brightest students are highly desirable as chavrutas.[17] However, there are pros and cons to learning with chavrutas who are stronger, weaker, or equal in knowledge and ability to the student. A stronger chavruta will correct and fill in the student’s knowledge and help him improve his learning techniques, acting more like a teacher. With a chavruta who is equal in knowledge and ability, the student is forced to prove his point with logic rather than by right of seniority, which improves his ability to think logically, analyze other people’s opinions objectively, and accept criticism. With a weaker chavruta, who often worries over and questions each step, the student is forced to understand the material thoroughly, refine and organize his thoughts in a logical structure, present his viewpoint clearly, and be ready to justify each and every point. The stronger chavruta helps the student acquire a great deal of information, but the weaker chavruta helps the student learn how to learn. Yeshiva students are usually advised to have one of each of these three types of chavrutas in order to develop on all three levels.[7]
Given the pattern their interactions have followed online in the past, one could easily classify Yudkowsky and Hanson’s relationship as an informal chavruta. And perhaps we should follow their example: Endoself expressed the desire for such a companion, and suggested that we at Less Wrong establish some similar institution.
Honestly, I don’t just think this institution should be introduced into Less Wrong. I think it needs to be introduced into every educational system. The way the article is written (though I suspect bias, since there isn’t even the slightest criticism), it sounds like the most freaking awesome way of studying ever.
Now, here on Less Wrong, we can usually count on each other to read the arguments properly and point out any faults there may be. It’s kind of a collective effort. Therefore, I’m not quite sure we need such an institution on the site proper, since we seem to function like a huge hydra of a chavruta right now. Which we shall demonstrate, as usual, in the comments section, where I’ll be impatiently waiting for feedback from both Jews and Gentiles.
Very interesting, but I do have one concern: this setup does nothing to prevent rationalization. Debates and arguments can facilitate learning in certain situations, but they don’t help you notice when you are rationalizing instead of updating. Too much exposure to a learning style that emphasizes clever arguing over evenhandedness may end up being epistemically unhealthy in the long run.
See also: Against Devil’s Advocacy
And indeed, the example given is its use in defending Judaism. This should raise red flags!
I’ve recently had a couple of conversations with someone who does this a lot (in the context of Judaism). He appears to be quite smart and instrumentally rational in general, but his epistemology is so horrible as to make communication about theory selection just about impossible. The worst part is that his epistemology is heavily fortified, and there ain’t nothin’ you can do to talk some sense in, and I’m talking about non-religious topics. It seems to be at least another level past that of the average Christian.
The system is very rarely used to defend Judaism per se. Chavrutas will only very rarely debate or argue over those fundamental premises.
As to your friend, I’m pretty sure he’s the exception rather than the rule. Having interacted with a large number of Orthodox Jews (both when I was Orthodox and after), I can say they aren’t any better at apologetics than the average Christian. Epistemological issues exist, but they exist as they do in essentially all religious frameworks (I’m actually beginning to think that common epistemological flaws are one of the unifying features of religion). Judaism has some epistemological problems that many forms of Christianity do not have, such as heavy emphasis on tradition and ancestral belief as extremely strong valid evidence, but these problems seem to be divorced from the chavruta system.
Have you tried asking him to do an exchange where in this conversation you do your best to adopt his epistemology and in that conversation he does his best to adopt yours? Agree to briefly agree, and trade off who agrees with whom.
In conversations with other people, that’s an awesome technique. In this case, I’d be very surprised if it’d work. He’s a competent hypnotist, so he knows all about getting people to imagine that they believe different things… and then forget that they’re imagining. He wouldn’t “fall” for that one.
How does hypnotism even work?
As Peterdjones said, a lot of it is metasuggestion. However, that is not the whole story, and there are ways to get the same effects without using the word “hypnosis”.
On top of that, most of the interesting stuff just got buried under the label of “suggestion”.
“Hypnosis” is a horribly vague word, but it’s basically all about learning how to talk to the different parts of the brain and engineering what you say so as to get the different parts to respond the way you want. “Engineered placebo”, perhaps.
What kind of answer are you looking for? I can’t explain the whole lot in a short comment, but I’d consider writing a post on what researching hypnosis has taught me, if there’s interest.
interested hand is raised
a) It has been suggested to people, by culture in general, that there is such a thing as hypnosis, which works in such-and-such a way.
b) This existing suggestion can be used to suggest to people that they become more suggestible than they are already.
In short;
c) it’s suggestion all the way down.
Suppose I cast a magic spell to make magic start working for everyone in the universe, including myself retroactively so that I can cast the spell.
I would not expect magic to start working.
Conclusion: there’s something real happening in the vicinity of the referent of “suggestion”.
I’m not saying suggestion is nothing. I am saying the level of suggestibility is never zero.
It’s hard to fake, too; the reason it works so well is that the people who won’t be hypnotized are removed early in the process. The only way I’ve ever seen someone successfully mess with a good hypnotist is when my friend hypnotized himself first, so that he would be able to break the hypnotism in the middle of the hypnotist’s act with help from a separate trigger. Then you can really mess with the guy who is successfully making everyone suggestible.
This is way more hilarious than it has any right to be.
I trust in our ability to think up ways of avoiding this. The same way Jiu Jitsu, a lethal, murderous martial art, was redeveloped into Judo, a far less lethal version, I think we can use our sense of rationality in such a way as to develop an art of debate that doesn’t need to rely on the Dark Arts to persuade the audience (and, secretly, the opponent). Note that being a dangerous Judoka takes much more training and effort than becoming a dangerous Jiu Jitsuka, but the end result is much more beautiful to behold, yes? (And causes much less trouble in the long run, especially before the sort of audience that will judge you on how “nobly” you won rather than how quickly, to the point of making even a defeat on the field a victory among the public, Rocky-style).
This includes sparring techniques and training methods: training a man to kill takes very different methods from training him to lawfully win street fights. The wrong training in the wrong circumstances can prove disastrous, such as what happens when you send combat troops to do the job of MPs.
I don’t trust anyone’s ability to avoid this pitfall. Rationality skills are often learned on the 5-second level, so it’s paramount that we train ourselves to not instinctively rationalize things. In general, when persuasion takes precedence over truth-seeking, then you’re no longer talking about rationality, you’re just talking about how to win debates. I agree that there are white-hat ways to debate and black-hat ways to debate and that public speaking is a valuable skill, but rationality is never about arguing for a particular conclusion. As a result, I think it’s best to keep the two concepts separate and be very mindful of which thinking skills you are using for each activity.
Our ability as a group. See, you yourself are already contributing to averting those problems.
As long as both participants really want truth and are willing to lose, I don’t think this will be a problem. When following a chain of logic by yourself, you are the only one who can notice your rationalizations. Chavrutas might provide extra incentive to rationalize, but for some people, especially many Less Wrongers, this will be outweighed by having an extra guard against rationalization.
I’m somewhat skeptical that LessWrong readers are so good at avoiding rationalization that we can engage in activities like this without any adverse cognitive effects. It’s not like non-LWers are running Windows ME and we are running Linux—we all share a similar cognitive architecture, and we are all prone to rationalization. Being aware of this fact is not enough to prevent it, just like seeing an optical illusion does not make it go away. It takes a conscious effort to think rationally, and I think we should be focusing our efforts on developing epistemic rigor rather than engaging in something potentially poisonous.
Furthermore, while having a partner in a calm, reasoned discussion will help you catch yourself in the act of rationalizing, having a partner whose purpose is to argue with you probably won’t.
I have a very different impression of the mental consequences of debating than you do; I don’t think that debating always provides a very strong incentive to rationalize; it depends on context, the relationship between the participants, etc. Am I incorrect?
Well, I can tell you about my own experience: I participated in organized debate in high school and at university for a total of 6 years. After a while, rationalization came naturally, and I couldn’t tell the difference between rationalization and non-rationalization. In early 2011 I started re-reading the Core Sequences (I read them for the first time in mid-2010), and some of the posts on rationalization really “clicked” the second time around. I gradually realized that I had to make a deliberate effort to not rationalize, and I tasked myself with double-checking as many of my own thoughts as possible. Since then I’ve improved somewhat, and I can sometimes catch myself in the act of rationalizing. But I’ve got a long way to go, and I think that’s partly a result of training myself to accept a randomly assigned conclusion and manufacture arguments to match.
I agree that some kinds of debates do discourage rationalization, but I’m worried that as soon as you give up your bottom line, you put yourself at a great deal of epistemic risk.
The kind of debate one would use with a rationalist chavruta should be defending a point that you actually believe using the reasons that actually cause you to believe it. Maybe the word ‘debate’ has the wrong connotations, especially in the context of debate clubs, which are epistemically horrible, but I think humans can try to convince each other of things without rationalizing.
I completely agree; though there would still be rationalization for social reasons, as JoshuaZ points out below, if both partners were being completely honest with each other then there is significantly less cause for concern.
I’m not quite sure what you mean here. Do you think that many LWers could maintain a high enough level of honesty for this to work? I think that we could, though some people are more competitive and less truth-seeking in these types of scenarios, so I wouldn’t recommend it for everyone.
As long as there is nothing incentivizing people to be anything other than completely honest (like there is in an organized debate) then I’m much less concerned (but not entirely unconcerned). And I agree that not everyone is capable of being entirely honest.
If we can’t be entirely honest, then what are we doing here?
We don’t always know our true rejections; sometimes we even invent rejections during the debate. This is a bias that should be reduced as much as possible, but it can’t always be eliminated.
I participated in organized debate for some time and was quite good at it. I am not convinced that I learned anything other than sophistry and selective reasoning from the experience. Worse, I’m not convinced that that wasn’t the point.
If at least some of the chavruta arguments are public, then the odds go up that weak arguments will be recognized for what they are.
“Jews and Gentiles” puts the dividing line in the wrong place. I’m Jewish, but I know little about how Talmud is studied. I’d never heard of chavrutas till your post.
… Yup, that basically sums up my perception of it, superficially at least.
I attended the world’s only yeshiva that is explicitly more open to science and progress than the Orthodox movement for about a year. It is, e.g., co-educational, features openly gay and/or atheist faculty, and pays a certain amount of attention to higher literary criticism. Ironically, it is called the “Conservative Yeshiva in Jerusalem.”
http://www.conservativeyeshiva.org/about/egalitarian-yeshiva-philosophy/
While there, I had a chevruta partner for about two months. It functioned exactly as touted in the Wikipedia article, except that ultimately my partner and I realized that we had a fundamental disagreement about epistemology—he was willing to accept arguments with even minimal plausibility as long as they made him feel good, whereas I required significant plausibility. We parted ways with good will, but not mutual respect—he saw my insistence on evidence as stingy, and I saw his openness to unsupported traditional claims as irresponsible.
I disagree with JoshuaZ’s point about affective death spirals, because, at least at my yeshiva, time in chevruta alternated on a 1:1 basis with time spent in ‘seminar’ or ‘shiur’ groups of 10-12 students led by a senior faculty member, in which there was group discussion. Stupid or incoherent ideas were frequently (several times per seminar session) exposed as such by either the group or the faculty member. Moreover, people usually chose partners who had a different background or otherwise had something interesting and valuable to share...nobody really wanted to fly across the world and study texts with little commercial value just so that they could confirm their prior opinions as being correct. Thus, there was little opportunity for two people to permanently drift away into a radically (more) unrealistic point of view than the rest of the group.
I strongly encourage anyone who is interested to give rational chevrutas a try.
I’d say this reduction of the issues to a disagreement about epistemology is precisely the sort of thing we would expect based on Aumann’s Agreement Theorem. Did you happen to try discussing the means by which the priors originated, a la Robin Hanson’s argument? If so, what went wrong?
It’s kind of you to say so, but our disagreement went deeper than just having different priors. We used different functions to evaluate whether a given proposition was worthy of the adjective “true.” I suppose on LW everyone more or less aspires to be a perfect Bayesian, but at the yeshiva there was widespread disagreement on what should count as true. Amazing though it may sound (har), we actually made decisions based on predictions about what would happen next based on other variables besides just what we actually expected to happen next—and we were proud of it. It was seen as generally a good and wise idea to make decisions based on factors like “consistency of this model with traditional models” or “compatibility of this model with feelings of spiritual uplift.” My partner and I just had a disagreement about to what degree these other factors should be allowed to intrude into ordinary decision-making.
As for discussing the means by which our value systems originated, no, we didn’t try it. We already knew a fair bit about each other’s background/history, and it seemed too condescending to pry into each others’ subconsciouses and say “Oh, you have this value system because of what happened to you as a kid.” I might try it now, very carefully, if something similar cropped up. I was 21 at the time; he was 18. It can be hard to analyze that stuff without being offensively rude until you’ve had a bit of life experience.
Interesting. I’ve been thinking exactly that lately.
In the same way that Haidt finds different moral modalities, I suspect there are different truth modalities. And I expect hard wired pattern recognizers to be involved for the different moral and truth modalities.
It is better to be your own chavruta, to check yourself, to ask whether, not why. Being paired with another person of similar level to catch your clever arguing makes it feel more like a competition, makes you want to make your clever arguments more subtle so you get away with them, rather than relinquish them.
I find that others are much better at spotting my reasoning errors than I am. That goes double for coming up with hypotheses I hadn’t considered. I think I can damp down the temptation for myself to compete, enough to get net benefits from the factors I just mentioned. The temptation for the other to compete is a little trickier, but it causes less damage to my learning than my competing would.
I endorse the self-management model of sharpness. It’s sort of like the Pratchett quote:
“Quis custodiet ipsos custodes, Your Grace?”
“I know that one,” said Vimes. “Who watches the watchmen? Me, Mr. Pessimal.”
“Ah, but who watches you, Your Grace?” said the inspector with a brief little smile.
“I do that, too. All the time.”
When the idea of chavrutas came up in an earlier discussion with raw power, I noted that the system doesn’t always work out the way it is intended, especially when weird social issues come into play.
Having a lot of experience with this system, it does have pros and cons. One serious con as far as LW should be concerned is two-person meta-affective death spirals. What I mean by this is not affective death spirals so much (although that can be a problem) but reinforcing various types of arguments. To use a Less Wrongish example, one could have two chavrutas who really like anthropic arguments. They might focus on anthropic issues to the exclusion of other types of relevant evidence. Continued for long enough, this sort of thing could result in people giving a lot more weight to some approaches which are completely disproportionate to the approaches’ actual usefulness.
Spelling: “affective”. (Eliezer; Wiktionary.)
Fixed. Thanks.
I have one of these, and I highly value our relationship. My friend and I have very basic disagreements in our worldview: I’m a mystic, and he’s a rationalist. We spend our time working through our differences. He’s done more than anybody else to make me question and revise my views on things. I think I’ve become significantly more correct, and a little bit wiser, due to his influence. He’s also the reason I’m here on Less Wrong.
It’s actually surprisingly hard to get to a point with somebody where you respect them and listen to them, despite having fundamental disagreements in your worldview. Every disagreement wears down rapport, and so extended discussions of fundamental differences in beliefs are a challenge for a relationship. I’m grateful to my partner for the fact that he still listens to my ideas when I’ve said so many things that I know seemed like nonsense to him.
The general idea sounds a lot like pair programming.
I’d like to give this a try on a specific topic.
I’m Catholic and of late not particularly happy about it. Through reading much of the LessWrong material and noticing the ubiquity of atheists here, my confidence estimates in the relevant religious questions have declined. In response, I spent a few days searching online for pro-atheist or at least pro-agnostic-versus-Catholic evidence and ended up very disappointed in what Google turned up. It’s an amusing and puzzling experience to be disappointed about one’s own deconversion failure.
I don’t want book recommendations unless they’re damn good. Almost invariably they’re not aimed at me anyway, but at fundamentalist-types: my religious education was actually pretty decent quality, so such books have so far ended up being a waste of time.
Ideally, what I want is somebody who can present a case for atheism adapted for a non-fundy Catholic, take my objections seriously, and not give up for a while until we reach some conclusion. Almost as a separate issue, I’m curious whether it would resolve to a change of mind or halt at incompatible priors.
Any takers?
I’m interested. One question before we go further, though—you describe yourself as ‘a non-fundy Catholic’. Would Pope Benedict XVI agree that you are Catholic, or are you using another definition? (I don’t mean to offend, but I have personal experience with a ‘Catholic’ who doesn’t believe in an afterlife—too much doublethink hurts my brain sometimes.)
It occurs to me that such a person should behave exactly the same way as an atheist (except perhaps when making bets about isomer concentrations in dinosaur fossils). A god who doesn’t treat you differently based on whether you worship him is an irrelevant god!
Isn’t part of the Catholic belief structure that God occasionally grants prayers?
A god wouldn’t necessarily wait until you’re dead to punish or reward your behavior. In the Old Testament, God seems to prefer to provide feedback for the living.
The Catholic God in particular does most of his control through alleged threats after death. Punishment and reward during life appear to be minimal. (This differs vastly from the descriptions of other gods, and even from the beliefs of past, particularly pre-Christian, believers in the same god.)
Catholicism in particular has doctrines that hinge very strongly on the existence of an afterlife. If a person who identifies as Catholic professes not to believe in an afterlife, my confidence that they adhere to other common Catholic doctrines is reduced.
Hence ‘Catholic’ in scare quotes, yes.
That isn’t necessarily true. I might believe in a god whose doctrines are maximally moral (either in the consequentialist sense that living according to them will maximize overall value, or in some deontological sense I don’t entirely understand) but who won’t treat me any differently if I worship him. Such a god is relevant to my behavior, in that what I ought to do given his existence is different from what I ought to do given his nonexistence.
That’s what I said. The fact that he does behave like an atheist is the annoying bit—I knew he went to Catholic school, but actually wasn’t aware he still considered himself Catholic until the topic of my atheism happened to come up.
As far as I can tell, he believes exactly the same things I do about basically everything—yes, even dinosaur fossils. He just considers himself Catholic, and says he ‘believes in God’. If there ever was a more freeloading belief, I haven’t met it.
I have touched on a related subject in another thread.
While I’ve never actually heard the term “Catholic atheist” as I have “Jewish atheist,” it wouldn’t actually be that surprising -- “Catholic,” much more so than the generic “Christian”, is a cultural signifier as well as a purely religious one.
Indeed. In Ireland, especially Northern Ireland, religion is far from the only major difference between the two main groups (Catholics/Nationalists/Irish/people who say “Derry”/people who say “haitch” and Protestants/Unionists/British/people who say “Londonderry”/people who say “aitch”, for lack of any completely satisfactory one-word labels for the groups.)
(My spell checker is clearly Protestant, as it flags “haitch” as incorrect.)
I wouldn’t be so weirded out if that were the case, I can understand that. The problem is that it isn’t being used as a cultural signifier—he never goes to Mass, none of his friends are Catholic and he didn’t raise me to be Catholic. (It occurs to me to mention at this point that the person in question is my father.)
My ‘atheist coming-out’ was a deeply strange conversation, not least because I wasn’t aware I had been in a closet.
Reminds me a bit of my father. My dad has basically said that he doesn’t think there is anything after death, and that what you do in life does not matter- so long as you do not ‘get caught.’ While I cringe at his lack of morals, I do question why he considers himself Catholic. He does not go to church, does not pray, and holds the church in contempt.
I can see that it is not a cultural signifier, so my idea is that he fears creating any problems within the family. Other people in the family might outright ostracize him for openly stating his beliefs without the mandatory “but I’m a Catholic!” added in at the end. Perhaps it is a similar situation? I can’t actually say, since I do not know your father. It’s simply a stab in the dark.
There is the possibility of feeling gratitude, respect, or love for one’s Creator. Your feelings in that regard would make God very relevant.
If there were a Creator of the Universe, and he wasn’t the usual monotheistic Celestial Psychopath, I’d feel some gratitude.
Hi pedanterrific, Yes, the Pope would agree that I’m a Catholic, although that’s hardly an essential feature. For specificity, I’m a practicing Roman Catholic who can recite the Nicene Creed in good faith. PMs/emails preferred.
So, did you get my PM?
Maybe? It depends somewhat on what sort of a case you want made.
If I accept that beliefs are justified insofar as evidence differentially supports them relative to competing beliefs, and I ask whether a belief that a deity exists that has the properties attributed to it by (for example) Catholics is justified, it follows that I should look for evidence differentially supporting that belief. If I don’t find such evidence, I should conclude that such a belief is not justified; if I do find it, I can go on to ask other more detailed questions about that belief.
If you agree with that, and you’re in the position of having looked for such evidence and found it (or found plausible candidates for it), then sure, I might be interested in working that through with you. Who knows, perhaps you’ll convince me as well.
OTOH, if you don’t agree with that, we probably don’t have enough common ground to even get started.
Hi TheOtherDave,
Shortly after I was introduced to LessWrong, a local philosophy meetup that I sporadically attended held a meeting on the topic, “What would it take to convince you of God’s existence?”. Given my background on the other side of the question, I naturally prepared a lengthy list of the sorts of evidences I would look for to convince me that God doesn’t exist. (Sadly, no one else at the meetup seemed interested in an evidential approach and just answered “absolutely nothing” to the original question or maundered on about supposed past lives, so I didn’t get any critiques there.)
Nevertheless, it’s quite plausible I missed some important possible tests or mistook the data, and that’s where a chavruta would fit in.
I’d actually like to back up a step from there, if it’s OK with you.
It seems likely to me that many of the items you list as evidence of the non-existence of the referent of “God” as understood by your form of Catholicism would also be evidence of the non-existence of the referent of “God” as understood by my form of Judaism. (For convenience, I will hereafter refer to those referents as the Christian God and the Jewish God, respectively.)
If that’s true, it creates something of a problem, since while I would agree that seeking evidence for the nonexistence of X and failing to find it constitutes evidence (not proof, but evidence) for the existence of X, if the evidence you’re seeking and failing to find is also evidence for the nonexistence of Y then failing to find it is equally evidence for the existence of Y. So, if the evidence you identified would demonstrate the nonexistence of both the Christian and the Jewish Gods, then failing to find that evidence would be both evidence for the existence of the Christian God and evidence for the existence of the Jewish God.
And the same goes for many other denominations’ Gods.
Which would be fine, if your goal was to explore the existence of some kind of God, who might not be the Christian God… but it doesn’t sound like that’s where you’re coming from.
And if X and Y are mutually exclusive, then the whole thing becomes rather a muddle.
So it seems it’s important to find, not only evidence that supports the existence of the Christian God (such as failing to find evidence of Godlessness) but also evidence that differentially supports that existence, relative to the existence of other Gods (say, one of the Hindu Gods, or the God of some religion neither of us has ever heard of).
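To put the same point a bit more formally (a rough Bayesian sketch, with the symbols introduced only for illustration): write $C$ for “the Christian God exists,” $J$ for “the Jewish God exists,” and $E$ for some observation. $E$ differentially supports $C$ over $J$ only when the likelihood ratio favors $C$:

$$\frac{P(E \mid C)}{P(E \mid J)} > 1.$$

Failing to find evidence of Godlessness is indeed evidence for $C$, in the sense that such a failure is more expected if $C$ is true than if no God exists at all; but since it is just as expected under $J$, the ratio above stays near 1, and nothing has been learned that favors $C$ over $J$.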
Would you agree?
With every word.
OK. So, I return to my earlier statement: if I want to know whether a belief in the Christian god is justified, I should look for evidence differentially supporting that belief. If I don’t find such evidence, I should conclude that such a belief is not justified; if I do find it, I can go on to ask other more detailed questions about that belief.
The obvious next question, then: what evidence differentially supports that belief?
I suggest concluding that beliefs are probabilistic, and strengths of belief are justified or unjustified.
Sure, agreed. Read “a certain confidence level in a belief in” for “a belief in” throughout.
I suspect that one of two things was going on. They may have not really cared to talk about the supernatural but were intending to use that extreme case as a springboard to talk about evidence and belief in general. Alternatively, if they thought the topic as phrased was apt, likely they were not sufficiently deft at dealing with and unpacking unhelpful terms like “God”.
You should have abandoned sharing your list (hard to do after putting effort into it) and discussed, at a philosophical level, why an evidential approach was better than their approaches. If you don’t have a separate long mental list of why it is, then even if it is the right approach, you shouldn’t feel too superior to people using the wrong approach who can’t justify their philosophical approach, because you can’t justify yours either; you just know how to use it.
Eh, I’m fine with analytic philosophy. It seems like an essential toolset. The only sense in which an evidential approach seemed superior to me was that it felt less like cheating. I’ve encountered dozens of definitions of “God”, and it’s easy to pick a definition such that the entity necessarily exists or necessarily doesn’t exist. Doing that and stopping there is cheating, I think, because it’s not the sensus fidelium regarding what and who God is. Plainly Catholicism does use (by habit, not dogma) a small set of definitions of necessarily existing entities, but it’s far from obvious that they are (or can be) the same entity, and quite dubious that those entities have much in common with Yahweh.
As a result of this article, I’m seeking a Chavruta. I’ll report back on the effectiveness of this method.
How did it go?
I’ve had several conversations with my target Chavruta, but I’d have to say it’s not been terribly valuable. He claims to be interested, and even responded favourably when I suggested an exchange of readings, but hasn’t followed through. I suspect this is more a result of the individual than the method.
Thanks for the reminder, I’m going to ring him up and try to rekindle this.
Kahneman and Tversky appear to have worked as a great chavrusa, but I do wonder how rare this is.
http://lesswrong.com/r/discussion/lw/8od/good_interview_with_kahneman_link/
Reading LW (and everything else) with my husband has helped both of us think through things more clearly, I think. It would probably work better for debate/learning purposes if we started out more different from each other, but we fairly often point out flaws in the other’s thinking.
I keep looking for someone like that. It would be nice to have someone to debate with, and would probably motivate me to do more research. Usually, I just discuss new things I hear, or beliefs I hold, with my boyfriend or my roommate.
This sounds like a pretty cool idea.
One issue I find with it is that I’ve been updating my political and other views somewhat frequently. Getting someone with the opposite view would be somewhat odd, and probably most of what we debate would be specific ideas that don’t particularly correlate with each other, so I might need more than three.
Along axes on which I don’t have strong opinions, it would probably be more of a discussion, which would still be cool.
It totally sounds worth a try though.