Very interesting, but I do have one concern: this setup does nothing to prevent rationalization. Debates and arguments can facilitate learning in certain situations, but they don’t help you notice when you are rationalizing instead of updating. Too much exposure to a learning style that emphasizes clever arguing over evenhandedness may end up being epistemically unhealthy in the long run.
See also: Against Devil’s Advocacy
And indeed, the example given is its use in defending Judaism. This should raise red flags!
I’ve recently had a couple of conversations with someone who does this a lot (in the context of Judaism). He appears to be quite smart and instrumentally rational in general, but his epistemology is so horrible as to make communication about theory selection just about impossible. The worst part is that his epistemology is heavily fortified, and there ain’t nothin’ you can do to talk some sense in. And I’m talking about non-religious topics. It seems to be at least another level past that of the average Christian.
The system is very rarely used to defend Judaism per se. Chavrutas will only very rarely debate or argue over those fundamental premises.
As to your friend- I’m pretty sure he’s the exception rather than the rule. Having interacted with a large number of Orthodox Jews (both when I was Orthodox and after), they aren’t any better at apologetics than the average Christian. Epistemological issues exist, but they exist as they do in essentially all religious frameworks (I’m actually beginning to think that common epistemological flaws are one of the unifying features of religion). Judaism has some epistemological problems that many forms of Christianity do not have, such as heavy emphasis on tradition and ancestral belief as extremely strong valid evidence, but these problems seem to be divorced from the chavruta system.
Have you tried asking him to do an exchange where, in one conversation, you do your best to adopt his epistemology, and in another, he does his best to adopt yours? Agree to briefly agree, and trade off who agrees with whom.
In conversations with other people, that’s an awesome technique. In this case, I’d be very surprised if it’d work. He’s a competent hypnotist, so he knows all about getting people to imagine that they believe different things… and then forget that they’re imagining. He wouldn’t “fall” for that one.
How does hypnotism even work?
As Peterdjones said, a lot of it is metasuggestion. However, that is not the whole story, and there are ways to get the same effects without using the word “hypnosis”.
On top of that, most of the interesting stuff just got buried under the label of “suggestion”.
“Hypnosis” is a horribly vague word, but it’s basically all about learning how to talk to the different parts of the brain and engineering what you say so as to get the different parts to respond the way you want. “Engineered placebo”, perhaps.
What kind of answer are you looking for? I can’t explain the whole lot in a short comment, but I’d consider writing a post on what researching hypnosis has taught me, if there’s interest.
interested hand is raised
a) It has been suggested to people, by culture in general, that there is such a thing as hypnosis, which works in such-and-such a way.
b) This existing suggestion can be used to suggest to people that they become more suggestible than they are already.
In short:
c) it’s suggestion all the way down.
Suppose I cast a magic spell to make magic start working for everyone in the universe, including myself retroactively so that I can cast the spell.
I would not expect magic to start working.
Conclusion: there’s something real happening in the vicinity of the referent of “suggestion”.
I’m not saying suggestion is nothing. I am saying the level of suggestibility is never zero.
It’s hard to fake, too; the reason it works so well is that the people who won’t be hypnotized are removed early in the process. The only way I’ve ever seen someone successfully mess with a good hypnotist is when my friend hypnotized himself first, so that he would be able to break the hypnotism in the middle of the hypnotist’s act with the help of a separate trigger. Then you can really mess with the guy who’s successfully making everyone suggestible.
This is way more hilarious than it has any right to be.
I trust in our ability to think up ways of avoiding this. The same way Jiu Jitsu, a lethal, murderous martial art, was redeveloped into Judo, a far less lethal version, I think we can use our sense of rationality in such a way as to develop an art of debate that doesn’t need to rely on the Dark Arts to persuade the audience (and, secretly, the opponent). Note that being a dangerous Judoka takes much more training and effort than becoming a dangerous Jiu Jitsuka, but the end result is much more beautiful to behold, yes? (And causes much less trouble in the long run, especially before the sort of audience that will judge you on how “nobly” you won rather than how quickly, to the point of making even a defeat on the field a victory among the public, Rocky-style).
This includes sparring techniques and training methods: training someone to kill takes very different methods from training them to lawfully win street fights. The wrong training in the wrong circumstances can prove disastrous, such as what happens when you send combat troops to do the job of military police.
I don’t trust anyone’s ability to avoid this pitfall. Rationality skills are often learned on the 5-second level, so it’s paramount that we train ourselves to not instinctively rationalize things. In general, when persuasion takes precedence over truth-seeking, you’re no longer talking about rationality; you’re just talking about how to win debates. I agree that there are white-hat ways to debate and black-hat ways to debate and that public speaking is a valuable skill, but rationality is never about arguing for a particular conclusion. As a result, I think it’s best to keep the two concepts separate and be very mindful of which thinking skills you are using for each activity.
Our ability as a group. See, you yourself are already contributing to averting those problems.
As long as both participants really want truth and are willing to lose, I don’t think this will be a problem. When following a chain of logic by yourself, you are the only one who can notice your rationalizations. Chavrutas might provide extra incentive to rationalize, but for some people, especially many lesswrongers, this will be outweighed by having an extra guard against rationalization.
I’m somewhat skeptical that LessWrong readers are so good at avoiding rationalization that we can engage in activities like this without any adverse cognitive effects. It’s not like non-LWers are running Windows ME and we are running Linux—we all share a similar cognitive architecture, and we are all prone to rationalization. Being aware of this fact is not enough to prevent it, just like seeing an optical illusion does not make it go away. It takes a conscious effort to think rationally, and I think we should be focusing our efforts on developing epistemic rigor rather than engaging in something potentially poisonous.
Furthermore, while having a partner in a calm, reasoned discussion will help you catch yourself in the act of rationalizing, having a partner whose purpose is to argue with you probably won’t.
I have a very different impression of the mental consequences of debating than you do; I don’t think that debating always provides a very strong incentive to rationalize—it depends on context, the relationship between the participants, and so on. Am I incorrect?
Well, I can tell you about my own experience: I participated in organized debate in high school and at university for a total of 6 years. After a while, rationalization came naturally, and I couldn’t tell the difference between rationalization and non-rationalization. In early 2011 I started re-reading the Core Sequences (I read them for the first time in mid-2010), and some of the posts on rationalization really “clicked” the second time around. I gradually realized that I had to make a deliberate effort to not rationalize, and I tasked myself with double-checking as many of my own thoughts as possible. Since then I’ve improved somewhat, and I can sometimes catch myself in the act of rationalizing. But I’ve got a long way to go, and I think that’s partly a result of training myself to accept a randomly assigned conclusion and manufacture arguments to match.
I agree that some kinds of debates do discourage rationalization, but I’m worried that as soon as you give up your bottom line, you put yourself at a great deal of epistemic risk.
The kind of debate one would use with a rationalist chavruta should be defending a point that you actually believe using the reasons that actually cause you to believe it. Maybe the word ‘debate’ has the wrong connotations, especially in the context of debate clubs, which are epistemically horrible, but I think humans can try to convince each other of things without rationalizing.
I completely agree. Though there would still be rationalization for social reasons, as JoshuaZ points out below, if both partners were being completely honest with each other there would be significantly less cause for concern.
I’m not quite sure what you mean here. Do you think that many LWers could maintain a high enough level of honesty for this to work? I think that we could, though some people are more competitive and less truth-seeking in these types of scenarios, so I wouldn’t recommend it for everyone.
As long as there is nothing incentivizing people to be anything other than completely honest (like there is in an organized debate) then I’m much less concerned (but not entirely unconcerned). And I agree that not everyone is capable of being entirely honest.
If we can’t be entirely honest, then what are we doing here?
We don’t always know our true rejections; sometimes we even invent rejections during the debate. This is a bias that should be reduced as much as possible, but it can’t always be eliminated.
I participated in organized debate for some time and was quite good at it. I am not convinced that I learned anything other than sophistry and selective reasoning from the experience. Worse, I’m not convinced that that wasn’t the point.
If at least some of the chavruta arguments are public, then the odds go up that weak arguments will be recognized for what they are.
“Jews and Gentiles” puts the dividing line in the wrong place. I’m Jewish, but I know little about how Talmud is studied. I’d never heard of chavrutas till your post.
… Yup, that basically sums up my perception of it, superficially at least.