I would love to read a top-level post comparing the major differences and similarities between academic negotiation theory and NVC, and explaining why, where the two differ, negotiation theory’s approaches are better than their NVC alternatives.
The short answer for now is to apply a general principle. Whenever one sees a lauded and apparently efficacious set of beliefs being applied, and those beliefs are extremist, and they are unjustified (not necessarily unjustifiable) or poorly justified, appealing to something other than good evidence (such as their effectiveness or persistence), one should suspect that people are in fact biased against following the recommended practices, such that partial or even full adherence to the beliefs is better than total cluelessness, and the wrong belief system ameliorates those biases.
E.g., if I see people saying silly things like “violence never solves anything” or “violence is never the best way to fulfill your goals”, I ought to suspect that the actual truth is that people are biased toward concluding that they ought to use violence both when it is appropriate and when it is not, so such a dogmatic belief is helpful even if false. If so, such a belief will be particularly helpful to moderates who take it seriously but not as gospel, and who remain open to acting on their instincts when those instincts are insistently clamoring. People who are truly faithful to the system may or may not be worse off than those who never heard of it, because insofar as adoption of the system is based on its usefulness when applied irregularly, it isn’t necessarily beneficial when applied universally.
As far as this system in particular is concerned, you can browse the web for information. The following are the first words on the home page of the Center for NVC:
Nonviolent Communication (NVC) is based on the principles of nonviolence—the natural state of compassion when no violence is present in the heart.
NVC begins by assuming that we are all compassionate by nature and that violent strategies—whether verbal or physical—are learned behaviors taught and supported by the prevailing culture. NVC also assumes that we all share the same, basic human needs, and that each of our actions are a strategy to meet one or more of these needs.
People who practice NVC have found greater authenticity in their communication, increased understanding, deepening connection and conflict resolution.
Only three things come to mind when I think of how a conclusion can fail to be correct. 1) It can be based on false premises, 2) it can be based on flawed reasoning, and 3) it can fail to actually mean anything at all and be “not even wrong”.
The first statement arguably passes the third test, though I think it is an application of the “appeal to nature” fallacy (note: not the “naturalistic fallacy”) and happens to rest on a false premise, so it fails tests 1) and 2).
The second paragraph clarifies that our interpretation of the first was correct, and adds some assumptions that either are false and fail the first test, or strain the meanings of the words used and thereby flirt with violating the third test.
The third statement is unscientific in a number of ways, but I don’t really hold it against them; it’s good enough for a webpage. However, it fails to support a specific conclusion, and thus invites the reader to construct the strongest pro-NVC argument he or she fails to see a flaw in. Compare “Churches were the biggest patrons of art and culture in medieval Europe and are responsible for (insert good thing here)”, which is potentially a true statement, but is often inserted among pro-religious arguments as if it were the basis of a sound pro-religious argument.
To me the theories behind NVC look like tricks to get people to buy into extreme propositions that act as counterweights to flawed biases, approaches, and methods many people in fact have. The theories don’t look true and the advice seems merely useful for most people to hear rather than actually true.
To me the theories behind NVC look like tricks to get people to buy into extreme propositions that act as counterweights to flawed biases, approaches, and methods many people in fact have. The theories don’t look true and the advice seems merely useful for most people to hear rather than actually true.
Given Sturgeon’s law, “merely useful” is pretty high praise.
In any field of endeavor where your goal is to convince someone to change their behavior, it is a given that you must provide a suitable “theory” to be a behavioral mnemonic and/or intuition pump. (Because in order to get a person to act, you must provide them with an intuitive perception that taking (or refraining from) the prescribed actions will produce the result they want.)
It is also a given that most such intuition pumps will contain things that are, to some degree, false or wrong, simply because all models are wrong, and convincing models especially so. (Since they have to fit humans’ pre-existing biases and cognitive capacities.)
And, if you try to fix that wrongness by being more detailed and more nuanced, you will gradually begin to lose your audience, which, for the most part, really doesn’t care!
And, even if they do care, they are soon no longer able to grasp the essence of your model, due to the number of bits of information you’re expecting them to internalize.
Therefore, a rationalist who wants to obtain useful information needs to set a higher bar for rejecting sources of information on the grounds of their epistemic hygiene, and focus instead on the predictions made by a model.
For example: the “law of attraction” model is complete and utter bunk.
And yet, if you discard the theory, and examine instead what specific behaviors its advocates say people should engage in, and what specific results they predict will occur, you will in fact find that, well, the prescriptions and predictions are actually kind of right. (See Wiseman’s “luck research”, which provides far more plausible explanations for how those phenomena actually occur.)
So: the theory is bunk, and yet results are produced… just like candles still burned when everybody still thought “phlogiston” was a thing.
Now, note that I’m not saying that NVC has been empirically verified. I know next to nothing about it other than tidbits I’ve heard about some of the skills—tidbits which I have put to practical use.
What I am saying, however, is that the theories provided by any system of self-improvement, communication, etc. should be taken with a grain of salt, because the practical value of those theories is to provide intuitive understanding and motivation for someone learning to apply the practical knowledge involved.
So, it is these specific, detailed behavioral recommendations and result predictions that one should examine when evaluating a practical body of knowledge.
Because, either those behaviors produce the result, or they don’t. And if you desire the result, the theory part is completely and utterly irrelevant: all that matters is whether the result is produced or not.
You can, if you wish, always invent a replacement intuition pump—perhaps even making it so silly that you know you won’t be compromised by believing it (see e.g. the Flying Spaghetti Monster), or perhaps carrying out your own groundbreaking scientific research to show why/how the crazy thing under study actually works (like Richard Wiseman).
But if you set your standards for theory so high as to require an academic level of precision, you’re automatically cutting yourself off from vast amounts of useful knowledge, and substituting knowledge that usually isn’t optimized for actually doing anything.
Because, either those behaviors produce the result, or they don’t. And if you desire the result, the theory part is completely and utterly irrelevant: all that matters is whether the result is produced or not.
That doesn’t address the concern that more diligent adherence to the specific behavioral recommendations produces worse results when the recommendations overshoot how much one should correct a bias in one’s natural tendencies. That’s part of why I followed “merely useful” with “for most people”. If something is untrue but useful to many and harmless to few or none, that’s one thing, but this is untrue and harmful to many, and we can do better.
It may be too complicated for some people to say “Violence is appropriate .01% of the time; you think it’s appropriate .03% of the time; until you learn to distinguish among the cases in which you think you should use violence, there is a 2⁄3 chance you are wrong whenever you think you should use it”. Fine. Tell those people “Violence is never the answer”.
LW people can handle that many bits of information, and will see right through some lies that are useful to those dull enough to believe them. We can even aspire to distinguish among cases in which we intuit that the best course of action is violence, something those who tell themselves the useful lie cannot. (Obviously, I made these numbers up to make the point; the sketch below just checks that they hang together.)
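A quick check of those made-up rates (my own sketch; the percentages are the hypotheticals from this comment, not data):

```python
# If violence is actually appropriate in .01% of cases but you judge it
# appropriate in .03% of cases, then even in the best case -- where every
# truly appropriate case lands inside the set you flag -- at most 1/3 of
# your "use violence" judgments can be right.
actually_appropriate = 0.0001  # .01% of cases (hypothetical)
judged_appropriate = 0.0003    # .03% of cases (hypothetical)

p_right = actually_appropriate / judged_appropriate  # at most 1/3
p_wrong = 1 - p_right                                # at least 2/3
print(f"P(wrong | you judge violence appropriate) >= {p_wrong:.2f}")  # 0.67
```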
What I am saying, however, is that the theories provided by any system of self-improvement, communication, etc. should be taken with a grain of salt, because the practical value of those theories is to provide intuitive understanding and motivation for someone learning to apply the practical knowledge involved.
Having sound theory is important so one knows when to deviate from standard practices. Here’s a rule: don’t commit fouls in basketball. Opponent has possession and is up by one point with 20 seconds left on the shot clock and in the game? Meh. Here’s a rule: use a goalie and not a sixth skater in ice hockey. Delayed penalty call against the opponent? Meh. I could go on.
Having sound theory is important so one knows when to deviate from standard practices.
Not necessarily. One reason simple linear models do better than human experts is that the humans are too eager to abandon their standard model in favor of something that seems like a good reason to deviate. (The toy sketch after this comment illustrates the effect.)
(And this is an important reason why you don’t teach a beginner advanced strategies until they’ve mastered the basics first—you want their brain to pave a really broad route to the basics, and relatively narrow tracks for the advanced bits.)
In any case, calling it a need for “sound theory” is overstating the case. What’s needed is a model that provides accurate predictions. The model itself need not be true or sound—see again the “law of attraction” as an example.
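A toy simulation of that claim (entirely illustrative; the cue weights, deviation rate, and noise levels are my assumptions, not figures from the judgment literature):

```python
import random

random.seed(0)

# A fixed-weight linear model vs. an "expert" who knows the same weights
# but deviates from them 20% of the time because a case "seems special".
TRUE_W1, TRUE_W2 = 0.6, 0.4

def make_case():
    c1, c2 = random.gauss(0, 1), random.gauss(0, 1)
    outcome = TRUE_W1 * c1 + TRUE_W2 * c2 + random.gauss(0, 1)  # irreducible noise
    return c1, c2, outcome

def model(c1, c2):
    return TRUE_W1 * c1 + TRUE_W2 * c2  # always sticks to its weights

def expert(c1, c2):
    prediction = TRUE_W1 * c1 + TRUE_W2 * c2
    if random.random() < 0.2:             # "this seems like a good reason to deviate"
        prediction += random.gauss(0, 2)  # but the deviation is, on average, noise
    return prediction

cases = [make_case() for _ in range(100_000)]

def mse(predict):
    return sum((predict(c1, c2) - y) ** 2 for c1, c2, y in cases) / len(cases)

print(f"model MSE:  {mse(model):.2f}")   # ~1.0 (just the irreducible noise)
print(f"expert MSE: {mse(expert):.2f}")  # ~1.8 (the deviations add error)
```

The point is not that deviating is never right; it is that deviations triggered by “this seems like a good reason” add noise on average, which is all the linear-models result requires.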
humans are too eager to abandon their standard model
You seem to imply that it is sometimes, though not always, appropriate to deviate from a standard model—I agree—but that people tend to do it too much—I agree.
I wouldn’t think you better off if instead of believing this, you believed wrongly that it is never appropriate to deviate from standard models. Note how, in arguing for the utility of false beliefs, you refer back to reality as we both believe it to be, in which there are nuances and exceptions. If it so happens that you correctly believe you aren’t good at determining when to depart from standard models, you can do well and in practice never deviate from them, all without the burden of the false belief that it is never correct to do so.
Like any false belief, that would risk spreading epistemological contagion as other beliefs get entangled with it, and you can’t avoid this by labeling it false: “If there were a verb meaning ‘to believe falsely,’ it would not have any significant first person, present indicative.” —Wittgenstein
I agree that false beliefs can be useful, but this seems somewhat analogous to the fact that the most correct actor is the one who has priors (somewhat miraculously) corresponding to the truth and holds them with confidence 1, or the fact that to get the most points after a touchdown one would always have to “go for two” rather than kicking for an extra point (see the expected-value check after this comment). I’m wary of accepting (or teaching) false beliefs even when they are apparently useful, for the same reasons Eliezer stated in Protected from Myself. I know that to believe/teach false things is sometimes best, but I also know that I overestimate when it is best, so I avoid doing it, and I didn’t have to falsely believe “teaching false things is never best” to avoid doing it and protect me from myself!
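To unpack that football aside with assumed (not sourced) success rates: “going for two” maximizes the best possible score, but not the expected score, and the expected score is what a sensible policy optimizes:

```python
# Assumed success rates -- illustrative, not league statistics.
p_extra_point = 0.95  # kick succeeds
p_two_point = 0.45    # two-point conversion succeeds

ev_kick = 1 * p_extra_point  # 0.95 expected points
ev_two = 2 * p_two_point     # 0.90 expected points

# "Go for two" wins only on the maximum possible score (2 > 1); by expected
# points, the kick is better under these assumptions. Likewise, "priors equal
# to the truth, held with confidence 1" describes the best possible epistemic
# outcome, not a policy anyone can follow.
print(f"EV(kick) = {ev_kick:.2f}, EV(two) = {ev_two:.2f}")
```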
I wouldn’t think you better off if instead of believing this, you believed wrongly that it is never appropriate to deviate from standard models.
No, I believe that it is not appropriate to deviate from a model you are trying to learn, before you have mastered it. This also means that it’s inappropriate to critique a training program on the basis that it advocates not deviating from a model; it is, after all, material for people trying to learn that model.
[various confused things]
In order to explain why I think the rest of what you said is wrong and/or tangential, I’d have to take a lot of time to expand out each of your terms and assumptions, and I really don’t want to take the time right now. So, all I’m going to say at this point is that your map of “belief” does not match the territory of the brain’s hardware. Rather, it’s a naive intuition of an idealized non-physical mind, not unlike the intuition that makes humans inclined to believe in things like souls.
IOW, the term “belief” is extremely overloaded. I deliberately have been referring to “models” rather than beliefs, specifically to narrow down the overloading. At minimum, we can divide beliefs into anticipative (Kahneman/Tversky System 1 aka “near”) beliefs and verbal/symbolic (K/T System 2 aka “far”) beliefs. Anticipative beliefs control your actual real-world anticipations, behaviors, and emotional responses, while verbal/symbolic beliefs drive your verbal reasoning, professions of belief, and long-term expectations.
This split explains why one can “not believe in ghosts”, but still be scared in a haunted house, or “believe” that one is just as deserving as anyone else, yet have trouble speaking up in a group.
However, the system 1/system 2 distinction is only the tip of the iceberg with respect to how beliefs and models work in the brain—there are meaningful subdistinctions within both system 1 and system 2, and there are differences in how permeable the systems are—the rate, you might say, at which a belief can “diffuse” through the brain and influence other things.
None of this, AFAICT, is incorporated into your naive model of epistemological contagion.
That being said, I don’t advocate teaching things that would be contagious.
For example, I wouldn’t support NVC’s teaching that violence is just learned and not inherent to human beings; that’s plain stupid. However, the apparent intended function of that belief is to communicate that the expression of violent impulses is modulated by choice and learning, and so I’d need to replace that belief with some other idea that conveys that same point… perhaps in a story or metaphor that conveys the idea implicitly, so as to help push it into students’ System 1 models (where it really needs to be, anyway, if you want people to behave differently, vs. being able to regurgitate things on tests).
None of this, AFAICT, is incorporated into your naive model of epistemological contagion.
I can’t see why you would guess my model excluded it.
I wouldn’t support NVC’s teaching that violence is just learned and not inherent to human beings; that’s plain stupid.
OK, so starting with the foundational belief of NVC, it’s important that learners of NVC not think any of that is true, and not be misled by its association with the sound methods it apparently underlies. I haven’t seen any advocate of NVC say as much, but I haven’t delved into it.
Why should we expect the system to have done a good job of accumulating an effective, not easily improvable set of methods, if its teachers and practitioners believe these falsities? If they believe them, why are we confident they haven’t made erroneous, harmful extrapolations based on the theory being true?
Isn’t it likely that the leading academics (with studies, experiments, peer review, and decades of teaching experience) could separate what’s effective for teaching from what’s not, as well as what’s true from what’s untrue, when those have been their dual goals? While their system is optimized for slightly different goals than NVC’s, HNP (the Harvard Negotiation Project) includes NVC’s goals, so asking HNP to compete against NVC by NVC’s goal criteria is like challenging a world-class triathlon champion to a swimming race against a guy who has never actually timed himself, but gets around pretty quickly at the YMCA pool, if he does say so himself, and that’s all he does all day. More accurately, it’s like asking both for advice on swimming rather than having them race.
OK, so starting with the foundational belief of NVC, it’s important that learners of NVC not think any of that is true, and not be misled by its association with the sound methods it apparently underlies. I haven’t seen any advocate of NVC say as much, but I haven’t delved into it.
Er… I think you misunderstand me. Most people don’t give a flying football whether that statement is true or not. The functional purpose of the statement is (IMO) to encourage people to rethink an existing bias to assume that certain classes of communication are normal, natural, expected, and/or the only available option.
So, the statement serves a functional purpose, and if it’s thrown out, it needs to be replaced with something else. I am not saying that people should be taught to consider it untrue, and I doubt that the NVC folks do so. I’m just saying that if I were to teach NVC, I would ideally replace that statement with something that was both more true and more useful.
All I’m trying to say here is that it’s silly for a rationalist (whose goal is to acquire skill in a given field) to discard a set of methods from serious consideration or study, simply on the basis of obviously-wrong and obviously-stupid theories. (If Richard Wiseman had done that, we wouldn’t have luck research, for example.)
Why should we expect the system to have done a good job of accumulating an effective, not easily improvable set of methods, if its teachers and practitioners believe these falsities? If they believe them, why are we confident they haven’t made erroneous, harmful extrapolations based on the theory being true?
For the same reasons we expect candlemakers to be able to make candles, even when they believed phlogiston existed. And that is because, generally speaking, theories follow successful practice of some kind.
For example, Anton Mesmer noticed that if he did certain things, he could get people to behave in odd ways. He then made up a nonsense theory (“animal magnetism”) to explain this peculiarity. The practice of hypnotism still exists today, despite a near-complete absence of an epistemically-sound theory for its method of operation.
Theories preceding practices are exceedingly rare, because people don’t usually make up their theories out of nothing; generally, they make them up to explain their observations. And it is these observations that a rationalist should concern themselves with, rather than the theories that were made up to explain them.
While their system is optimized for slightly different goals than NVC’s, HNP includes NVC’s goals
HNP includes the goal of becoming a more compassionate person?
[Other HNP vs. NVC stuff]
I think you’re still mistaking me for an advocate of NVC, or for someone trying to compare these two sets of practices. My sole purpose in this thread is to correct the all-too-common misperception that rationalists should discard bodies of practical knowledge that are packaged with verbal falsities. Such an attitude is poisonous to progress, since it needlessly discards quite a lot of otherwise perfectly usable evidence and observations.
Most people don’t give a flying football whether that statement is true or not.
I don’t directly care whether they care about it or not; I care about the belief’s effect on them regardless, and I am concerned.
Theories preceding practices are exceedingly rare,
Fortunately, non-stupid theories following practices abound, though they are obviously not universal.
[I]t’s silly...to discard a set of methods from serious consideration or study, simply on the basis of obviously-wrong and obviously-stupid theories.
It’s a good thing I didn’t discard them, then; I instead qualified my skepticism based on my familiarity with the system. You criticize me for downgrading, after I read their silly theories, my estimate of how likely it is that the value in NVC is worth extracting. (Huge piles of ore abound around a mining town near where I grew up; one could easily acquire millions of dollars’ worth of silver from them, though only extract it at a refining cost of at least twice that. Hence, the silver ore is worthless.) If they had said brilliant things, independently derived but in accord with the latest and greatest science, would I have been right to upgrade NVC’s predicted utility in my mind?
But if you set your standards for theory so high as to require an academic level of precision, you’re automatically cutting yourself off from vast amounts of useful knowledge, and substituting knowledge that usually isn’t optimized for actually doing anything.

I think this would make a really valuable top-level post.
Seconded.