I wouldn’t think you better off if, instead of believing this, you believed wrongly that it is never appropriate to deviate from standard models.
No, I believe that it is not appropriate to deviate from a model you are trying to learn, before you have mastered it. This also means that it’s inappropriate to critique a training program on the basis that it advocates not deviating from a model; it is, after all, material for people trying to learn that model.
[various confused things]
In order to explain why I think the rest of what you said is wrong and/or tangential, I’d have to take a lot of time to expand out each of your terms and assumptions, and I really don’t want to take the time right now. So, all I’m going to say at this point is that your map of “belief” does not match the territory of the brain’s hardware. Rather, it’s a naive intuition of an idealized non-physical mind, not unlike the intuition that makes humans inclined to believe in things like souls.
IOW, the term “belief” is extremely overloaded. I deliberately have been referring to “models” rather than beliefs, specifically to narrow down the overloading. At minimum, we can divide beliefs into anticipative (Kahneman/Tversky System 1 aka “near”) beliefs and verbal/symbolic (K/T System 2 aka “far”) beliefs. Anticipative beliefs control your actual real-world anticipations, behaviors, and emotional responses, while verbal/symbolic beliefs drive your verbal reasoning, professions of belief, and long-term expectations.
This split explains why one can “not believe in ghosts”, but still be scared in a haunted house, or “believe” that one is just as deserving as anyone else, yet have trouble speaking up in a group.
However, the System 1/System 2 distinction is only the tip of the iceberg with respect to how beliefs and models work in the brain: there are meaningful subdistinctions within both System 1 and System 2, and there are differences in how permeable the systems are (the rate, you might say, at which a belief can “diffuse” through the brain and influence other things).
None of this, AFAICT, is incorporated into your naive model of epistemological contagion.
That being said, I don’t advocate teaching things that would be contagious.
For example, I wouldn’t support NVC’s teaching that violence is just learned and not inherent to human beings; that’s plain stupid. However, the apparent intended function of that belief is to communicate that the expression of violent impulses is modulated by choice and learning, and so I’d need to replace that belief with some other idea that conveys that same point… perhaps in a story or metaphor that conveys the idea implicitly, so as to help push it into students’ System 1 models (where it really needs to be, anyway, if you want people to behave differently, vs. being able to regurgitate things on tests).
None of this, AFAICT, is incorporated into your naive model of epistemological contagion.
I can’t see why you would guess my model excluded it.
I wouldn’t support NVC’s teaching that violence is just learned and not inherent to human beings; that’s plain stupid.
OK, so starting with the foundational belief of NVC, it’s important that learners of NVC not think any of that is true, and not be misled by its association with the sound methods it apparently underlies. I haven’t seen any advocate of NVC say as much, but I haven’t delved into it.
Why should we expect the system to do a good job of accumulating an effective set of methods that’s not easily improvable, if its teachers and practitioners believe these falsities? If they believe them, why are we confident they haven’t made erroneous, harmful extrapolations based on the theory being true?
Isn’t it likely that the leading academics (with studies, experiments, peer review, and decades of teaching experience) could separate what’s effective for teaching from what’s not, as well as what’s true from untrue, when those have been their dual goals? While their system is optimized for slightly different goals than NVC, HNP includes NVC’s goals, so asking HNP to compete against NVC by NVC’s goal criteria is like challenging a world-class triathlon champion to a swimming race against a guy who has never actually timed himself, but gets around pretty quickly at the YMCA pool, if he does say so himself, and that’s all he does all day. More accurately, it’s like asking both for advice on swimming rather than having them race.
OK, so starting with the foundational belief of NVC, it’s important that learners of NVC not think any of that is true, and not be misled by its association with the sound methods it apparently underlies. I haven’t seen any advocate of NVC say as much, but I haven’t delved into it.
Er… I think you misunderstand me. Most people don’t give a flying football whether that statement is true or not. The functional purpose of the statement is (IMO) to encourage people to rethink an existing bias to assume that certain classes of communication are normal, natural, expected, and/or the only available option.
So, the statement serves a functional purpose, and if it’s thrown out, it needs to be replaced with something else. I am not saying that people should be taught to consider it untrue, and I doubt that the NVC folks do so. I’m just saying that if I were to teach NVC, I would ideally replace that statement with something that was both more true and more useful.
All I’m trying to say here is that it’s silly for a rationalist (whose goal is to acquire skill in a given field) to discard a set of methods from serious consideration or study, simply on the basis of obviously-wrong and obviously-stupid theories. (If Richard Wiseman had done that, we wouldn’t have luck research, for example.)
Why should we expect the system to do a good job of accumulating an effective set of methods that’s not easily improvable, if its teachers and practitioners believe these falsities? If they believe them, why are we confident they haven’t made erroneous, harmful extrapolations based on the theory being true?
For the same reasons we expect candlemakers to be able to make candles, even when they believed phlogiston existed. And that is because, generally speaking, theories follow successful practice of some kind.
For example, Anton Mesmer noticed that if he did certain things, he could get people to behave in odd ways. He then made up a nonsense theory (“animal magnetism”) to explain this peculiarity. The practice of hypnotism still exists today, despite a near-complete absence of an epistemically-sound theory for its method of operation.
Theories preceding practices are exceedingly rare, because people don’t usually make up their theories out of nothing; generally, they make them up to explain their observations. And it is these observations that a rationalist should concern themselves with, rather than the theories that were made up to explain them.
While their system is optimized for slightly different goals than NVC, HNP includes NVC’s goals
HNP includes the goal of becoming a more compassionate person?
[Other HNP vs. NVC stuff]
I think you’re still mistaking me for an advocate of NVC, or someone trying to compare these two sets of practices. My sole purpose in this thread is to correct the all-too-common misperception that rationalists should discard bodies of practical knowledge that are packaged with verbal falsities. Such an attitude is poisonous to progress, since it needlessly discards quite a lot of otherwise perfectly-usable evidence and observations.
Most people don’t give a flying football whether that statement is true or not.
I don’t directly care whether students care about it or not; I care about the belief’s effect regardless, and I am concerned.
Theories preceding practices are exceedingly rare,
Fortunately, non-stupid theories following practices abound, though they are obviously not universal.
[I]t’s silly...to discard a set of methods from serious consideration or study, simply on the basis of obviously-wrong and obviously-stupid theories.
It’s a good thing I didn’t discard them, then; instead, I qualified my skepticism based on my familiarity with NVC. You criticize me for downgrading my estimate of the likelihood that value is worth extracting from NVC after I read their silly theories. (Huge piles of ore abound around a mining town near where I grew up; one could easily acquire millions of dollars’ worth of silver, though only extract it at a refining cost of at least twice that. Hence, the silver ore is worthless.) If they had said brilliant things, independently derived but in accord with the latest and greatest science, would I have been right to upgrade NVC’s predicted utility in my mind?