Thus, they can be more impulsive while maintaining the same beliefs about risk.
I’ll unpack that…
Thus, they can be overconfident while maintaining the same beliefs about risk.
Being impulsive is being overconfident, impulsive is a lack of estimating risk, which is underestimating risk.
Being impulsive is being overconfident, impulsive is a lack of estimating risk
I think we’re looking at different dictionaries, so I’ll abandon the word impulsive and try with a more object-level phrase. They can drive less carefully while maintaining the same beliefs about risk.
I think we’re looking at different dictionaries, so I’ll abandon the word impulsive and try with a more object-level phrase.
Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has −1.
anyways...
They can drive less carefully while maintaining the same beliefs about risk.
and if those same beliefs are already an underestimation of risk? strike 1, just clipped the outside of the plate.
Let’s unpack that last quote in context of driving… a-yawn-gain.
they can drive less carefully.
Less carefully, is about less care—what is “care”, that’s about
Care = Feel concern or interest; attach importance to something: “they don’t care about human life”. (dictionary.com)
So they feel less concern, they attach less importance to driving. What’s the key word there, hmmm?
“Less” well that’s a term, in context that goes with “under”-estimate. Do you think? I do. Strike 2 - straight up the middle of the plate. Batter says, I didn’t see that. Too bad says the ref.
Let’s examine the opposite side, to include a process for minimising disconfirmation bias.
They drive less carefully.
Ok, I’m flipping my brain. The less carefully has nothing to do with underestimating risk, actually in this flip it’s about overestimating risk… why do I say overestimate—well apparently that’s part of the argument opposing my viewpoint, check above.
Well what’s the dictionary say about what “Over estimate” means
o·ver·es·ti·mate/ˌōvərˈestəˌmāt/
Verb: Estimate (something) to be better, larger, or more important than it really is.
(dictionary.com)
hang on, hang on—overestimate = estimate something to be more important than it really is.
Does overestimate sound at all like “less care”?
No it doesn’t, contradiction found, conclusion is Driving less carefully is about underestimating risk.
Strike 3. Yer outta here!
Now, here’s a thing. When the teenagers judge the reward highly, sufficiently highly to outweigh the risk of death—they have underestimated the risk. Perception of reward and risk are not in opposition, they go hand in hand.
Now let’s look at the rest of the sentence.
They can drive less carefully while maintaining the same beliefs about risk.
The implication in context is that it’s reward driving the behaviour, supposedly being the entire reason for it; one significant context of the reward perception was peer involvement (see article). Let’s try that one.
They can drive less carefully with more people in the car, while maintaining the same beliefs about risk, because they perceive the rewards are higher.
That fits the counterargument to my viewpoint… but hang on: now with more people in the car the risk of death is multiplied. So factually the risk has increased, yet the behaviour is supposedly all due to the reward. Now, if the behaviour is truly all to do with the reward, then yep, the teen has discounted the risk, for the risk increased and it’s not changing the behaviour.
So in that situation we’ve got another example where a teen has underestimated the risk due to a perception of a higher reward.
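The multiplication argument above can be made concrete with a small sketch. The crash probability below is a made-up illustrative number, not a real statistic: the point is only that, holding the per-trip crash probability fixed, the expected number of deaths scales linearly with occupants, so unchanged driving behaviour implies the larger risk has been discounted.

```python
def expected_fatalities(crash_prob, occupants):
    """Expected deaths per trip, assuming every occupant is at equal risk."""
    return crash_prob * occupants

# Illustrative numbers only: the same per-trip crash probability,
# once with the driver alone and once with three passengers added.
solo = expected_fatalities(0.001, 1)
full = expected_fatalities(0.001, 4)

# Same beliefs about crash probability, four times the expected harm.
assert full == 4 * solo
```

This is only an expected-value sketch under the stated assumptions; it says nothing about how teens actually weigh reward, only that the risk term itself grows with passengers.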
Am I being too anecdotal for you guys? Of course, discount outgroup behaviour whilst permitting the same ingroup. The article is itself filled with anecdotes… maybe we should just dismiss the entire article… stop press no no, don’t do that there’s no counter to my op then, lets just pick and choose the parts of it that support the counter, dismiss those that don’t—both in the research and the anecdotes.
Please by all means, chuck up the −1, I’m considering them badges of honour now.
I think we’re all well past the point of expecting you to actually read and/or seriously consider anything. However, in case other people are still reading this thread:
Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has −1. anyways...
Ignoring the fact that the parent abandoned a word, not a point: karma never has been, and never ought to be, about deciding the correctness of arguments. Also, the usual litany of objections to people mindlessly invoking karma. Downvoted in accordance with my policy.
Thanks for the link, paper-machine; that’s quite a reasonable policy.
If I wasn’t downvoted to such a degree that I have no opportunity to downvote, I might consider implementing it. I’ll certainly use the concept to more thoroughly mitigate my annoyance about those unable to follow argument.
I’ll upvote you in accordance with my policy.
Which is that if a person says a single useful thing, regardless of the rest of their post, I’ll give it a +1.
My reasoning for this policy is twofold. I reject the negativity that is encouraged by criticism and its aim of proving or showing that someone is wrong, rather than proving oneself right. I accept that when one focuses upon the positive, or worthwhile, components of someone’s beliefs/actions/arguments, one creates a valuable synergy that encourages a pathway towards truth and understanding.
Sometimes I don’t implement my own policy, but hey, it’s all a work in progress.
On reflection the site’s name “lesswrong” really should have set off an alarm bell. I’m not interested particularly in being lesswrong. I am interested in being moreright.
Positive psychology and educational psychology have shown that positivity contributes more readily to learning than negativity.
On reflection the site’s name “lesswrong” really should have set off an alarm bell. I’m not interested particularly in being lesswrong. I am interested in being moreright.
The name is a deliberate choice, and it’s rooted in a belief in the difficulty of being completely right. It seeks to minimize arrogance and maximize doubt. At the start of every post, I try to imagine the ways that I am currently being wrong, and reduce those.
For example, my first reaction to this comment was to pull out my dictionary and argue that my use of “impulsive” was right, because I knew what I meant when I wrote it and could find that meaning in a dictionary. Instead, I decided that it takes two to communicate, and that if you disagreed with the implications of the word, it was the wrong word to choose. So I abandoned the word in an attempt to become less wrong.
Positive psychology and educational psychology have shown that positivity contributes more readily to learning than negativity.
I agree with you that positivity is generally more powerful than negativity; that’s why I try to be positive. Even so, negativity has its uses.
Kahneman wrote, in Thinking, Fast and Slow, that he wrote a book for gossips and critics — rather than a book for movers and shakers — because people are better at identifying other people’s biases than their own. I took this as meaning that his intention was to make his readers better equipped to criticize others’ biases correctly, and thus to ensure that people who wish to avoid being criticized would need to debias themselves.
Presumably, part of the reason that a commenter would avoid making a dictionary argument on LW is if that commenter knows that LWers are unlikely to tolerate dictionary arguments. Teaching people about biases may lead them to be less tolerant of biases in others; and if we seek to avoid doing things that are odious to our fellows, we will be forced to check our own biases before someone else checks them for us.
Knowing about biases can hurt you chiefly if you’re the only one who’s sophisticated about biases and can argue fluently about them. But we should expect that in an environment with a raised sanity waterline, where everyone knows about biases and is prepared to point them out, people will perpetrate less egregious bias than in an environment where they can get away with it socially.
(OTOH, I don’t take this to excuse people saying “Nah nah nah, I caught you in a conjunction fallacy, you’re a poopy stupid head.” We should be intolerant of biased arguments, not of people who make them — so long as they’re learning.)
Good point. I normally don’t like accusing others of bias, and I will continue to try to refrain from doing so when I’m involved in something that looks like a debate, but I agree that it is useful information that should not be discouraged.
Vaniver. Mate. I accept that you believe
It seeks to minimize arrogance and maximize doubt.
but I dispute that it achieves those. I believe instead that it maximises arrogance and maximises doubt in the other’s point of view, and in maximising doubt in the other person’s view we minimise our doubt in our own view.
The belief that it’s difficult to be completely right, encourages people to look for that gap that is “wrong” and then drive a wedge into it and expand it until it’s all that’s being talked about.
If 95% is correct and 5% is wrong, criticising the 5% is a means to hurting the person—they have after all gotten 95% correct. It’s not rational to discount people’s feelings by focusing upon their error and ignoring their correctness. It’s destructive; it breaks people. Sure, some few thrive on that kind of struggle—most don’t; again, this is proven stuff. And I’m not going to post 10 freaking sources on that—all that’s doing for me is wasting my time and providing more opportunity for others to confirm their bias by fighting against it. If someone wants to find that information, it’s out there.
When you (or anyone else) got a high distinction for a unit or assignment or exam, was that a moment to go, fuck—didn’t remember that a pre-ganglionic fibre doesn’t look anything like a post-ganglionic nerve (aka ds9), or was it a moment to leap for joy and go, you little ripper, I got 95%!
I agree negativity has its uses, often it’s about “piss off” and go away, leave me alone; sometimes that’s useful, but you’ll note that those fall on the arrogant side of emotions—that of self. (this will get a wedge driven in it too, heck I could drive one in, but it remains somewhat true).
Vaniver, I’d consider it a positive discussion to talk about negativity. Would you mind explaining to me where “negativity has its uses”?
And to show that I consider the
It seeks to minimize arrogance and maximize doubt.
viewpoint.
Yeh, ok I get that, when we apply the concept to ourselves then we are minimizing our arrogance and maximizing our doubt. And that’ll work. We’ll second-guess ourselves, we’ll edit our posts, and re-edit, and check our dictionaries and quote our sources, and these are all useful things. They keep us honest.
But what about when we apply those concepts to others—as is our tendency due to the self serving bias and the group serving bias?
If 95% is correct and 5% is wrong, criticising the 5% is a means to hurting the person—they have after all gotten 95% correct.
There are many fields in which it is better to not try than to get 5% wrong. Would you go bungee jumping if it had a 5% failure rate?
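The bungee question turns on how a per-attempt failure rate compounds. A quick sketch, assuming independent attempts purely for illustration:

```python
def survival_prob(per_jump_failure, jumps):
    """Probability of never failing across `jumps` independent attempts."""
    return (1 - per_jump_failure) ** jumps

# One jump at a 5% failure rate leaves a 95% chance of walking away...
assert abs(survival_prob(0.05, 1) - 0.95) < 1e-12

# ...but by the 14th jump, the odds of never having failed drop below 50%.
assert survival_prob(0.05, 13) > 0.5 > survival_prob(0.05, 14)
```

The point of the sketch: a "small" 5% error rate is not small in domains where the cost of a single failure is catastrophic and exposure is repeated.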
Vaniver, I’d consider it a positive discussion to talk about negativity. Would you mind explaining to me where “negativity has its uses”?
Mostly in discouraging behavior. As well, an important rationality skill is updating on valuable information from sources you dislike; dealing with negativity in safer circumstances may help people learn to better deal with negativity in less safe circumstances.
Thanks for the post on negativity Vaniver.
I wouldn’t go bungee jumping if it had a 5% failure rate.
Mostly in discouraging behavior...
That viewpoint can be considered as based upon Skinner’s model of behaviourism; it’s been shown to be less effective for learning than being positive.
Makes sense—we tend to remember what we are emotionally engaged in and what is reinforced. When the negativity is associated with the 5%, what is reinforced is that a person is “wrong”; that’s associated with feelings of low self-efficacy and tends to discourage (most) people from the topic. When that happens they regress—not progress; they tend to get even more wrong next time, as they’ve not stayed engaged in the topic.
...As well, an important rationality skill is updating on valuable information from sources you dislike; dealing with negativity in safer circumstances may help people learn to better deal with negativity in less safe circumstances.
I agree that an important skill is to update one’s information; however, the discouragement that is provoked by negativity isn’t efficient in evoking updating. Confident people update their information; people who aren’t attacked have no need to defend, and so they remain open. Openness is the key attitude for updating information. Negativity destroys and/or minimises confidence, which contributes to closing a mind.
What negativity does, in the context of learning, is encourage secrecy, resentment, avoidance and closed-mindedness. Again, this is all known as a consequence of punishment, which is what negativity, as a means of discouraging behaviour, is associated with.
Apparently a more effective way forward is to model the behaviour that one wants to encourage and ignore the behaviour one wants to discourage—extinction.
That viewpoint can be considered as based upon Skinner’s model of behaviourism; it’s been shown to be less effective for learning than being positive.
I agree that saying “Good job putting down that toy” to my 22-month-old is more effective at reducing throwing of his toys than saying “Don’t throw toys.” And extinction works great on tantrums.
But you seem to be overgeneralizing the point a bit. When dealing with competent adults, saying “X is wrong” is an effective way of improving the listener’s beliefs. If the speaker doesn’t justify the assertion, that will and should affect whether the listener changes beliefs.
Of course, this is probably bad management style. We might explain that fact about people-management by invoking psychological bias, power imbalance, or something else. But here, we’re just having a discussion. No one is asserting a right to authority over anyone else.
Without necessarily asserting its truth, this just-so story/parable might help:
For various social reasons, popular kids and nerds have developed very different politeness rules. Popular kids are used to respect, so they accept everything that they hear. As a consequence, they think relatively carefully before saying something, because their experience is that what is said will be taken seriously. By contrast, nerds seldom receive social respect from their peers. Therefore, they seldom take what is said to them to heart. As a consequence, nerds don’t tend to think before they speak, because their experience is that the listener will filter out a fair amount of what is said. In brief, the popular filter at the mouth, the nerds filter at the ear.
This all works fine (more or less) when communicating within type. But you can imagine the problems when a nerd says something mean to a popular, expecting that it will be filtered out. Or a popular says something only vaguely nice, but the nerd removes a negative that isn’t there and hears sincere and deep interest.
TimS, I’m glad we agree on several points: extinction, and positive reinforcement, for children.
I wonder why these methods are espoused for children, yet tend to be used less for “competent adults”.
Thanks for planting the seed that I might be overgeneralizing the point a bit, I’ll keep an eye on that.
I am reminded that saying “X is wrong” to an adult with a belief is ineffective in many circumstances, most notably the circumstance where the belief is a preconception, based in emotion or, more specifically, an irrational belief. Is this not one consequence of bias? That a person, in some cases/topics, won’t update their beliefs and will indeed strengthen their belief in the counterargument against the updating. Presumably you’ve read http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/
Which alludes to how knowledge of bias can be used dismissively, i.e. an irrational use of a rationale.
“Why logical argument has never been successful at changing prejudices, beliefs, emotions or perceptions. Why these things can be changed only through perception.” De Bono, “I am right, you are wrong”.
De Bono discusses this extensively.
If the belief is rational, and perhaps that’s one component of what you consider a “competent adult”, the adult could be more open to updating the fact/knowledge—yet even this situation has a wealth of counter examples, such that there is a term for it—belief perseverance.
In my experience unsolicited advice is rarely accepted regardless of its utility and veracity. Perhaps I communicate with many closed minds, or perhaps I am merely experiencing the availability heuristic in context of our discussion.
When you (or anyone else) got a high distinction for a unit or assignment or exam, was that a moment to go, fuck—didn’t remember that a pre-ganglionic fibre doesn’t look anything like a post-ganglionic nerve (aka ds9), or was it a moment to leap for joy and go, you little ripper, I got 95%!
When you (or anyone else) get a high grade on a paper or assignment or exam, is that a moment to think “Darn- I didn’t remember (single obscure thing you got wrong),” or is it a moment to leap for joy and say “I got a 95! Ahaha!”?
The belief that it’s difficult to be completely right, encourages people to look for that gap that is “wrong” and then drive a wedge into it and expand it until it’s all that’s being talked about.
Sure, if you’re running in debate mode and thinking in terms of ‘sides’ or ‘us versus them’ and trying to ‘win’, then that might be something to do. Solution: don’t do that in the first place.
If 95% is correct and 5% is wrong
Don’t worry, everything you believe is almost certainly wrong—don’t expect to find yourself in the 95% correct state any time soon. We’re running on corrupted hardware in the first place, and nowhere near the end of science. We can reduce hardly any of our high-level concepts to their physical working parts.
But what about when we apply those concepts to others—as is our tendency due to the self serving bias and the group serving bias?
Sure, if you’re running in debate mode and thinking in terms of ‘sides’ or ‘us versus them’ and trying to ‘win’, then that might be something to do. Solution: don’t do that in the first place.
Indeed, a valuable point. So what’s up with the score keeping system of LW then. It encourages thinking in terms of sides and competition. −1, not my side, +1 my side. −1 lost, +1 won.
Don’t worry, everything you believe is almost certainly wrong—don’t expect to find yourself in the 95% correct state any time soon. We’re running on corrupted hardware in the first place, and nowhere near the end of science. We can reduce hardly any of our high-level concepts to their physical working parts.
lol. Fair enough. I would place the 95% not on some unknown scale of what is absolutely true—that science doesn’t yet know, but instead on the relative scale of what science currently knows. Does that make a difference to your point?
First, fix those too.
Yep, tough to become selfless, yet still place enough value upon oneself to not be a doormat. Rudyard Kipling’s “If” shows a pathway.
So what’s up with the score keeping system of LW then. It encourages thinking in terms of sides and competition. −1, not my side, +1 my side. −1 lost, +1 won.
Karma allows users to easily aggregate the community opinion of their comments, and allows busy users to prioritize which comments to read. I try to make more posts like my highly upvoted posts, and fewer posts like my highly downvoted posts. It is common to see discussions where both users are upvoted, or discussions where both users are downvoted. When there’s a large karma split between users, that’s a message from the community that the users are using different modes of discussion, and one is strongly preferred to the other.
Both positive and negative options are necessary so that posts which are loved by half of the users and hated by the other half of the users have a neutral score, rather than a high score. Similarly, posts which are disliked by many users should be different from posts that everyone is indifferent to.
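The split-vote point can be sketched with a toy net-score function (hypothetical numbers; this is not the site's actual scoring code):

```python
def net_score(upvotes, downvotes):
    """Toy karma model: a comment's score is upvotes minus downvotes."""
    return upvotes - downvotes

# A comment half the readers love and half hate nets to zero...
polarizing = net_score(50, 50)
# ...just like a comment nobody voted on at all.
ignored = net_score(0, 0)

assert polarizing == ignored == 0

# Without a downvote option, widespread dislike would be
# indistinguishable from indifference:
assert net_score(10, 40) < 0 == net_score(0, 0)
```

The sketch shows why both vote directions are needed: the net score separates "disliked by many" (negative) from "ignored" (zero), even though a perfectly polarizing comment and an ignored one share a score.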
that are a thousand years ahead of western science.
What was the motivation behind this addition? Was it positive?
that are a thousand years ahead of western science.
What was the motivation behind this addition? Was it positive?
The motivation was to plant a seed… motivated by the +2 on my comment.
In my experience debiasing others who have strongly held opinions is far more effort than it’s worth, a better road seems to be to facilitate them debiasing themselves. Plant the seed and move on, coming back to assess and perhaps water it later on. I don’t try to cut down their tree… as it were.
http://lesswrong.com/lw/7ep/practical_debiasing/5ah1?context=1#5ah1
It is not uncommon to see scientists who have studied Eastern philosophy. Thus, how could Eastern philosophy be a thousand years ahead of science, when it is part of science?
Indeed, a valuable point. So what’s up with the score keeping system of LW then. It encourages thinking in terms of sides and competition. −1, not my side, +1 my side. −1 lost, +1 won.
It’s a hurdle to get past thinking of it in that way for some people, to be sure. It seems a worthwhile cost though, for an easy way to efficiently express approval/disapproval of a comment, combined with automatic hiding of really bad comments from casual readers.
While some people use them that way, voting should not generally be used to mean “I agree” or “I disagree”. The preferred interpretation is “I would like to see [more/fewer] comments like this one” (which may yet include agreement/disagreement, but they should be minor factors as compared to quality).
Parts of the parent comment that are particularly wrong:
Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has −1. anyways...
paper-machine fairly well handled that one in terms of “Rule 1 of karma is you do not talk about karma”. Also, it was not a point that was abandoned, but a word. It is a common technique here to taboo a word whose definition is under dispute, since arguing about definitions is a waste of time.
Since you do not seem to understand, what happened there is that your ‘unpacking’ did not convey what Vaniver’s statement actually was intended to convey, so Vaniver replaced the word ‘impulsive’ with a more object-level description less amenable to misunderstanding.
a-yawn-gain
What’s the key word there, hmmm?
Strike 2 - straight up the middle of the plate. Batter says, I didn’t see that. Too bad says the ref.
hang on, hang on
Strike 3. Yer outta here!
This is not a good way to communicate. If you really don’t see what in this language would make someone take your arguments less seriously, someone could explain.
o·ver·es·ti·mate/ˌōvərˈestəˌmāt/ Verb:
Estimate (something) to be better, larger, or more important than it really is. (dictionary.com)
hang on, hang on—overestimate = estimate something to be more important than it really is.
Perhaps you are not familiar with the study of risk, but the phrase “overestimate risk” means “estimate risk to be larger than it really is”, not “more important”. Either you are too ill-informed about risk analysis to be involved in this conversation, or you are trolling.
Also, appeals to the dictionary are just about the worst thing you can do in a substantive argument. If there is a misunderstanding, then definitions (whether from a dictionary or not) are useful for resolving the misunderstanding. They are really not useful to prove a point about what’s actually occurring.
Am I being too anecdotal for you guys? Of course, discount outgroup behaviour whilst permitting the same ingroup. The article is itself filled with anecdotes… maybe we should just dismiss the entire article… stop press no no, don’t do that there’s no counter to my op then, lets just pick and choose the parts of it that support the counter, dismiss those that don’t—both in the research and the anecdotes.
This is a clear violation of the principle of charity.
And as a general rule, fixing your own bias is good, but accusing others of bias is bad. We must be particularly careful to remember that knowing about biases can hurt people. EDIT: Updating on this comment: it is useful to point out examples of bias in others, but do so in a way that does not score points in a debate, to be sure you’re not fooling yourself.
Please by all means, chuck up the −1, I’m considering them badges of honour now.
You should not. Votes are an indication of whether the readers of this site would like to see more comments like yours. If you’re getting feedback that you’re making comments we wouldn’t like on our site, and you consider that a ‘badge of honor’, then you’re a troll and should actually be banned entirely.
I’ll unpack that… Thus, they can be overconfident while maintaining the same beliefs about risk. Being impulsive is being overconfident, impulsive is a lack of estimating risk, which is underestimating risk.
I think we’re looking at different dictionaries, so I’ll abandon the word impulsive and try with a more object-level phrase. They can drive less carefully while maintaining the same beliefs about risk.
Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has −1. anyways...
and if those same beliefs are already an underestimation of risk? strike 1, just clipped the outside of the plate.
Let’s unpack that last quote in context of driving… a-yawn-gain. they can drive less carefully. Less carefully, is about less care—what is “care”, that’s about
So they feel less concern, they attach less importance to driving. What’s the key word there, hmmm? “Less” well that’s a term, in context that goes with “under”-estimate. Do you think? I do. Strike 2 - straight up the middle of the plate. Batter says, I didn’t see that. Too bad says the ref.
Let’s examine the opposite side, to include a process for minimising disconfirmation bias. They drive less carefully. Ok, I’m flipping my brain. The less carefully has nothing to do with underestimating risk, actually in this flip it’s about overestimating risk… why do I say overestimate—well apparently that’s part of the argument opposing my viewpoint, check above.
Well what’s the dictionary say about what “Over estimate” means
hang on, hang on—overestimate = estimate something to be more important than it really is. Does overestimate sound at all like “less care”? No it doesn’t, contradiction found, conclusion is Driving less carefully is about underestimating risk. Strike 3. Yer outta here!
Now, here’s a thing. When the teenagers judge the reward highly, sufficiently highly to outweigh the risk of death—they have underestimated the risk. Perception of reward and risk are not in opposition, they go hand in hand.
Now let’s look at the rest of the sentence.
The implication in context, is that it’s reward driving the behaviour, supposedly being the entire reason for the behaviour, one significant context of the reward perception was peer involvement (see article). Let’s try that one.
They can drive less carefully with more people in the car, while maintaining the same beliefs about risk, because they perceive the rewards are higher.
That fits the counter argument to my viewpoint… but hang on, now with more people in the car the risk of death is multiplied. So factually the risk has increased—yet the behaviour is supposedly all due to the reward, now if the behaviour is truly all to do with the reward, then yep the teen has discounted the risk—for the risk increased and it’s not changing the behaviour.
So in that situation we’ve got another example where a teen has underestimated the risk due to a perception of a higher reward.
Am I being too anecdotal for you guys? Of course, discount outgroup behaviour whilst permitting the same ingroup. The article is itself filled with anecdotes… maybe we should just dismiss the entire article… stop press no no, don’t do that there’s no counter to my op then, lets just pick and choose the parts of it that support the counter, dismiss those that don’t—both in the research and the anecdotes.
Please by all means, chuck up the −1, I’m considering them badges of honour now.
I think we’re all well past the point of expecting you to actually read and/or seriously consider anything. However, in case other people are still reading this thread:
Ignore the fact that the parent abandoned a word, not a point. Karma never has been, and never ought to be, about deciding the correctness of arguments. Also, the usual litany of objections to people mindlessly invoking karma. Downvoted in accordance with my policy.
thanks for the link paper-machine, that’s quite a reasonable policy.
If I wasn’t downvoted to such a degree that I have no opportunity to downvote, I might consider implementing it. I’ll certainly use the concept to more thoroughly mitigate my annoyance about those unable to follow argument.
I’ll up vote you in accordance with my policy. Which is that if a person says a single useful thing, regardless of the rest of their post, I’ll give it a +1.
My reasoning for this policy is twofold. I reject the negativity that is encouraged by criticism and it’s aim of proving or showing that some one is wrong, rather than proving oneself right. I accept that when one focuses upon the positive, or worthwhile components of someone’s beliefs/actions/arguments one creates a valuable synergy that encourages a pathway towards truth and understanding.
Sometimes I don’t implement my own policy, but hey, it’s all a work in progress.
On reflection the sites name “lesswrong” really should have set off an alarm bell. I’m not interested particularly in being lesswrong. I am interested in being moreright.
Positive psychology and educational psychology have shown that positivity contributes more readily to learning than negativity.
The name is a deliberate choice, and it’s rooted in a belief in the difficulty of being completely right. It seeks to minimize arrogance and maximize doubt. At the start of every post, I try to imagine the ways that I am currently being wrong, and reduce those.
For example, my first reaction to this comment was to pull out my dictionary and argue that my use of “impulsive” was right, because I knew what I meant when I wrote it and could find that meaning in a dictionary. Instead, I decided that it takes two to communicate, and that if you disagreed with the implications of the word, it was the wrong word to choose. So I abandoned the word in an attempt to become less wrong.
I agree with you that positivity is generally more powerful than negativity; that’s why I try to be positive. Even so, negativity has its uses.
Kahneman wrote, in Thinking, Fast and Slow, that he wrote a book for gossips and critics — rather than a book for movers and shakers — because people are better at identifying other people’s biases than their own. I took this as meaning that his intention was to make his readers better equipped to criticize others’ biases correctly; and thus, to make people who wish to avoid being criticized need to debias themselves to accomplish this.
Presumably, part of the reason that a commenter would avoid making a dictionary argument on LW is if that commenter knows that LWers are unlikely to tolerate dictionary arguments. Teaching people about biases may lead them to be less tolerant of biases in others; and if we seek to avoid doing things that are odious to our fellows, we will be forced to check our own biases before someone else checks them for us.
Knowing about biases can hurt you chiefly if you’re the only one who’s sophisticated about biases and can argue fluently about them. But we should expect that in an environment with a raised sanity waterline, where everyone knows about biases and is prepared to point them out, people will perpetrate less egregious bias than in an environment where they can get away with it socially.
(OTOH, I don’t take this to excuse people saying “Nah nah nah, I caught you in a conjunction fallacy, you’re a poopy stupid head.” We should be intolerant of biased arguments, not of people who make them — so long as they’re learning.)
Good point. I normally don’t like accusing others of bias, and I will continue to try to refrain from doing so when I’m involved in something that looks like a debate, but I agree that it is useful information that should not be discouraged.
Vaniver. Mate. I accept that you believe
but I dispute that it achieves those. I believe instead that it maximises arrogance and maximises doubt in the other’s point of view; and in maximising doubt in the other person’s view, we minimise our doubt in our own view.
The belief that it’s difficult to be completely right encourages people to look for that gap that is “wrong”, and then drive a wedge into it and expand it until it’s all that’s being talked about.
If 95% is correct and 5% is wrong, criticising the 5% is a means of hurting the person; they have, after all, gotten 95% correct. It’s not rational to discount people’s feelings by focusing upon their error and ignoring their correctness. It’s destructive; it breaks people. Sure, some few thrive on that kind of struggle, but most don’t; again, this is proven stuff. And I’m not going to post ten freaking sources on that; all that does is waste my time and provide more opportunity for others to confirm their bias by fighting against it. If someone wants to find that information, it’s out there.
When you (or anyone else) got a high distinction for a unit or assignment or exam, was that a moment to go, “Damn, didn’t remember that a preganglionic fibre doesn’t look anything like a postganglionic nerve (aka ds9)”, or was it a moment to leap for joy and go, “You little ripper, I got 95%!”?
I agree negativity has its uses; often it’s about “piss off”, go away, leave me alone. Sometimes that’s useful, but you’ll note that those fall on the arrogant side of emotions, that of self. (This will get a wedge driven into it too; heck, I could drive one in, but it remains somewhat true.)
Vaniver, I’d consider it a positive discussion to talk about negativity. Would you mind explaining to me where negativity has its uses?
And to show that I consider the
viewpoint.
Yeah, OK, I get that: when we apply the concept to ourselves, we are minimising our arrogance and maximising our doubt. And that’ll work. We’ll second-guess ourselves, we’ll edit our posts, and re-edit, and check our dictionaries and quote our sources, and these are all useful things. They keep us honest. But what about when we apply those concepts to others, as is our tendency due to the self-serving bias and the group-serving bias?
There are many fields in which it is better to not try than to get 5% wrong. Would you go bungee jumping if it had a 5% failure rate?
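To put a number on why a 5% failure rate is intolerable in such fields: the risk compounds across attempts. A minimal sketch (the function name is mine, purely illustrative):

```python
# Chance of at least one failure across n independent attempts,
# each with probability p_fail of failing.
def p_any_failure(n, p_fail=0.05):
    return 1 - (1 - p_fail) ** n

print(round(p_any_failure(1), 3))   # 0.05
print(round(p_any_failure(10), 3))  # 0.401
```

Ten jumps at a 5% per-jump failure rate already carry roughly a 40% chance of at least one failure.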
Mostly in discouraging behavior. As well, an important rationality skill is updating on valuable information from sources you dislike; dealing with negativity in safer circumstances may help people learn to better deal with negativity in less safe circumstances.
Thanks for the post on negativity Vaniver. I wouldn’t go bungee jumping if it had a 5% failure rate.
That viewpoint can be considered as based upon Skinner’s model of behaviourism; it’s been shown to be less effective for learning than being positive.
Makes sense: we tend to remember what we are emotionally engaged in and what is reinforced. When the negativity is associated with the 5%, what is reinforced is that a person is “wrong”; that’s associated with feelings of low self-efficacy and tends to discourage (most) people from the topic. When that happens they regress, not progress; they tend to get even more wrong next time, as they’ve not stayed engaged in the topic.
I agree that an important skill is to update one’s information; however, the discouragement that is provoked by negativity isn’t efficient at evoking updating. Confident people update their information; people who aren’t attacked have no need to defend, and so they remain open, and openness is the key attitude for updating information. Negativity destroys and/or minimises confidence, which contributes to closing a mind.
What negativity does, in the context of learning, is encourage secrecy, resentment, avoidance and closed-mindedness. Again, this is all known as a consequence of punishment, which is what negativity, as discouraging behaviour, is associated with.
Apparently a more effective way forward is to model the behaviour that one wants to encourage and ignore the behaviour one wants to discourage—extinction.
I agree that saying “Good job putting down that toy” to my 22-month-old is more effective at reducing throwing of his toys than saying “Don’t throw toys.” And extinction works great on tantrums.
But you seem to be overgeneralizing the point a bit. When dealing with competent adults, saying “X is wrong” is an effective way of improving the listener’s beliefs. Whether the speaker justifies the assertion will, and should, affect whether the listener changes beliefs.
Of course, this is probably bad management style. We might explain that fact about people-management by invoking psychological bias, power imbalance, or something else. But here, we’re just having a discussion. No one is asserting a right to authority over anyone else.
Without necessarily asserting its truth, this just-so story/parable might help:
For various social reasons, popular kids and nerds have developed very different politeness rules. Popular kids are used to respect, so they accept everything that they hear. As a consequence, they think relatively carefully before saying something, because their experience is that what is said will be taken seriously. By contrast, nerds seldom receive social respect from their peers. Therefore, they seldom take what is said to them to heart. As a consequence, nerds don’t tend to think before they speak, because their experience is that the listener will filter out a fair amount of what is said. In brief, the popular filter at the mouth, the nerds filter at the ear.
This all works fine (more or less) when communicating within type. But you can imagine the problems when a nerd says something mean to a popular, expecting that it will be filtered out. Or a popular says something only vaguely nice, but the nerd filters out negativity that isn’t there and hears sincere and deep interest.
TimS, I’m glad we agree on several points, extinction and positive reinforcement of children. I wonder why these methods are espoused for children, yet tend to be used less for “competent adults”. Thanks for planting the seed that I might be overgeneralizing the point a bit, I’ll keep an eye on that.
I am reminded that saying “X is wrong” to an adult with a belief is ineffective in many circumstances, most notably the circumstance where the belief is a preconception, based in emotion, or, more specifically, an irrational belief. Is this not one consequence of bias? That a person, in some cases or topics, won’t update their beliefs, and will indeed strengthen their existing belief against the counterargument for updating. Presumably you’ve read http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/ which alludes to how knowledge of bias can be used dismissively, i.e. an irrational use of a rationale.
“Why logical argument has never been successful at changing prejudices, beliefs, emotions or perceptions. Why these things can be changed only through perception.” De Bono, “I am right, you are wrong”. De Bono discusses this extensively.
If the belief is rational, and perhaps that’s one component of what you consider a “competent adult”, the adult could be more open to updating the fact or knowledge; yet even this situation has a wealth of counterexamples, such that there is a term for it: belief perseverance.
In my experience unsolicited advice is rarely accepted regardless of its utility and veracity. Perhaps I communicate with many closed minds, or perhaps I am merely experiencing the availability heuristic in context of our discussion.
Checking dictionaries doesn’t really help eliminate bias. Just saying.
I could not parse this paragraph. It might be just that it was written in the Australian idiom or something; maybe quotation marks would help.
When you (or anyone else) get a high grade on a paper or assignment or exam, is that a moment to think “Darn- I didn’t remember (single obscure thing you got wrong),” or is it a moment to leap for joy and say “I got a 95! Ahaha!”?
Thanks!
thomblake, consider a high distinction as an A+ grade. Perhaps as along the lines of Newtonian Mechanics. It’s mostly right.
Sure, if you’re running in debate mode and thinking in terms of ‘sides’ or ‘us versus them’ and trying to ‘win’, then that might be something to do. Solution: don’t do that in the first place.
Don’t worry, everything you believe is almost certainly wrong—don’t expect to find yourself in the 95% correct state any time soon. We’re running on corrupted hardware in the first place, and nowhere near the end of science. We can reduce hardly any of our high-level concepts to their physical working parts.
First, fix those too.
Indeed, a valuable point. So what’s up with the score-keeping system of LW, then? It encourages thinking in terms of sides and competition: −1, not my side; +1, my side. −1 lost, +1 won.
lol. Fair enough. I would place the 95% not on some unknown scale of what is absolutely true, which science doesn’t yet know, but instead on the relative scale of what science currently knows. Does that make a difference to your point?
Yep, tough to become selfless, yet still place enough value upon oneself to not be a doormat. Rudyard Kipling’s “If” shows a pathway.
Eastern philosophy also has approaches, ones that are a thousand years ahead of western science.
Karma allows users to easily aggregate the community opinion of their comments, and allows busy users to prioritize which comments to read. I try to make more posts like my highly upvoted posts, and fewer posts like my highly downvoted posts. It is common to see discussions where both users are upvoted, or discussions where both users are downvoted. When there’s a large karma split between users, that’s a message from the community that the users are using different modes of discussion, and one is strongly preferred to the other.
Both positive and negative options are necessary so that posts which are loved by half of the users and hated by the other half of the users have a neutral score, rather than a high score. Similarly, posts which are disliked by many users should be different from posts that everyone is indifferent to.
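The aggregation described above can be sketched in a few lines (a toy model; the function name and vote lists are mine, not the site’s actual implementation):

```python
# Toy model of karma aggregation: +1 per upvote, -1 per downvote.
def karma(votes):
    """Sum the votes on a comment: +1 for an upvote, -1 for a downvote."""
    return sum(votes)

# Loved by half the readers, hated by the other half: nets to neutral,
# rather than the high score an upvote-only system would give it.
polarizing = [+1] * 5 + [-1] * 5
# Widely disliked: clearly distinguishable from mere indifference.
disliked = [-1] * 5
# No votes at all.
ignored = []

print(karma(polarizing))  # 0
print(karma(disliked))    # -5
print(karma(ignored))     # 0
```

Without the negative option, the polarizing comment would score +5, indistinguishable from one that is simply well liked.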
What was the motivation behind this addition? Was it positive?
The motivation was to plant a seed… motivated by the +2 on my comment.
But why that seed in this conversation?
It is not uncommon to see scientists who have studied Eastern philosophy. Thus, how could Eastern philosophy be a thousand years ahead of science, when it is part of science?
To assist in debiasing the ageism that was being expressed in the conversation.
Yes. The difference in perspective probably explains why Eliezer thought Less Wrong was a good name, whereas you do not. Do not compare yourself to others; “The best physicist in ancient Greece could not calculate the path of a falling apple.”
It’s a hurdle to get past thinking of it in that way for some people, to be sure. It seems a worthwhile cost though, for an easy way to efficiently express approval/disapproval of a comment, combined with automatic hiding of really bad comments from casual readers.
While some people use them that way, voting should not generally be used to mean “I agree” or “I disagree”. The preferred interpretation is “I would like to see [more/fewer] comments like this one” (which may yet include agreement/disagreement, but they should be minor factors as compared to quality).
Parts of the parent comment that are particularly wrong:
paper-machine fairly well handled that one in terms of “Rule 1 of karma is you do not talk about karma”. Also, it was not a point that was abandoned, but a word. It is a common technique here to taboo a word whose definition is under dispute, since arguing about definitions is a waste of time.
Since you do not seem to understand, what happened there is that your ‘unpacking’ did not convey what Vaniver’s statement actually was intended to convey, so Vaniver replaced the word ‘impulsive’ with a more object-level description less amenable to misunderstanding.
This is not a good way to communicate. If you really don’t see what in this language would make someone take your arguments less seriously, someone could explain.
Perhaps you are not familiar with the study of risk, but the phrase “overestimate risk” means “estimate risk to be larger than it really is”, not “more important”. Either you are too ill-informed about risk analysis to be involved in this conversation, or you are trolling.
Also, appeals to the dictionary are just about the worst thing you can do in a substantive argument. If there is a misunderstanding, then definitions (whether from a dictionary or not) are useful for resolving the misunderstanding. They are really not useful to prove a point about what’s actually occurring.
This is a clear violation of the principle of charity.
And as a general rule, fixing your own bias is good, but accusing others of bias is bad. We must be particularly careful to remember that knowing about biases can hurt people. EDIT: Updating on this comment: it is useful to point out examples of bias in others, but do so in a way that does not score points in a debate, to be sure you’re not fooling yourself.
You should not. Votes are an indication of whether the readers of this site would like to see more comments like yours. If you’re getting feedback that you’re making comments we wouldn’t like on our site, and you consider that a ‘badge of honor’, then you’re a troll and should actually be banned entirely.