Of course, the real benefit of a “nice” atmosphere is that it attracts more people and grows the community. This could be worth sacrificing accuracy for: 100 people with 99% accurate beliefs is worth more (in reality) than 50 people with 100% accurate beliefs.
Why are you assuming that being nice must decrease accuracy? Several of the points that Alicorn mentioned increase accuracy.
Strongly agreed. It’s been observed elsewhere on LW that sarcasm can hide gaping logical flaws in an argument that a straightforward statement would immediately reveal. I have more than once found myself forced to be more intellectually rigorous when I’m unable to cover the gaps in my argument with dripping scorn.
Consider P Z Myers’ scorn for transhumanists—I’d love to hear his “being nice” argument against us, and I think he’d have a much harder time sounding convincing.
I think that either effortful niceness or effortful horribleness decreases accuracy.
It’s possible that this is a disagreement on the meaning of “effortful niceness”. As I’ve said, I find that it is often more effort to express a point straightforwardly than to express it meanly, because I’m forced to think through the argument more carefully. I’m guessing that doesn’t fall under the banner of “effortful niceness” for you, even though it involves effort and makes the comment nicer? Could you give an example of counterproductive effortful niceness?
Yeah, I guess you’re right actually. After all, it is more effort to express a point straightforwardly than to express it meanly.
No, but if it were the case that it decreased accuracy of beliefs, I think it would still be worth it in some cases.
Not for us ‘average accuratarians’.
If it’s literally 99%, then maybe maybe. If it’s actually more like 95%, then hell no.
I can’t extract any meaning from these percentages. Well over 99% of an ordinary person’s beliefs are true, because they are about prosaic, uncontroversial things like “I have fingernails”.
I can’t either, but my basic reaction is simply that in practice purity is critical here. If, in order to act correctly, a person needs to get more than 70 cognitive things right, their expected value falls by roughly half for every 1% chance of error on each of them.
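(Spelling out the arithmetic that claim seems to rest on, assuming the ~70 steps are independent: 0.99^70 ≈ 0.50, 0.98^70 ≈ 0.24, 0.97^70 ≈ 0.12, so each extra percentage point of per-step error roughly halves the chance of getting every step right.)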
Assuming any action anywhere short of optimal results in zero value, sure. In practice?
In practice, if you are only talking about the 70 most important steps that people are prone to messing up, that could easily be correct. Not to mention the probability of doing harm. Certainly there are well over 10 steps that people are prone to messing up, each of which reduces value by more than 80% in practice.
I suppose it depends what kinds of decisions you’re talking about making (e.g. keeping AIs from destroying humanity). I was thinking along the lines of day-to-day decision making, in which people generally manage to survive for decades in spite of ridiculously flawed beliefs, so there seem to be lots of situations where performance doesn’t degrade nearly so sharply.
At any rate, I guess I’m with ciphergoth: the more interesting question is why 99% accurate is “maybe maybe” okay, but 95% is “hell no”. Where do those numbers come from?
Someone who gets it 99% right is useful to me, someone who gets it 95% right is so much work to deal with that I usually don’t bother.
No one gets it 99% right. (Modulo my expectation that we are speaking only of questions of a minimal difficulty; say, at least as difficult as the simplest questions that the person has never considered before.)
When I was a cryptographer, an information source with a .000001% bulge (information content above randomness) would break a code wide open for me. Lack of bias was much more important than % right.
From a curious non-cryptographer: what size of corpus are you talking about here?
You’re onto me. Yes, that’s with a large corpus. The kind you get when people encrypt non-textual information. So, I lied a little. You need a bigger bulge with shorter messages.
I didn’t mean to call you out—I was just curious. A curve of data set size versus required bulge would be interesting.
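A rough sketch of that curve, under a toy model I’m supplying here rather than anything from the parent comment (treating the “bulge” as a bias ε above randomness on an otherwise-fair bit): detecting a bias of ε with reasonable confidence takes on the order of 1/ε² samples, so the required corpus grows quadratically as the bulge shrinks.

```python
# Toy estimate (my assumption, not the cryptographer's actual method):
# a bulge of epsilon above randomness on a fair bit needs on the order of
# 1/epsilon^2 samples before it can be distinguished from pure noise.
for bulge_percent in (1.0, 0.01, 0.000001):
    eps = bulge_percent / 100.0           # bias expressed as a probability
    samples_needed = 1.0 / eps ** 2       # order-of-magnitude sample count
    print(f"bulge {bulge_percent}% -> ~{samples_needed:.0e} samples")
```

On that crude model a .000001% bulge only breaks things open with an astronomically large corpus, which fits the caveat above that shorter messages need a bigger bulge.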
In that case, a second information source of that quality wouldn’t have been that much use to you.
The first person who gets it 95% right would be very valuable. But there are diminishing returns.
I was thinking exactly the same thing. I have literally no idea what ‘percentage’ of the things I believe are true, and certainly wouldn’t be willing to put a figure on what percentage is acceptable.
Repugnant conclusion: 3^^^3 people with 0.00001% accurate beliefs is worth more than 100000 people with 100% accurate beliefs?
Doesn’t apply: there’s an optimal tradeoff implied by the goals of LW.
Sacrificing accuracy can be (and usually is) a slippery slope.