This post relies on several assumptions that I believe are false:
1. The rationalist community has managed to avoid bringing in any outside cultural baggage, so that when someone admits they were wrong about something important (and isn’t making a strategic disclosure), people will only raise their estimate of incompetence by a Bayesian 0.42%.
2. The base rate of being “stupid and bad” by rationalist standards is 5% or lower. (The sample has been selected for being better than average, but the implicit standards are much higher.)
3. When people say they are worried about being “wrong” and therefore “stupid” and “bad”, they are referring to things with standard definitions that are precise enough to do math with.
4. The individuals you’re attempting to reassure with this post get enough of a spotlight that their 1 instance of publicly being wrong is balanced by a *salient* memory of the 9 other times they were right.
5. Not being seen as “stupid and bad” in this community is sufficient for someone to get the things they want and avoid the things they don’t want.
6. In situations where judgements must be made with limited information (e.g. job interviews), using a small sample of data is worse than defaulting to base rates. (Thought experiment: you’re at a tech conference looking for interesting people to talk to; do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?)
> The rationalist community [...] rationalist standards [...] in this community
Uh, remind me why I’m supposed to care what some Bay Area robot cult thinks? (Although I heard there was an offshoot in Manchester that might be performing better!) The scare quotes around “rationalist” “community” in the second paragraph are there for a reason.
The OP is a very narrowly focused post, trying to establish a single point (Being Wrong Doesn’t Mean You’re Stupid and Bad, Probably) by appealing to probability theory as normative reasoning (and some plausible assumptions). If you’re worried about someone thinking you’re stupid and bad because you were wrong, you should just show them this post, and if they care about probability theory as normative reasoning, then they’ll realize that they were wrong and stop mistakenly thinking that you’re stupid and bad. On the other hand, if the person you’re trying to impress doesn’t care about probability theory as normative reasoning, then they’re stupid and bad, and you shouldn’t care about impressing them.
> outside cultural baggage
Was there ever an “inside”, really? I thought there was. I think I was wrong.
> people will only raise their estimate of incompetence by a Bayesian 0.42%.
But that’s the correct update! People who update more or less than the Bayesian 0.42% are wrong! (Although that doesn’t mean they’re stupid or bad, obviously.)
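(For concreteness, here’s a minimal sketch of that arithmetic in Python. The prior and likelihoods are purely illustrative assumptions on my part, reverse-engineered to land near the 0.42% figure; they’re not taken from the OP.)

```python
# Bayes' theorem for the update on "this person is stupid and bad"
# after observing one public instance of being wrong.
# All numbers here are illustrative assumptions, not the OP's.

prior = 0.05             # assumed base rate of being "stupid and bad"
p_wrong_if_bad = 0.5444  # hypothetical P(wrong | stupid and bad)
p_wrong_if_ok = 0.5      # hypothetical P(wrong | not stupid and bad)

evidence = prior * p_wrong_if_bad + (1 - prior) * p_wrong_if_ok
posterior = prior * p_wrong_if_bad / evidence

print(f"posterior: {posterior:.4f}")  # ~0.0542
print(f"update: {(posterior - prior) * 100:+.2f} percentage points")  # ~+0.42
```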
> they are referring to things with standard definitions that are precise enough to do math with.
This is an isolated demand for rigor and I’m not going to fall for it. I shouldn’t need to have a reduction of what brain computations correspond to people’s concept of “stupid and bad” in order to write a post like this.
> using a small sample of data is worse than defaulting to base rates
What does this mean? If you have a small sample of data and you update on it the correct amount, you don’t do worse in expectation than you would have without the data.
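(Here’s a minimal Monte Carlo sketch of that claim in Python, using the job-interview example. All the parameters (base rate, pass probabilities, number of questions, payoffs) are made up for illustration.)

```python
# Monte Carlo check (under made-up parameters) that updating on a small
# sample does at least as well in expectation as acting on the base rate.
import random

random.seed(0)

BASE_RATE = 0.3      # assumed P(candidate is strong)
P_PASS_STRONG = 0.8  # assumed P(passes one interview question | strong)
P_PASS_WEAK = 0.4    # assumed P(passes one interview question | weak)
N_QUESTIONS = 3      # the "small sample"
TRIALS = 200_000

def posterior(passes: int) -> float:
    """P(strong | k passes out of N_QUESTIONS), by Bayes' rule."""
    def binom(p):  # unnormalized binomial likelihood (coefficient cancels)
        return p ** passes * (1 - p) ** (N_QUESTIONS - passes)
    num = BASE_RATE * binom(P_PASS_STRONG)
    return num / (num + (1 - BASE_RATE) * binom(P_PASS_WEAK))

def payoff(hire: bool, strong: bool) -> int:
    # +1 for hiring a strong candidate, -1 for hiring a weak one, 0 otherwise
    return (1 if strong else -1) if hire else 0

base_total = update_total = 0
for _ in range(TRIALS):
    strong = random.random() < BASE_RATE
    p_pass = P_PASS_STRONG if strong else P_PASS_WEAK
    passes = sum(random.random() < p_pass for _ in range(N_QUESTIONS))
    base_total += payoff(BASE_RATE > 0.5, strong)             # ignore the sample
    update_total += payoff(posterior(passes) > 0.5, strong)   # use the sample

print(f"base-rate policy: {base_total / TRIALS:+.3f} per candidate")
print(f"posterior policy: {update_total / TRIALS:+.3f} per candidate")
```

Under these assumptions the base-rate policy never hires (the prior sits below the decision threshold), while the posterior policy hires only on a perfect 3/3 and comes out ahead in expectation, roughly +0.11 per candidate.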
> you’re at a tech conference and looking for interesting people to talk to, do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?
Analyzing the signaling game governing how people choose to dress at tech conferences does look like a fun game-theory exercise; thanks for the suggestion! I don’t have time for that now, though.
> 3. When people say they are worried about being “wrong” and therefore “stupid” and “bad”, they are referring to things with standard definitions that are precise enough to do math with.
I’d highlight the likelihood of conflicting definitions, precision or no precision.