And ideally, you’d take that fact into account in forming your actual beliefs. I think it’s pretty well-established here that having accurate beliefs shouldn’t actually hurt you. It’s not a good strategy to change your actual beliefs so that you can signal more effectively—and it probably wouldn’t work, anyway.
Hmm: Information Hazards: A Typology of Potential Harms from Knowledge …?
I haven’t read that paper (thanks for the link, I’ll definitely do so), but it seems that it’s a separate issue from choosing which beliefs to hold based on what they will do for your social status. Still, I would argue that limiting knowledge is preferable only in select cases, not a good general rule to abide by, partial knowledge of biases and such notwithstanding.
I think it’s pretty well-established here that having accurate beliefs shouldn’t actually hurt you.
Not at all. It is well established that having accurate beliefs should not hurt a perfect Bayesian intelligence. Believing that this applies to mere humans would be naive in the extreme.
It’s not a good strategy to change your actual beliefs so that you can signal more effectively—and it probably wouldn’t work, anyway.
The fact that we are so damn good at it is evidence to the contrary!
I’m not understanding the disagreement here. I’ll grant that imperfect knowledge can be harmful, but is anybody really going to argue that it isn’t useful to try to have the most accurate map of the territory?
We are talking about signalling. So for most people, yes.