I have come to believe that people’s ability to arrive at correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain.
I think this can be read many ways. First, obviously if a person is subject to an incentive to hold true beliefs about X, they will start trying to learn about X and their beliefs will become more accurate. This part isn’t very interesting.
The more interesting parts of your idea, I think, are the notions that
(1) In the absence of incentives to have true beliefs about X, people don’t just have no beliefs about X, but in fact tend to have beliefs that are wrong.
(2) In the presence of incentives to have wrong beliefs about X, people tend to adopt those wrong beliefs.
I’m less convinced that these things are true generally. I do think they are true of many people if we define “belief” as “an opinion that a person expresses”. But whether that’s a reasonable definition of belief is unclear—I think that often the people for whom (1) and (2) are true are the same people who don’t care whether their expressed opinions are correct. In that case the observation reduces to “if people don’t care about saying true things, they will say things they are incentivized to say”, which isn’t surprising.
For the average LessWrong reader, I’m not convinced (1) and (2) are accurate. The observation that people tend to have beliefs that align with their incentives might instead be explained by a tendency for people with belief X to gravitate towards a position that rewards them for having it.
It seems to me that the way humans acquire language pretty strongly suggests that (2) is true. (1) seems probably false, depending on what you mean by incentives, though.
I do think people (including myself) tend towards adopting politically expedient beliefs when there is pressure to do so (esp. when their job, community, or narrative is on the line).
This is based in part on personal experience, and in part on developing the skill of noticing what motions my brain makes in what circumstances.