For contingent evolutionary-psychological reasons, humans are innately biased to prefer “their own” ideas, and in that context, a “principle of charity” can be a useful corrective heuristic.
I claim that the reasons for this bias are, in an important sense, not contingent: an alien race would almost certainly develop similar biases, and the forces favoring this bias won’t entirely disappear even in a world with magically different discourse norms (at least as long as speakers’ identities are attached to their statements).
As soon as I’ve said “P”, my epistemic reputation is bound up with the group’s belief in the truth of P. If people later come to believe P, then (a) whatever scoring rule we’re using to incentivize good predictions will reward me, and (b) people will update more on things I say in the future.
If you wanted to find convincing evidence for P, I’m now a much better candidate to find that evidence than someone who has instead said “eh; maybe P?” And someone who has said “~P” is similarly well-incentivized to find evidence for ~P.
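The scoring-rule asymmetry above can be made concrete. Here is a minimal sketch using the logarithmic scoring rule (one common proper scoring rule; the original doesn’t specify which rule is in play, and the speaker labels and credence values are illustrative assumptions):

```python
import math

def log_score(prob_assigned_to_truth: float) -> float:
    """Logarithmic scoring rule: the reward is the log of the probability
    the speaker assigned to the outcome that actually occurred.
    Higher (closer to 0) is better."""
    return math.log(prob_assigned_to_truth)

# Hypothetical speakers (labels and credences are assumptions for illustration):
#   asserter: said "P"           -- treat as ~0.9 credence in P
#   hedger:   said "eh; maybe P?" -- ~0.5 credence in P
#   denier:   said "~P"          -- ~0.1 credence in P
# Suppose the group later comes to believe P is true.
speakers = [("asserter", 0.9), ("hedger", 0.5), ("denier", 0.1)]
for label, credence in speakers:
    print(f"{label}: {log_score(credence):.3f}")
```

The asserter scores best and the denier worst once P is vindicated, which is exactly why the asserter has the strongest incentive to unearth evidence for P, while someone who said “~P” has the symmetric incentive to find evidence for ~P.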