Economics and futurism are hard domains for epistemology in general, but I’m not sure that they’d become disproportionately harder in the presence of disinformation.
I think the hard cases are when people have access to lots of local information that is hard for others to verify. In futurism and economics people are using logical facts and publicly verifiable observations to an unusual extent, so in that sense I’d expect trust to be unusually unimportant.
I was just thinking that we would be able to do better than being Keynesian or believing in the singularity if we could aggregate information from everyone reliably.
If we could form a shared narrative and get reliable updates from chip manufacturers about the future of semiconductors, we could make better predictions about the pace of computational improvement than if we have to assume they are speaking with half an eye on their share price.
There might be an asymptote on how “well” you can do in these mildly adversarial settings. I think knowing people’s incentives helps a lot, so you can tell when they are incentivised to deceive you.
Are you assuming you can identify people in H reliably?
If you can identify people in H reliably then you would ignore everyone outside of H. The whole point of the game is that you can’t tell who is who.
So what you can do is:
Ignore all non-gears-level feedback: getting feedback is important for epistemological correctness, but in an adversarial setting feedback may be crafted to make you believe something to the giver’s benefit. Ignore all karma scores, for example. If, however, someone can tell you how and why you are going wrong (or right), that can be useful, provided you agree with their reasoning.
Only update on facts that logically follow from things you already believe. If someone has followed an inference chain further than you, you can use their work safely.
If arguments rely on facts new to you, look at the world and see if those facts are consistent with what is around you.
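The second rule above can be sketched as a toy procedure. This is only an illustration under simplifying assumptions: propositions are opaque strings, and “agreeing with their reasoning” is stood in for by checking that every premise of each step is already in our belief set (or was derived earlier in the same chain). The names `check_chain` and the example propositions are hypothetical, not from the original discussion.

```python
# Toy sketch of "only update on facts that logically follow from things
# you already believe". A chain is a list of (premises, conclusion) steps
# claimed by someone who has followed the inference further than we have.

def check_chain(beliefs, chain):
    """Return the updated belief set if the whole chain verifies, else None."""
    working = set(beliefs)
    for premises, conclusion in chain:
        if not all(p in working for p in premises):
            # A step relies on something we don't believe: ignore the chain.
            return None
        working.add(conclusion)
    return working

# Example: accepting someone else's longer inference chain.
ours = {"demand is falling", "rates are near zero"}
their_chain = [
    (["demand is falling", "rates are near zero"], "monetary policy is weak"),
    (["monetary policy is weak"], "fiscal stimulus matters more"),
]
updated = check_chain(ours, their_chain)
```

The point of the sketch is that an adversary's chain is only dangerous if it smuggles in a premise you don't hold; verifying each step against your own beliefs makes their extra inference work safe to reuse.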
That said, as I don’t believe in a sudden switch to utopia, I think it important to strengthen the less-adversarial parts of society, so I will be seeking those out. “Start as you mean to go on,” seems like decent wisdom, in this day and age.