It seems that in very adversarial settings there is limited incentive to share information. Also, resources that could go into generating object-level information might instead have to be deployed figuring out the goals of any adversarial behavior. And adversarial behavior directed at you might not even be against your goals: for example, an allied spy might lie to you to maintain their cover.
For these reasons I am skeptical of a productive group epistemology.
I meant to propose the goal: find a strategy such that the set H of people who use it do “well” (maybe: “have beliefs almost as accurate as if they had known each others’ identities, pooled their info honestly, and built a common narrative”) regardless of what others do.
There aren’t really any incentives in the setup. People in H won’t behave adversarially towards you, since they are using the proposed collective epistemology. You might as well assume that people outside of H will behave adversarially to you, since a system that is supposed to work under no assumptions about their behavior must work when they behave adversarially (and conversely, if it works when they behave adversarially then it works no matter what).
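A toy way to see why such a strategy can exist at all (this is my own sketch, not part of the original proposal, and the function name and numbers are made up): if members of H pool numeric probability estimates through a robust aggregator such as a median or trimmed mean, then as long as fewer than half the reports are adversarial, the pooled answer stays within the range spanned by the honest reports.

```python
import statistics

def pooled_estimate(reports, trim_fraction=0.2):
    """Aggregate probability reports robustly.

    `reports` is a list of probability estimates in [0, 1], some of which
    may come from adversaries we cannot identify.  Taking the median of a
    trimmed list bounds the influence of a dishonest minority: as long as
    strictly fewer than half the reports are adversarial, the result lies
    between two honest reports.
    """
    reports = sorted(reports)
    k = int(len(reports) * trim_fraction)
    trimmed = reports[k:len(reports) - k] if k > 0 else reports
    return statistics.median(trimmed)

# Honest members of H report around 0.7; two adversaries push the extremes.
honest = [0.68, 0.70, 0.72, 0.71, 0.69]
adversarial = [0.01, 0.99]
print(pooled_estimate(honest + adversarial))  # prints 0.7
```

This only covers the easy case of pooling point estimates; building a shared narrative robustly is much harder, but the same "bound the damage a minority can do" idea is what the goal above asks for.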
I think the group will struggle to form accurate models of things where data is scarce. This scarcity might be due to separation in time or space between the members and the phenomenon being discussed, or to the data being widely dispersed.
This fits with physics and chemistry being more productive than fields like economics or futurism. In those fields, narratives that serve certain group members can take hold and be very hard to dislodge.
Economics and futurism are hard domains for epistemology in general, but I’m not sure that they’d become disproportionately harder in the presence of disinformation.
I think the hard cases are when people have access to lots of local information that is hard for others to verify. In futurism and economics people are using logical facts and publicly verifiable observations to an unusual extent, so in that sense I’d expect trust to be unusually unimportant.
I was just thinking that we would be able to do better than being Keynesian, or believing in the singularity, if we could aggregate information from everyone reliably.
If we could form a shared narrative and get reliable updates from chip manufacturers about the future of semiconductors, we could make better predictions about the pace of computational improvement than if we assume they are saying things with half an eye on their share price.
There might be an asymptote you can reach on doing “well” under these mildly adversarial settings. I think knowing people’s incentives helps a lot, since you can then tell when they are incentivised to deceive you.
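As a toy illustration of that point (my own sketch; the weighting scheme, numbers, and scenario are made up, not anything proposed in this thread): you can discount each source’s report by how strongly they are incentivised to mislead you on that particular question.

```python
def incentive_weighted_estimate(reports):
    """Combine reports, discounting sources with a motive to deceive.

    `reports` is a list of (estimate, incentive_to_deceive) pairs, where
    incentive_to_deceive is in [0, 1].  A source with no motive gets full
    weight; a source with a strong motive is mostly ignored.  The linear
    weighting is an arbitrary illustrative choice.
    """
    weighted_sum = 0.0
    total_weight = 0.0
    for estimate, incentive in reports:
        weight = 1.0 - incentive
        weighted_sum += weight * estimate
        total_weight += weight
    return weighted_sum / total_weight if total_weight > 0 else None

# A chip maker talking up its own roadmap gets less weight than sources
# with no stake in the answer.
reports = [(0.9, 0.8),   # manufacturer, strong incentive to overstate
           (0.6, 0.1),   # independent analyst
           (0.55, 0.0)]  # academic with no stake
print(incentive_weighted_estimate(reports))  # roughly 0.60, not 0.9
```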
Are you assuming you can identify people in H reliably?
If you can identify people in H reliably then you would ignore everyone outside of H. The whole point of the game is that you can’t tell who is who.
So what you can do is:
Ignore all non-gears-level feedback: Getting feedback is important for epistemological correctness, but in an adversarial setting feedback may be designed to make you believe something that benefits whoever gave it. Ignore all karma scores, for example. If, however, someone can tell you how and why you are going wrong (or right), that can be useful, provided you agree with their reasoning.
Only update on facts that logically follow from things you already believe. If someone has followed an inference chain further than you, you can use their work safely.
If arguments rely on facts new to you, look at the world and see if those facts are consistent with what is around you.
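To make the “only update on facts that logically follow from things you already believe” rule concrete, here is a minimal sketch (my own illustration; it assumes you have already checked and agreed with each inference step the other person offers, and the fact and rule names are made up): treat their argument as a set of inference rules, forward-chain from your own beliefs, and accept only the conclusions you could have derived yourself.

```python
def accept_claims(beliefs, rules, claims):
    """Accept only claims that follow from current beliefs.

    `beliefs` is a set of facts, `rules` is a list of (premises, conclusion)
    pairs representing inference steps you have vetted, and `claims` is what
    the other person asserts.  We forward-chain from our own beliefs and
    accept a claim only if it appears in the resulting closure, i.e. if we
    could have derived it ourselves using their inference chain.
    """
    closure = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in closure and all(p in closure for p in premises):
                closure.add(conclusion)
                changed = True
    return [c for c in claims if c in closure]

beliefs = {"moore_law_slowing", "fab_costs_rising"}
rules = [
    ({"moore_law_slowing", "fab_costs_rising"}, "price_per_transistor_flat"),
    ({"price_per_transistor_flat"}, "compute_growth_below_trend"),
]
claims = ["compute_growth_below_trend", "agi_by_2030"]
print(accept_claims(beliefs, rules, claims))
# ['compute_growth_below_trend'] -- the second claim does not follow
```

The rule about checking new facts against the world is the part this sketch leaves out: it only handles claims derivable from what you already accept.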
That said, as I don’t believe in a sudden switch to utopia, I think it is important to strengthen the less-adversarial parts of society, so I will be seeking those out. “Start as you mean to go on” seems like decent wisdom in this day and age.