I think it’s worth asking: can we find a collective epistemology which would work well for the people who use it, even if everyone else behaved adversarially? (More generally: can we find a policy that works well for the people who follow it, even if others behave adversarially?)
The general problem is clearly impossible in some settings. For example, in the physical world, without secure property rights, an adversary can just shoot you and there’s not much you can do. No policy is going to work well for the people who follow it in that setting, unless the other people play nice or the group of people following it is large enough to overpower those who don’t.
In other settings it seems quite easy, e.g. if you want to make a series of binary decisions about who to trust, and you immediately get a statistical clue when someone defects, then even a small group can make good decisions regardless of what everyone else does.
For collective epistemology I think you can probably do well even if 99% of people behave adversarially (including impersonating good-faith actors until it’s convenient to defect; in this setting, “figure out who is rational, and then trust them” is a non-starter). And more generally, given secure property rights I think you could probably organize an entire rational economy in a way that is robust to large groups defecting strategically, yet still pretty efficient.
The adversarial setting is of course too pessimistic. But I think it’s probably a good thing to work on anyway in cases where it looks possible, since (a) it’s an easy setting to think about, (b) in some respects the real world is surprisingly adversarial, (c) robustness lets you relax about lots of things and is often pretty cheap.
I’m way more skeptical than you about maintaining a canonical perspective; I almost can’t imagine how that would work well given real humans, and its advantages just don’t seem that big compared to the basic unworkability.
It seems to me that part of the subtext here is that humans for the most part track a shared perspective, and can’t help but default to it quite often, because (a) we want to communicate with other humans, and (b) it’s expensive to track the map-territory distinction.
For instance, let’s take the Syria example. Here are some facts that I think are tacitly assumed by just about everyone talking about the Syria question, without evaluating whether there is sufficient evidence to believe them, simply because they are in the canonical perspective:
Syria is a place.
People live there.
There is or was recently some sort of armed conflict going on there.
Syria is adjacent to other places, in roughly the spatial arrangement a map would tell you.
Syria contains cities in which people live, in roughly the places a map would tell you. The people in those cities for the most part refer to them by the names on the map, or some reasonably analogous name in their native language.
One of the belligerents formerly had almost exclusive force-projection capacity over the whole of Syria. The nominal leader of this faction is Bashar al-Assad.
ISIL/ISIS was a real organization, that held real territory.
The level of skepticism that would not default to the canonical perspective on facts like that seems—well, I don’t know of anyone who seems to have actually internalized that level of skepticism of canon, aside from the President of the United States. He seems to have done pretty well for himself, if he in fact exists.
Robust collective epistemology need not look like “normal epistemology but really skeptical.” Treating knowledge as provisional and tentative doesn’t require a high level of skepticism. It may involve some revision to the default way humans think, but that ship had already sailed well before the Enlightenment.
It seems reasonable to believe X “merely” because it is falsifiable, no one credible objects, and you’ve personally seen no evidence to the contrary. That protocol probably won’t lead you astray, but for most interesting claims it is going to be easy for an adversary to DoS it (since even if the claim is true someone could object without compromising their own credibility) and so you are going to need to rely on more robust fallbacks.
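To make the DoS worry concrete, here is a toy rendering of that acceptance rule (a hypothetical sketch; the names and the example claim are mine, not anything specified above). A single costless objection blocks any claim, true or not, so an adversary who objects to everything shuts the protocol down:

```python
from dataclasses import dataclass

# Toy model of "believe X if it is falsifiable, nobody credible objects,
# and you've personally seen no contrary evidence." Names are illustrative.

@dataclass
class Claim:
    statement: str
    falsifiable: bool

def naive_accept(claim, objections, contrary_evidence):
    """Accept iff falsifiable, unobjected-to, and unopposed by your own evidence."""
    return claim.falsifiable and not objections and not contrary_evidence

claim = Claim("The bridge on Route 9 is closed", falsifiable=True)
print(naive_accept(claim, objections=[], contrary_evidence=[]))  # True
# One cost-free objection from an adversary blocks the same true claim:
print(naive_accept(claim, objections=["anonymous 'not true'"], contrary_evidence=[]))  # False
```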
My point isn’t that you should doubt that sort of stuff strongly, it’s that it seems to me to be prohibitively computationally expensive to evaluate it at all rather than passively accepting it as background observations presumed true. How, in practice, does one treat that sort of knowledge as provisional and tentative?
My best guess is that someone with the right level of doubt in social reality ends up looking like they have a substantially higher than normal level of psychosis, and ends up finding it difficult to track when they’re being weird.
How, in practice, does one treat that sort of knowledge as provisional and tentative?
A belief being tentative is a property of your algorithm-for-deciding-things, not what a state of mind feels like from the inside. You can get a lot of mileage by e.g. (a) independently revisiting tentative claims with small probability, (b) responding appropriately when someone points out to you that a load-bearing tentative assumption might be wrong.
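One way to picture (a) and (b) is a sketch like the following (a minimal illustration; the class, the 1% revisit probability, and the review queue are assumptions of mine rather than part of the proposal):

```python
import random

REVISIT_PROBABILITY = 0.01  # stand-in for "small probability"; assumed value

class TentativeClaim:
    """A claim you act on normally, while the algorithm keeps it revisable."""

    def __init__(self, statement, load_bearing=False):
        self.statement = statement
        self.load_bearing = load_bearing
        self.review_queue = []

    def use(self):
        # (a) Independently revisit the claim with small probability.
        if random.random() < REVISIT_PROBABILITY:
            self.review_queue.append("routine re-examination")
        return self.statement

    def challenged(self, reason):
        # (b) Respond when someone points out a load-bearing assumption may be wrong.
        if self.load_bearing:
            self.review_queue.append(reason)
```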
I don’t think this question should be expected to have a really short answer, even if there are ironclad collective epistemology protocols. It’s like saying “how, in practice, do people securely communicate over untrusted internet infrastructure?” There is a great answer, but even once you have a hint that it’s possible it will still take quite a lot of work to figure out exactly how the protocol works.
Do we actually have a disagreement here? I’m saying that actually-existing humans can’t actually do this. You seem to be saying that it’s conceivable that future humans might develop a protocol for doing this, and it’s worth exploring.
These can both be true! But in the meantime we’d need to explore this with our actually-existing minds, not the ones we might like to have, so it’s worth figuring out what the heck we’re actually doing.
I agree that it would take some work to figure out how to do this well.
I would say “figure out how to do this well” is at a similar level of complexity to “figure out what the heck we’re actually doing.” The “what should we do” question is more likely to have a clean and actionable answer. The “what do we do” question is more relevant to understanding the world now at the object level.
I’m way more skeptical than you about maintaining a canonical perspective
Which part of the post are you referring to?
I read Jessica as being pretty down on the canonical perspective in this post. As in this part:
Since then, aesthetic intuitions have led me to instead think of the problem of collective epistemology as one of decentralized coordination: how can good-faith actors reason and act well as a collective superintelligence in conditions of fog of war, where deception is prevalent and creation of common knowledge is difficult? I find this framing of collective epistemology more beautiful than the idea of immediately deferring to a canonical perspective, and it is a better fit for the real world.
I’m on board with the paragraph you quoted.
I’m objecting to:
Try to make information and models common knowledge among a group when possible, so they can be integrated into a canonical perspective. This allows the group to build on this, rather than having to re-derive or re-state it repeatedly.
and:
This can result in a question being definitively settled, which is great for the group’s ability to reliably get the right answer to the question, rather than having a range of “acceptable” answers that will be chosen from based on factors other than accuracy.
For example, in the physical world, without secure property rights, an adversary can just shoot you and there’s not much you can do. No policy is going to work well for the people who follow it in that setting, unless the other people play nice or the group of people following it is large enough to overpower those who don’t.
There are actual processes by which people have managed to transition from a state in which they were not protected against violence, to a state in which they were protected against some violence, or were able to coordinate via high-trust networks within a broader low-trust context. The formation of states and other polities is a real thing that happens, and to write off the relevant coordination problem as impossibly hard seems bizarre, so perhaps I’m misunderstanding you. It has historically been fairly popular to attribute this sort of transition to gods, but that seems more like reifying our confusion than actually alleviating it.
The “everyone else behaves adversarially” assumption is very pessimistic. Lots of problems are solvable in practice but insoluble if there is an adversarial majority.
Collective epistemology may be such a case. But I suspect that it’s actually much easier to build robust collective epistemology than to build a robust society, in the sense that it can be done under much weaker assumptions.
Hmm. Are you assuming e.g. that we don’t know who’s part of a kinship group etc., so there’s no a priori way of inferring who’s likely to have aligned interests? That seems like an interesting case to model, but it’s worth noting that the modern era is historically unusual in resembling it.
I’m just asking: can you design a system that works well, for the people who adopt it, without assuming anything about everyone else? So in particular: without making assumptions about aligned interests, about rationality, about kindness, etc.
When this harder problem is solvable, I think it buys you a lot. (For example, it may spare you from having to go literally form a new society.)
It seems that in very adversarial settings there is limited incentive to share information. Also, resources that could go toward generating object-level information might have to be deployed trying to figure out the goals of any adversarial behavior. Adversarial behavior directed at you might not be against your goals. For example, an allied spy might lie to you to maintain their cover.
For these reasons I am skeptical about the prospects for productive group epistemology.
I meant to propose the goal: find a strategy such that the set H of people who use it do “well” (maybe: “have beliefs almost as accurate as if they had known each others’ identities, pooled their info honestly, and built a common narrative”) regardless of what others do.
There aren’t really any incentives in the setup. People in H won’t behave adversarially towards you, since they are using the proposed collective epistemology. You might as well assume that people outside of H will behave adversarially to you, since a system that is supposed to work under no assumptions about their behavior must work when they behave adversarially (and conversely, if it works when they behave adversarially then it works no matter what).
I think the group will struggle to build accurate models of things where data is scarce. This scarcity might be due to separation in time or space between the members and the phenomenon being discussed, or it could be due to the data being dispersed.
This fits with physics and chemistry being more productive than things like economics or futurism. In the latter kinds of fields, narratives that serve certain group members can take hold and be very hard to dislodge.
Economics and futurism are hard domains for epistemology in general, but I’m not sure that they’d become disproportionately harder in the presence of disinformation.
I think the hard cases are when people have access to lots of local information that is hard for others to verify. In futurism and economics people are using logical facts and publicly verifiable observations to an unusual extent, so in that sense I’d expect trust to be unusually unimportant.
I was just thinking that we would be able to do better than being Keynesian or believing in the singularity if we could aggregate information from everyone reliably.
If we could form a shared narrative and get reliable updates from chip manufacturers about the future of semiconductors, we could make better predictions about the pace of computational improvement than if we assume they will be saying things with half an eye on their share price.
There might be an asymptote you can reach on doing “well” under these mildly adversarial settings. I think knowing people’s incentives helps a lot, so you can tell when they are incentivised to deceive you.
Are you assuming you can identify people in H reliably?
If you can identify people in H reliably then you would ignore everyone outside of H. The whole point of the game is that you can’t tell who is who.
So what you can do is (a rough sketch in code follows the list):
Ignore all non-gears-level feedback: Getting feedback is important for epistemological correctness, but in an adversarial setting feedback may be aimed at making you believe something that benefits the giver. Ignore all karma scores, for example. If, however, someone can tell you how and why you are going wrong (or right), that can be useful, if you agree with their reasoning.
Only update on facts that logically follow from things you already believe. If someone has followed an inference chain further than you, you can use their work safely.
If arguments rely on facts new to you, look at the world and see if those facts are consistent with what is around you.
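Here is a rough sketch of those three rules in code (claims and premises reduced to strings, and every name illustrative rather than a real protocol):

```python
from dataclasses import dataclass

@dataclass
class Argument:
    premises: list   # facts the argument relies on
    conclusion: str  # what the argument claims to establish

def accept_claim(claim, argument, my_beliefs, my_observations):
    """Decide whether to update on a claim offered by an untrusted source."""
    if argument is None:
        # Non-gears-level feedback (e.g. a bare karma score): ignore it.
        return False
    for premise in argument.premises:
        if premise in my_beliefs:
            continue
        # A fact new to you: only accept it if your own observations back it.
        if premise not in my_observations:
            return False
    # Only update when the conclusion is what the reasoning you checked establishes.
    return argument.conclusion == claim

beliefs = {"the power is out on my street"}
observations = {"my neighbours' lights are also off"}
arg = Argument(premises=["the power is out on my street",
                         "my neighbours' lights are also off"],
               conclusion="the outage is not just my house")
print(accept_claim("the outage is not just my house", arg, beliefs, observations))  # True
```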
That said, as I don’t believe in a sudden switch to utopia, I think it important to strengthen the less-adversarial parts of society, so I will be seeking those out. “Start as you mean to go on,” seems like decent wisdom, in this day and age.