It seems to me that part of the subtext here is that humans for the most part track a shared perspective, and can’t help but default to it quite often, because (a) we want to communicate with other humans, and (b) it’s expensive to track the map-territory distinction.
For instance, let’s take the Syria example. Here are some facts that I think are tacitly assumed by just about everyone talking about the Syria question, without evaluating whether there is sufficient evidence to believe them, simply because they are in the canonical perspective:
Syria is a place.
People live there.
There is or was recently some sort of armed conflict going on there.
Syria is adjacent to other places, in roughly the spatial arrangement a map would tell you.
Syria contains cities in which people live, in roughly the places a map would tell you. The people in those cities for the most part refer to them by the names on the map, or some reasonably analogous name in their native language.
One of the belligerents formerly had almost exclusive force-projection capacity over the whole of Syria. The nominal leader of this faction is Bashar al-Assad.
ISIL/ISIS was a real organization that held real territory.
The level of skepticism that would not default to the canonical perspective on facts like that seems—well, I don’t know of anyone who seems to have actually internalized that level of skepticism of canon, aside from the President of the United States. He seems to have done pretty well for himself, if he in fact exists.
Robust collective epistemology need not look like “normal epistemology but really skeptical.” Treating knowledge as provisional and tentative doesn’t require a high level of skepticism. It may involve some revision to the default way humans think, but that ship had already sailed well before the Enlightenment.
It seems reasonable to believe X “merely” because it is falsifiable, no one credible objects, and you’ve personally seen no evidence to the contrary. That protocol probably won’t lead you astray, but for most interesting claims it is going to be easy for an adversary to DoS it (since even if the claim is true someone could object without compromising their own credibility) and so you are going to need to rely on more robust fallbacks.
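A minimal sketch of that acceptance protocol and its denial-of-service weakness (all names here are illustrative, not drawn from the discussion):

```python
def accept(falsifiable, credible_objections, contrary_evidence):
    """Naive protocol: believe a claim iff it is falsifiable, no credible
    party objects, and we have personally seen no contrary evidence."""
    return falsifiable and not credible_objections and not contrary_evidence

# A true, falsifiable claim with no objections is accepted...
print(accept(True, [], []))  # True
# ...but a single strategic objection blocks it, even when the claim is
# true and objecting costs the objector no credibility of their own.
print(accept(True, ["adversarial objection"], []))  # False
```

The point of the sketch is that the protocol's failure mode is cheap for an adversary to trigger, which is why a more robust fallback is needed behind it.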
My point isn’t that you should doubt that sort of stuff strongly, it’s that it seems prohibitively computationally expensive to evaluate it at all, rather than passively accepting it as background knowledge presumed true. How, in practice, does one treat that sort of knowledge as provisional and tentative?
My best guess is that someone with the right level of doubt in social reality ends up looking like they have a substantially higher-than-normal level of psychosis, and ends up finding it difficult to track when they’re being weird.
“How, in practice, does one treat that sort of knowledge as provisional and tentative?”
A belief being tentative is a property of your algorithm-for-deciding-things, not what a state of mind feels like from the inside. You can get a lot of mileage by e.g. (a) independently revisiting tentative claims with small probability, (b) responding appropriately when someone points out to you that a load-bearing tentative assumption might be wrong.
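As an illustration of (a) and (b), here is a toy sketch of such an algorithm-for-deciding-things; the class and method names are hypothetical, and the revisit probability is an assumed free parameter:

```python
import random

class BeliefStore:
    """Hold beliefs tentatively by default: (a) independently revisit each
    one with small probability per decision cycle, and (b) flag a belief
    for review whenever someone challenges it."""

    def __init__(self, revisit_prob=0.01, rng=None):
        self.revisit_prob = revisit_prob
        self.rng = rng or random.Random()
        self.beliefs = set()       # claims currently accepted
        self.needs_review = set()  # claims queued for re-evaluation

    def adopt(self, claim):
        # Accept the canonical view cheaply, without full evaluation.
        self.beliefs.add(claim)

    def challenge(self, claim):
        # (b) A pointed-out problem forces re-evaluation, regardless of odds.
        if claim in self.beliefs:
            self.needs_review.add(claim)

    def tick(self):
        # (a) Each cycle, independently pick a few claims to re-examine.
        for claim in self.beliefs:
            if self.rng.random() < self.revisit_prob:
                self.needs_review.add(claim)
        due = self.needs_review
        self.needs_review = set()
        return due
```

Note that nothing in this loop feels like doubt from the inside: the store behaves exactly like confident belief at every step except the occasional revisit, which is the sense in which tentativeness is a property of the algorithm rather than of the felt state of mind.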
I don’t think this question should be expected to have a really short answer, even if there are ironclad collective epistemology protocols. It’s like saying “how, in practice, do people securely communicate over untrusted internet infrastructure?” There is a great answer, but even once you have a hint that it’s possible it will still take quite a lot of work to figure out exactly how the protocol works.
Do we actually have a disagreement here? I’m saying that actually-existing humans can’t actually do this. You seem to be saying that it’s conceivable that future humans might develop a protocol for doing this, and it’s worth exploring.
These can both be true! But in the meantime we’d need to explore this with our actually-existing minds, not the ones we might like to have, so it’s worth figuring out what the heck we’re actually doing.
I agree that it would take some work to figure out how to do this well.
I would say “figure out how to do this well” is at a similar level of complexity to “figure out what the heck we’re actually doing.” The “what should we do” question is more likely to have a clean and actionable answer. The “what do we do” question is more relevant to understanding the world now at the object level.