Robust collective epistemology need not look like “normal epistemology but really skeptical.” Treating knowledge as provisional and tentative doesn’t require a high level of skepticism. It may involve some revision to the default way humans think, but that ship had already sailed well before the Enlightenment.
It seems reasonable to believe X “merely” because it is falsifiable, no one credible objects, and you’ve personally seen no evidence to the contrary. That protocol probably won’t lead you astray, but for most interesting claims it is going to be easy for an adversary to DoS it (since someone can object without compromising their own credibility even when the claim is true), and so you are going to need more robust fallbacks.
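To make the DoS worry concrete, here is a minimal sketch of that acceptance rule; the function and argument names are illustrative assumptions, not anything from the exchange above. The point it shows is just that a single costless objection is enough to block acceptance of a true claim.

```python
# Hypothetical sketch of the quoted protocol: accept X iff it is falsifiable,
# no credible party objects, and you have personally seen no contrary evidence.
def accept(falsifiable, credible_objections, contrary_evidence_seen):
    return falsifiable and not credible_objections and not contrary_evidence_seen

# A true, falsifiable, unopposed claim gets accepted...
print(accept(falsifiable=True, credible_objections=[], contrary_evidence_seen=False))               # True
# ...but one objection, costless to the objector, blocks it indefinitely.
print(accept(falsifiable=True, credible_objections=["dissenter"], contrary_evidence_seen=False))    # False
```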
My point isn’t that you should doubt that sort of stuff strongly; it’s that it seems to me prohibitively expensive, computationally, to evaluate it at all rather than passively accept it as a background observation presumed true. How, in practice, does one treat that sort of knowledge as provisional and tentative?
My best guess is that someone with the right level of doubt in social reality ends up looking like they have a substantially higher-than-normal level of psychosis, and finds it difficult to track when they’re being weird.
How, in practice, does one treat that sort of knowledge as provisional and tentative?
A belief being tentative is a property of your algorithm-for-deciding-things, not of what a state of mind feels like from the inside. You can get a lot of mileage by, e.g., (a) independently revisiting tentative claims with small probability, and (b) responding appropriately when someone points out that a load-bearing tentative assumption might be wrong.
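A minimal sketch of what (a) and (b) might look like as a procedure, assuming nothing beyond the sentence above; the class, method, and parameter names (BeliefStore, revisit_prob, and so on) are hypothetical illustrations, not a proposed protocol.

```python
import random

class BeliefStore:
    """Tentativeness lives in the update procedure, not in a felt state of doubt."""

    def __init__(self, revisit_prob=0.01):
        self.beliefs = {}        # claim -> {"tentative": bool, "load_bearing": bool}
        self.review_queue = []   # claims pulled back up for active re-evaluation
        self.revisit_prob = revisit_prob

    def accept(self, claim, tentative=True, load_bearing=False):
        # Accept the claim and move on; no ongoing skeptical effort is spent on it.
        self.beliefs[claim] = {"tentative": tentative, "load_bearing": load_bearing}

    def periodic_pass(self):
        # (a) Independently revisit tentative claims with small probability.
        for claim, meta in self.beliefs.items():
            if meta["tentative"] and random.random() < self.revisit_prob:
                self.review_queue.append(claim)

    def on_challenge(self, claim):
        # (b) When someone flags a load-bearing tentative assumption as possibly
        # wrong, escalate it to active review rather than dismissing the challenge.
        meta = self.beliefs.get(claim)
        if meta and meta["tentative"]:
            self.review_queue.append(claim)
```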
I don’t think this question should be expected to have a really short answer, even if there are ironclad collective epistemology protocols. It’s like asking “how, in practice, do people securely communicate over untrusted internet infrastructure?” There is a great answer, but even once you have a hint that it’s possible, it will still take quite a lot of work to figure out exactly how the protocol works.
Do we actually have a disagreement here? I’m saying that actually-existing humans can’t actually do this. You seem to be saying that it’s conceivable that future humans might develop a protocol for doing this, and it’s worth exploring.
These can both be true! But in the meantime we’d need to explore this with our actually-existing minds, not the ones we might like to have, so it’s worth figuring out what the heck we’re actually doing.
I agree that it would take some work to figure out how to do this well.
I would say “figure out how to do this well” is at a similar level of complexity to “figure out what the heck we’re actually doing.” The “what should we do” question is more likely to have a clean and actionable answer. The “what do we do” question is more relevant to understanding the world now at the object level.