This seems trivially true to me. If you require [common knowledge], you already have to acknowledge that it doesn't exist in the world. You can only get to 1-epsilon. And that approximation works for only needing to believe as many levels of commonality as is required for the action calculation.
Note that the standard theory of common knowledge contains explicit claims against this part of your statement. The importance of common knowledge is supposed to be that it has a different quality from finite levels, which is uniquely helpful for coordination; eg, that’s the SEP’s point with the electronic messaging example. Or in the classic analysis of two-generals, where (supposedly) no finite number of messages back and forth is sufficient to coordinate, (supposedly) because this only establishes finite levels of social knowledge.
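(To make the standard claim concrete, in the usual notation, reading E as "everyone in the group knows" and using the iterated rather than the fixed-point definition of C; a rough sketch, not a quote of the SEP or any particular textbook:)
\[
C(p) \;=\; \bigwedge_{n \ge 1} E^n(p) \;=\; E(p) \,\wedge\, E(E(p)) \,\wedge\, E(E(E(p))) \,\wedge\, \cdots
\]
On this picture, each successfully delivered message in two-generals adds at most one further level E^n(p), so any finite exchange yields only finitely many of the conjuncts and never C(p) itself.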
So the seemingly natural update for someone who learns standard game theory, but then realizes that common knowledge is literally impossible in the real world as we understand it, is that rational coordination is actually impossible, and that empirical signs pointing in the other direction are actually about irrationality.
I’ve realized that I’m not the target of the post, and am bowing out. I think we’re in agreement as to the way forward (no knowledge is 1 or 0, and practically you can get “certain enough” with a finite number of iterations). We may or may not disagree on what other people think about this topic.
Fair enough. I note for the public record that I’m not agreeing (nor 100% disagreeing) with
practically you can get “certain enough” with a finite number of iterations
as an accurate characterization of something I think. For example, it currently seems to me like a finite number of iterations doesn't solve two-generals, while p-common knowledge does.
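(For concreteness, the notion of p-common knowledge I have in mind is along the lines of Monderer and Samet's common p-belief; my notation, which may differ from theirs: B_i^p(φ) means "agent i assigns probability at least p to φ".)
\[
E^p(\varphi) \;=\; \bigwedge_i B_i^p(\varphi),
\qquad
C^p(\varphi) \;=\; \bigwedge_{n \ge 1} \big(E^p\big)^n(\varphi)
\]
That's the iterated form; there is also a fixed-point characterization in terms of p-evident events, if I'm remembering Monderer and Samet correctly.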
However, the main thrust of the post is more to question the standard picture than to say exactly what the real picture is (since I remain broadly skeptical about it).
I think one reason this has stuck around in academic philosophy is that western philosophy has a powerful anti-skepticism strain, to the point where “you can know something” is almost axiomatic. Everyone wants to have an argument against skepticism; they just haven’t agreed on exactly what it is. Skepticism is every philosopher’s bogeyman. (This even applies to Descartes, the big proponent of the skeptical method.)
In particular, a view that (in my limited experience) academic philosophers want to at least accommodate as possible, if not outright endorse, is the “here is a hand” argument: IE, you can know (with 100% certainty) some simple facts about your situation, such as the fact that you have a hand.
western philosophy has a powerful anti-skepticism strain, to the point where “you can know something” is almost axiomatic
I’m pretty pessimistic about the strain of philosophy as you’ve described it. I have yet to run into a sense of “know” that is binary (i.e. not “believed with probability”) that I would accept as an accurate description of the phenomenon of “knowledge” in the real world rather than as an occasionally useful approximation. Between the preface paradox (or its minor modification, the lottery paradox) and Fitch’s paradox of knowability, I don’t trust the “knowledge” operator in any logical claim.
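(The lottery version, spelled out with toy numbers so it's clear why I don't trust a threshold notion of binary knowledge: take any threshold t < 1 you like for "believed firmly enough to count as known", say t = 0.99, and a fair 1000-ticket lottery with exactly one winner.)
\[
P(\text{ticket } i \text{ loses}) \;=\; 1 - \tfrac{1}{1000} \;=\; 0.999 \;>\; 0.99 \quad \text{for every } i,
\qquad
P\Big(\textstyle\bigwedge_{i=1}^{1000} \text{ticket } i \text{ loses}\Big) \;=\; 0.
\]
So a threshold-based "know" that is closed under conjunction ends up knowing, of each ticket, that it loses, while also knowing that they don't all lose.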
In my limited experience, it feels like a lot of epistemologists have sadly “missed the bus” on this one. Like, they’ve gone so far down the wrong track that it’s a lot of work to even explain how our way of thinking about it could be relevant to their area of concern.
Fair enough, but I’d expect this post to be unhelpful to someone who doesn’t acknowledge a baseline universal uncertainty, and unnecessary for someone who does.
Presumably whatever axioms the anti-sceptic philosophers use to avoid infinite recursion in any knowledge apply here too.
Ah, well. In my experience, there are a lot of people who (a) acknowledge baseline universal uncertainty, but (b) accept the theory of common knowledge, and even the idea that Two Generals is a hard or insoluble problem. So I think a lot of people haven’t noticed the contradiction, or at least haven’t propagated it very far.[1]
Presumably whatever axioms the anti-sceptic philosophers use to avoid infinite recursion in any knowledge apply here too.
Not sure what you mean here.
Based on how this comment section is going, I am updating toward “haven’t propagated it very far”; it seems like maybe a lot of people know somewhere in the back of their minds that common knowledge can’t be literally occurring, but have half-developed hand-wavy theories about why they can go on using the theory as if it applies.