Aumann’s Agreement Theorem—which proves that they will always agree…under special conditions. Those conditions are that they must have common prior beliefs—things they believed before they encountered any of the evidence they know that supports their beliefs—and they must share all the information they have with each other. If they do those two things, then they will be mathematically forced to agree about everything!
To nitpick, this misstates Aumann in several ways. (It’s a nitpick because it’s obvious that you aren’t trying to be precise.)
Aumann does not require that they share all information with each other. This would make the result trivial. Instead, all that is required is common knowledge of each other's posterior beliefs on the one question at hand; given that, they must agree on the probabilities of the answers to that question.
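For reference, here is the standard formal statement (my paraphrase in textbook notation; it isn't taken from the book or the comment above):

```latex
% Aumann (1976): \Omega a finite set of worlds, p a common prior,
% \Pi_1 and \Pi_2 the two agents' information partitions,
% E the event in question, \omega the actual world.
\[
\text{If it is common knowledge at } \omega \text{ that }
p(E \mid \Pi_1(\omega)) = q_1
\ \text{ and } \
p(E \mid \Pi_2(\omega)) = q_2,
\ \text{ then } \ q_1 = q_2 .
\]
```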
Getting more into the weeds, Aumann also assumes partitional evidence, which means that the indistinguishability relation between worlds (i.e., the relation xRy saying you can't rule out being in world x when you are in world y) is reflexive, symmetric, and transitive, and so defines a partition on worlds (the cells are commonly called information sets in game theory). However, some of these assumptions can be weakened while still preserving Aumann's theorem.
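To make the partitional setup concrete, here is a minimal runnable sketch (a toy example of my own, with made-up worlds, partitions, and event; nothing below comes from the book): two agents share a uniform prior over four worlds, each observes only the cell of their own partition, and the cells of the meet of the two partitions are exactly the common-knowledge components.

```python
from fractions import Fraction

# Toy setup (hypothetical): four equally likely worlds under a common prior.
worlds = ["w1", "w2", "w3", "w4"]
prior = {w: Fraction(1, 4) for w in worlds}

# Each agent's information partition: at world w, an agent learns only
# which cell of their partition contains w.
partition_a = [{"w1", "w2"}, {"w3", "w4"}]
partition_b = [{"w1", "w3"}, {"w2", "w4"}]

event = {"w1", "w4"}  # the event whose probability both agents assess

def cell(partition, w):
    """Return the cell of `partition` containing world w."""
    return next(c for c in partition if w in c)

def posterior(partition, w, event):
    """P(event | the agent's information at w), under the common prior."""
    c = cell(partition, w)
    return sum(prior[x] for x in c & event) / sum(prior[x] for x in c)

def meet_cell(w):
    """Cell of the meet (finest common coarsening of both partitions)
    containing w; these cells are the common-knowledge components."""
    component, frontier = {w}, {w}
    while frontier:
        x = frontier.pop()
        for part in (partition_a, partition_b):
            new = cell(part, x) - component
            component |= new
            frontier |= new
    return component

for w in worlds:
    print(w, posterior(partition_a, w, event),
          posterior(partition_b, w, event), sorted(meet_cell(w)))
```

Here both agents' posteriors come out to 1/2 at every world, so the posteriors are constant on the (single) meet cell, hence common knowledge, and they agree, even though the two agents are conditioning on different information.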
Thanks! I should be a bit more careful here. I'm definitely glossing over a lot of details. My goal in the book is to roughly 80/20 things, because I have a lot of material to cover and I don't have the time/energy to write a fully detailed account of everything, so I want to offer pointers that are enough to gesture at the key arguments/insights I think matter on the path to talking about fundamental uncertainty and the inherently teleological nature of knowledge.
I view this as a book written for readers who can search for things, so I expect people to look stuff up for themselves if they want to know more. But I should still be careful and get the high-level summary right, or at least approximately right.
Yep, makes sense.
As someone reading to try to engage with your views, I find the lack of precision frustrating, since I don't know which choices are real vs. didactic. As far as I've read, the book still feels introductory, and I'm wondering where it becomes less so.
To some extent I expect the whole book to be introductory. My model is that the key people I need to reach are those who don’t yet buy the key ideas, not those interested in diving into the finer details.
There are two sets of folks I'm trying to write to. My main audience is STEM folks who may not have engaged deeply with LW Sequences-type stuff and so have no version of these ideas (or have engaged with LW and have naive versions of the ideas). The second, smaller audience is LW-like folks who are, for one reason or another, some flavor of positivist, because they've only engaged with the ideas at a level of abstraction where positivism still seems reasonable.
Curious if you have work with either of the following properties:
1. You expect me to get something out of it by engaging with it;
2. You expect my comments to be able to engage with the “core” or “edge” of your thinking (“core” meaning foundational assumptions with high impact on the rest of your thinking; “edge” meaning the parts you are more actively working out), as opposed to being useful mainly for didactic revisions / fixing details of presentation.
Also curious what you mean by “positivism” here—not because it’s too vague a term, just because I’m curious how you would state it.
For (1), my read is that you already get a lot of the core ideas I want people to understand, so possibly not. Maybe when I write chapter 8 there will be some interesting stuff there, since that will be roughly an expansion of this post to cover lots of miscellaneous things I think are important consequences or implications of the core ideas of the book.
For (2), I'm not quite sure where the edge of my thinking lies these days, since I'm in a phase of territory exploration rather than map drawing: I'm trying to gather data that will help me untangle things I can't yet point to cleanly. The best I can say is that I know I don't intuitively grasp my own embedded nature, even if I understand it theoretically, such that some sense that I am separate from the world permeates my ontology. I'm not really trying to figure anything out, though, just explain the bits I already grasp intuitively.
I think of positivism as the class of theories of truth that claim that the combination of logic and observation can lead to the discovery of a universal ontology (universal in the sense that it's the same for everyone and independent of any observer or what they care about). There's potentially a lot more I could say about the most common positivist takes versus the most careful ones, but I'm not sure there's a need to go into that here.