[Disclaimer: not Rob, may not share Rob’s views, etc. The reason I’m writing this comment nonetheless is that I think I share enough of Rob’s relevant views here (not least because I think Rob’s views on this topic are mostly consonant with the LW “canon” view) to explain. Depending on how much you care about Rob’s view specifically versus the LW “canon” view, you can choose to regard or disregard this comment as you see fit.]
I don’t think people should be certain of anything
What about this claim itself?
I don’t think this is the gotcha [I think] you think it is. I think it is consistent to hold that (1) people should not place infinite certainty in any beliefs, including meta-beliefs about the normative best way to construct beliefs, and that (2) since (1) is itself a meta-belief, it too should not be afforded infinite certainty.
Of course, this conjunction has the interesting quality of feeling somewhat paradoxical, but I think this feeling doesn’t stand up to scrutiny. There doesn’t seem to me to be any actual contradiction you can derive from the conjunction of (1) and (2); the first is simply a statement of the paradigm one currently believes to be normative, and the second is a note that the mere fact that one currently believes a paradigm to be normative does not necessarily mean that the paradigm is normative. The fact that this second note can be construed as coming from the paradigm itself does not undermine it in my eyes; I think it is perfectly fine for paradigms to exist that fail to assert their own correctness.
I think, incidentally, that there are many people who [implicitly?] hold the negation of the above claim, i.e. they hold that (3) a valid paradigm must be one that has faith in its own validity. The paradigm may still turn out to be false, but this ought not be a possibility that is endorsed from inside the paradigm; just as individuals cannot consistently assert themselves to be mistaken about something (even if they are in fact mistaken), the inside of a paradigm ought not be the kind of thing that can undermine itself. If you hold something like (3) to be the case, then and only then does your quoted question become a gotcha.
Naturally, I think (3) is mistaken. Moreover, I not only think (3) is mistaken, I think it is unreasonable, i.e. I think there is no good reason to want (3) to be the case. I think the relevant paradox here is not Moore’s, but the lottery paradox, which I assert is not a paradox at all (though admittedly counterintuitive if one is not used to thinking in probabilities rather than certainties).
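To make the lottery point concrete, here is a minimal worked version (the 1000-ticket lottery is just an illustrative choice of numbers on my part, not something from the original exchange). In a fair lottery with exactly one winning ticket out of 1000, it is rational to be very confident, of each individual ticket, that it will lose, while being certain that they cannot all lose:

\[
\Pr(\text{ticket } i \text{ loses}) = 1 - \tfrac{1}{1000} = 0.999 \quad \text{for each } i, \qquad \Pr\Bigl(\bigwedge_{i=1}^{1000} \text{ticket } i \text{ loses}\Bigr) = 0.
\]

The appearance of paradox only arises if you add a rule like “treat anything above 99% as certain, and freely conjoin certainties”; stated directly in probabilities, the two assignments above are jointly consistent, which is the sense in which the “paradox” dissolves. The same move applies to (1) and (2): high-but-not-full confidence in a paradigm coexists without contradiction with the admission that it might be wrong.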
[There is also a resemblance here to Gödel’s (second) incompleteness theorem, which asserts that sufficiently powerful formal systems cannot prove their own consistency unless they are actually inconsistent. I think this resemblance is more surface-level than deep, but it may provide at least an intuition that (a) there exist at least some “belief systems” that cannot “trust” themselves, and that (b) this is okay.]
On reflection, it seems right to me that there may not be a contradiction here. I’ll post something later if I conclude otherwise.
(I think I got a bit too excited about a chance to use the old philosopher’s move of “what about that claim itself.”)
:) Yeah, it is an interesting case, but I’m perfectly happy to say I’m not-maximally-certain about this.