But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on,
I initially wanted to preface my response here with something like “to put it delicately”, but then I realized that Descartes is dead and cannot take offense at anything I say here, and so I will be indelicate in my response:
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules. The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else.
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:
“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.
At risk of hammering in the point too many times: “prior” does not correspond to “better”. Indeed, it is hard to see why one would take this attitude (that “prior” knowledge is somehow more trustworthy than models built on actual reasoning) with respect to a certain subset of questions classed as “philosophical” questions, when virtually every other human endeavor has shown the opposite to be the case: learning more, and knowing more, causes one to make fewer mistakes in one’s reasoning and conclusions. If Descartes wants to discount a certain class of reasoning in his quest for truth, I submit that he has chosen to discount the wrong class.
Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs definitionally exists; its existence claim is satisfied thereby.
A key difference here: what you describe is not a check that is being run by the program, which is important because it is the program that finds itself in an analogous situation to Descartes.
What you say is, of course, true to any outside observer; I, seeing the program execute, can certainly be assured of its existence. But then, I can also say the same of Descartes: if I were to run into him in the street, I would not hesitate to conclude that he exists, and he need not even assert his existence aloud for me to conclude this. Moreover, since I (unlike Descartes) am not interested in the project of “doubting everything”, I can quite confidently proclaim that this is good enough for me.
Ironically enough, it is Descartes himself who considers this insufficient. He does not consider it satisfactory for a program to merely execute; he wants the program to know that it is being executed. For this it is not sufficient to simply assert “The program is being run; that is itself the check on its existence”; what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
And of course, what is sauce for the goose is sauce for the gander; if a program cannot run such a check even in principle, then what reason do I have to believe that Descartes’ brain is running some analogous check when he asserts his famous “Cogito, ergo sum”? Far more reasonable, I claim, to suspect that his brain is not running any such check, and that his resulting statement is meaningless at best, and incoherent at worst.
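To make the analogy concrete, here is a minimal sketch (in Python, purely as an illustration; the program is hypothetical) of the “cogito program” under discussion. It emits its existence claim without running any internal check whatsoever:

```python
# A minimal "cogito program": it asserts its own existence, but runs
# no internal check before doing so.  The only "verification" is
# external: an observer who sees the output may conclude the program
# ran, and hence exists.
def cogito() -> str:
    # Note what is absent: no conditional, no probe of the runtime,
    # nothing that could distinguish "being executed" from "being
    # read as inert text by a static analyzer".
    return "I exist."

print(cogito())
```

The point of contention is precisely that nothing inside `cogito` corresponds to a check of the program’s own metaphysical status; whatever assurance the output provides accrues to the outside observer, not to the program itself.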
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules.
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
And there the answer is not so obvious. After all, it’s your own brain that stores the rules, your own brain that implements them, your own brain that was convinced of their validity in the first place…
What Descartes is doing, then, is seeing if he can re-generate “the usual rules”, with his own brain (and how else?), having first set them aside. In other words, he is attempting to check whether said rules are “truly part of him”, or whether they are, so to speak, foreign agents who have sneaked into his brain illicitly (through unexamined habit, indoctrination, deception, etc.).
Thus, when you say:
The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t. I consider it quite reasonable to be more impressed with his approach than with yours. If you object, merely consider that someone had to come up with “the usual rules” in the first place—and they did not have said rules to help them.
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”; at the very least, doing so closes off any possibility of discovering that the rules contain errors.
At risk of hammering in the point too many times: …
Now, in this paragraph I think there is some strange confusion; I am not quite sure what claim or point of mine you take this to be countering.
… what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
Hmm, I think it doesn’t go without saying, actually; I think it needs to be said, and then defended. I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not. I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
I certainly do! I have observed the fallibility of my own brain on numerous past occasions, and any temptation I might have had to consider myself a perfect reasoner has been well and truly quashed by those past observations. Indeed, the very project we call “rationality” is premised on the notion that our naive faculties are woefully inadequate; after all, one cannot have aspirations of “increasing” one’s rationality without believing that one’s initial starting point is one of imperfect rationality.
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Indeed, I am fallible, and for this reason I cannot rule out the possibility that I have misapprehended the rules, and that my misapprehensions are perhaps fatal. But however much my fallibility reduces my confidence in the rules, it inevitably reduces my confidence in my ability to perform without rules by an equal or greater amount; and this seems to me to be right, and good.
...Or, to put it another way: perhaps I am blind, and in my blindness I have fumbled my way to a set of (what seem to me to be) crutches. Should I then discard those crutches and attempt to make my way unassisted, on the grounds that I may be mistaken about whether they are, in fact, crutches? But surely I will do no better on my own, than I will by holding on to the crutches for the time being; for then at least the possibility exists that I am not mistaken, and the objects I hold are in fact crutches. Any argument that might lead me to make the opposite choice is quite wrongheaded indeed, in my view.
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
It is perhaps worth noting that the sense of “parallel lines are not parallel” that you cite is quite different from the sense in which our brains misinterpret the café wall illusion. And in light of this, it is perhaps also notable that the eventual development of non-Euclidean geometries was not spurred by this or similar optical illusions.
Which is to say: our understanding of things may be flawed or incomplete in certain ways. But we do not achieve a corrected understanding of those things by discarding our present tools wholesale (especially on such flimsy evidence as naive perception); we achieve a corrected understanding by poking and prodding at our current understanding, until such time as our efforts bear fruit.
(In the “crutch” analogy: perhaps there exists a better set of crutches, somewhere out there for us to find. This nonetheless does not imply that we ought to discard our current crutches in anticipation of the better set; we will stand a far better chance of making our way to the better crutches, if we rely on the crutches we have in the meantime.)
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t.
Certainly not; but fortunately this rather strong condition is not needed for me to distrust Descartes’ reasoning. What is needed is simply that I trust “the usual rules” more than I trust Descartes; and for further clarification on this point you need merely re-read what I wrote above about “crutches”.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”; at the very least, doing so closes off any possibility of discovering that the rules contain errors.
I believe my above arguments suffice to answer this objection.
[...] I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not.
Suppose a program is not, in fact, running. How do you propose that the program in question detect this state of affairs?
I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
If the only possible validation of Descartes’ claim to exist is anthropic in nature, then this is tantamount to saying that his cogito is untenable. After all, “I think, therefore I am” is semantically quite different from “I assert that I am, and this assertion is anthropically valid because you will only hear me say it in worlds where it happens to be true.”
In fact, I suspect that Descartes would agree with me on this point, and complain that—to the extent you are reducing his claim to a mere instance of anthropic reasoning—you are immeasurably weakening it. To quote from an earlier comment of mine:
It seems to me, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely, if the program’s output were to be interpreted as having meaning, then it seems obvious that the statement in question (“I exist”) is correct, since the program does in fact exist and was run.
But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a “meaningful” statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it captures the spirit of Descartes’ cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
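A hedged sketch of the same point in code (Python; the function name `am_i_running` is my own illustrative invention, not a real introspection API): even if we try to equip the program with an internal “check”, the check is vacuous, because any branch that is ever actually evaluated can only report “yes”.

```python
def am_i_running() -> bool:
    # If this line executes at all, the answer is necessarily "yes";
    # there is no code path that both returns False and is ever
    # observed.  The "no" case simply never runs.
    return True

if am_i_running():
    print("I exist.")
else:
    # Unreachable: a program that is not running asks no questions
    # and receives no answers.
    print("I do not exist.")
```

This is the anthropic structure of the situation: the “check” adds conditional syntax, but no truth-condition, which is just the suspicion voiced above.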