We can build software agents that live in virtual environments we’ve constructed, and we can program the agents to never make certain kinds of mistakes (e.g., never make an invalid reasoning step, or never misperceive the state of tiles they’re near). So in that sense, there’s nothing wrong with positing ‘faculties that always get the right answer in practice’, though I expect these to be much harder to evolve than to design.
But a software agent in that environment shouldn’t be able to arrive at 100% certainty that one of its faculties is infallible, if it’s a smart Bayesian. Even we, the programmers, can’t be 100% certain that we programmed the agent correctly. Even an automated proof of correctness won’t get us to 100% certainty, because the theorem-prover’s source code could always have some error (or the hardware it’s running on could have been struck by a stray gamma ray, etc.).
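(As a toy illustration of the gap between ‘never errs in fact’ and ‘rationally certain it never errs’, here’s a minimal Bayesian sketch; the 1% error rate, the 50/50 prior, and all the numbers are invented for the example:)

    # Two hypotheses about a perception faculty with a flawless track record:
    #   H_inf: the faculty is infallible (each correct observation has likelihood 1).
    #   H_err: the faculty errs with probability 0.01 (likelihood 0.99 per observation).
    prior_inf, prior_err = 0.5, 0.5

    for n in (10, 100, 1_000, 10_000):
        like_err = 0.99 ** n  # P(n flawless observations | H_err)
        post_err = prior_err * like_err / (prior_inf * 1.0 + prior_err * like_err)
        print(f"after {n:>6} flawless observations, P(H_err) = {post_err:.3e}")

    # P(H_err) shrinks toward 0 but never reaches it, so P(H_inf) never reaches 1:
    # no finite run of perfect evidence licenses assigning the rival hypothesis
    # probability 0.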
1. I am, I exist (i.e., the Cartesian cogito).
It’s not clear what “I” means here, but it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase “Rob Bensinger”. :)
I’m likewise happy to say that “thinking”, “experience”, etc. can be interpreted in (possibly non-joint-carving) ways that will make them pick out real things.
Oh, sorry, this was a quote from Descartes; it’s the closest thing in the Meditations to “I think, therefore I am” (which doesn’t expressly appear there).
Descartes’s idea doesn’t rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn’t ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given that circumstance. He concludes that he can know that he exists because something has to do the thinking. Here is the relevant quote from the Second Meditation:
I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.
I find this pretty convincing personally. I’m interested in whether you think Descartes gets it wrong even here or whether you think his philosophical system gains its flaws later.
More generally, I’m still not quite sure what precise claims or what type of claim you predict you and Geoff would disagree about. My-model-of-Geoff suggests that he would agree that “it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase ‘Rob Bensinger’”, and that “thinking”, “experience”, etc. pick out “real” things (depending on what we mean by “real”).
Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?
‘Can a deceiver trick a thinker into falsely believing they’re a thinker?’ has relevantly the same structure as ‘Can you pick up a box that’s not a box?’—it deductively follows that ‘no’, because the thinker’s belief in this case wouldn’t be false.
(Though we’ve already established that I don’t believe in infinite certainty. I forgive Descartes for living 60 years before the birth of Thomas Bayes, however. :) And Bayes didn’t figure all this out either.)
Because the logical structure is trivial—Descartes might just as well have asked ‘could a deceiver make 2 + 2 not equal 4?’—I have to worry that Descartes is sneaking in more content than is in fact deducible here. For example, ‘a thought exists, therefore a thinker exists’ may not be deductively true, depending on what is meant by ‘thought’ and ‘thinker’. A lot of philosophers have commented that Descartes should have limited his conclusion to ‘a thought exists’ (or ‘a mental event exists’), rather than ‘a thinker exists’.
Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?
‘Phenomenal consciousness exists’.
I’d guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!
Sorry if this comes off as pedantic, but I don’t know what this means. The philosopher in me keeps saying “I think we’re playing a language game,” so I’d like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely?
Because the logical structure is trivial—Descartes might just as well have asked ‘could a deceiver make 2 + 2 not equal 4?’
[...]
I’d guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!
I don’t know Geoff’s view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful). That’s not the same as “treating them as probabilistic statements,” but I think it’s functionally the same from your perspective.
The project of the Meditations is that Descartes starts by refusing to accept anything which can be doubted, and then tries, nevertheless, to build a system of knowledge from there. I don’t think Descartes would assign infinite certainty to any claim except, perhaps, the cogito.
My view of Descartes’ cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don’t apply, but also it becomes less clear that the cogito is actually a thing which can be “believed” in a meaningful sense to begin with.
I currently think (B) is much closer to being the case than (A). When I try to imagine grounding and/or operationalizing the cogito by e.g. designing a computer program that makes the same claim for the same psychological reasons, I run into a dead end fairly quickly, which in my experience is strong evidence that the initial concept was confused and/or incoherent. Here’s a quick sketch of my reasoning:
Suppose I have a computer program that, when run, prints “I exist” onto the screen. Moreover, suppose this computer program accomplishes this by means of a simple print statement; there is no internal logic, no if-then conditional structure, that modulates the execution of the print statement, merely the naked statement, which is executed every time the program runs. Then I ask: is there a meaningful sense in which the text the program outputs is correct?
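(For concreteness, here is the whole program being described; the choice of language is incidental:)

    # The entire program: one bare, unconditional print statement.
    # No internal logic or check of any kind precedes it.
    print("I exist")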
It seems to me that, on the one hand, the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely if the program’s output were to be interpreted as having meaning, then it seems obvious that the statement in question (“I exist”) is correct, since the program does in fact exist and was run.
But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a “meaningful” statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me to capture the spirit of Descartes’ cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
But when I try to query my intuition, to ask it “Then what reasons are those, exactly?”, I find that I come up blank. It’s a qualitatively similar experience to asking what the truth-condition is for a tautology, e.g. 2 + 2 = 4, except even worse than that, since I could at the very least imagine a world in which 2 + 2 != 4, whereas I cannot even imagine an if-then conditional statement that would capture the (supposed) truth-condition of Descartes’ cogito. The closest (flawed) thing my intuition outputs looks like this:
    if (I AM ACTUALLY BEING RUN RIGHT NOW):
        print("I exist")
    else if (I AM NOT BEING RUN, ONLY DISCUSSED HYPOTHETICALLY):
        print("I don't exist")
Which is obvious nonsense. Obviously. (Though it does inspire an amusing idea for a mathematical horror story about an impossible computer program whose behavior when investigated using static analysis completely differs from its behavior when actually run, because at the beginning of the program is a metaphysical conditional statement that executes different code depending on whether it detects itself to be in static analysis versus actual execution.)
Anyway, the upshot of all this is that I don’t think Descartes’ statement is actually meaningful. I’m not particularly surprised by this; to me, it dovetails strongly with the heuristic “If you’re dealing with a claim that seems to ignore the usual rules, it’s probably not a ‘claim’ in the usual sense”, which would have immediately flagged Descartes for the whole infinite certainty thing, without having to go through the whole “How would I write a computer program that exhibits this behavior for the same reason humans exhibit it?” song-and-dance.
(And for the record: there obviously is a reason humans find Descartes’ argument so intuitively compelling, just as there is a reason humans find the idea of qualia so intuitively compelling. I just think that, as with qualia, the actual psychological reason—of the kind that can be implemented in a real computer program, not a program with weird impossible metaphysical conditional statements—is going to look very different from humans’ stated justifications for the claims in question.)
I think this is quite a wrongheaded way to think about Descartes’ cogito. Consider this, for instance:
My view of Descartes’ cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don’t apply, but also it becomes less clear that the cogito is actually a thing which can be “believed” in a meaningful sense to begin with.
But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on, and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else. He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:
“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”
Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.
Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs, definitionally exists; its existence claim is satisfied thereby.
But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on,
I initially wanted to preface my response here with something like “to put it delicately”, but then I realized that Descartes is dead and cannot take offense to anything I say here, and so I will be indelicate in my response:
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules. The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else.
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:
“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.
At risk of hammering in the point too many times: “prior” does not correspond to “better”. Indeed, it is hard to see why one would take this attitude (that “prior” knowledge is somehow more trustworthy than models built on actual reasoning) with respect to a certain subset of questions classed as “philosophical” questions, when virtually every other human endeavor has shown the opposite to be the case: learning more, and knowing more, causes one to make fewer mistakes in one’s reasoning and conclusions. If Descartes wants to discount a certain class of reasoning in his quest for truth, I submit that he has chosen to discount the wrong class.
Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs, definitionally exists; its existence claim is satisfied thereby.
A key difference here: what you describe is not a check that is being run by the program, which is important because it is the program that finds itself in an analogous situation to Descartes.
What you say is, of course, true to any outside observer; I, seeing the program execute, can certainly be assured of its existence. But then, I can also say the same of Descartes: if I were to run into him in the street, I would not hesitate to conclude that he exists, and he need not even assert his existence aloud for me to conclude this. Moreover, since I (unlike Descartes) am not interested in the project of “doubting everything”, I can quite confidently proclaim that this is good enough for me.
Ironically enough, it is Descartes himself who considers this insufficient. He does not consider it satisfactory for a program to merely execute; he wants the program to know that it is being executed. For this it is not sufficient to simply assert “The program is being run; that is itself the check on its existence”; what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
And of course, what is sauce for the goose is sauce for the gander; if a program cannot run such a check even in principle, then what reason do I have to believe that Descartes’ brain is running some analogous check when he asserts his famous “Cogito, ergo sum”? Far more reasonable, I claim, to suspect that his brain is not running any such check, and that his resulting statement is meaningless at best, and incoherent at worst.
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules.
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
And there the answer is not so obvious. After all, it’s your own brain that stores the rules, your own brain that implements them, your own brain that was convinced of their validity in the first place…
What Descartes is doing, then, is seeing if he can re-generate “the usual rules”, with his own brain (and how else?), having first set them aside. In other words, he is attempting to check whether said rules are “truly part of him”, or whether they are, so to speak, foreign agents who have sneaked into his brain illicitly (through unexamined habit, indoctrination, deception, etc.).
Thus, when you say:
The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t. I consider it quite reasonable to be more impressed with his approach than with yours. If you object, merely consider that someone had to come up with “the usual rules” in the first place—and they did not have said rules to help them.
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”; at the very least, doing so closes off any possibility of discovering that the rules contain errors.
At risk of hammering in the point too many times: …
Now, in this paragraph I think you have some strange confusion. I am not quite sure what claim or point of mine you take this to be countering.
… what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
Hmm, I think it doesn’t go without saying, actually; I think it needs to be said, and then defended. I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not. I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
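(A minimal sketch of the structure I have in mind; the function name and framing are my own. Any such self-check, if it executes at all, can only ever be observed answering “yes”; the “no” case is precisely the case in which the check never runs:)

    def am_i_running() -> bool:
        # If control reaches this line, the program is, by that very fact, running.
        # The "no" answer corresponds to the case where this function is never
        # invoked, and so it can never be returned or observed.
        return True

    if am_i_running():
        print("I exist")  # the only output this program can ever produce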
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
I certainly do! I have observed the fallibility of my own brain on numerous past occasions, and any temptation I might have had to consider myself a perfect reasoner has been well and truly quashed by those past observations. Indeed, the very project we call “rationality” is premised on the notion that our naive faculties are woefully inadequate; after all, one cannot have aspirations of “increasing” one’s rationality without believing that one’s initial starting point is one of imperfect rationality.
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Indeed, I am fallible, and for this reason I cannot rule out the possibility that I have misapprehended the rules, and that my misapprehensions are perhaps fatal. However, regardless of how much my fallibility reduces my confidence in the rules, it inevitably reduces my confidence in my ability to perform without rules by an equal or greater amount; and this seems to me to be right, and good.
...Or, to put it another way: perhaps I am blind, and in my blindness I have fumbled my way to a set of (what seem to me to be) crutches. Should I then discard those crutches and attempt to make my way unassisted, on the grounds that I may be mistaken about whether they are, in fact, crutches? But surely I will do no better on my own, than I will by holding on to the crutches for the time being; for then at least the possibility exists that I am not mistaken, and the objects I hold are in fact crutches. Any argument that might lead me to make the opposite choice is quite wrongheaded indeed, in my view.
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
It is perhaps worth noting that the sense in which “parallel lines are not parallel” which you cite is quite different from the sense in which our brains misinterpret the café wall illusion. And in light of this, it is perhaps also notable that the eventual development of non-Euclidean geometries was not spurred by this or similar optical illusions.
Which is to say: our understanding of things may be flawed or incomplete in certain ways. But we do not achieve a corrected understanding of those things by discarding our present tools wholesale (especially on such flimsy evidence as naive perception); we achieve a corrected understanding by poking and prodding at our current understanding, until such time as our efforts bear fruit.
(In the “crutch” analogy: perhaps there exists a better set of crutches, somewhere out there for us to find. This nonetheless does not imply that we ought to discard our current crutches in anticipation of the better set; we will stand a far better chance of making our way to the better crutches, if we rely on the crutches we have in the meantime.)
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t.
Certainly not; but fortunately this rather strong condition is not needed for me to distrust Descartes’ reasoning. What is needed is simply that I trust “the usual rules” more than I trust Descartes; and for further clarification on this point you need merely re-read what I wrote above about “crutches”.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”; at the very least, doing so closes off any possibility of discovering that the rules contain errors.
I believe my above arguments suffice to answer this objection.
[...] I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not.
Suppose a program is not, in fact, running. How do you propose that the program in question detect this state of affairs?
I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
If the only possible validation of Descartes’ claim to exist is anthropic in nature, then this is tantamount to saying that his cogito is untenable. After all, “I think, therefore I am” is semantically quite different from “I assert that I am, and this assertion is anthropically valid because you will only hear me say it in worlds where it happens to be true.”
In fact, I suspect that Descartes would agree with me on this point, and complain that—to the extent you are reducing his claim to a mere instance of anthropic reasoning—you are immeasurably weakening it. To quote from an earlier comment of mine:
It seems to me that, on the one hand, the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely if the program’s output were to be interpreted as having meaning, then it seems obvious that the statement in question (“I exist”) is correct, since the program does in fact exist and was run.
But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a “meaningful” statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me to capture the spirit of Descartes’ cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
Sorry if this comes off as pedantic, but I don’t know what this means. The philosopher in me keeps saying “I think we’re playing a language game,” so I’d like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely?
We’re all philosophers here, this is a safe space for pedantry. :)
Below, I’ll use the words ‘phenomenal property’ and ‘quale’ interchangeably.
An example of a phenomenal property is the particular redness of a particular red thing in my visual field.
Geoff would say he’s certain, while he’s experiencing it, that this property is instantiated.
I would say that there’s no such property, though there is a highly similar property that serves all the same behavioral/cognitive/functional roles (and just lacks that extra ‘particular redness’, and perhaps that extra ‘inwardness / inner-light-ness / interiority / subjectivity / perspectivalness’—basically, lacks whatever aspects make the hard problem seem vastly harder than the ‘easy’ problems of reducing other mental states to physical ones).
This, of course, is a crazy-sounding view on my part. It’s weird that I even think Geoff and I have a meaningful, substantive disagreement. Like, if I don’t think that Geoff’s brain really instantiates qualia, then what do I think Geoff even means by ‘qualia’? How does Geoff successfully refer to ‘qualia’, if he doesn’t have them? Why not just say that ‘qualia’ refers to something functional?
Two reasons:
I think hard-problem intuitions are grounded in a quasi-perceptual illusion, not a free-floating delusion.
If views like Geoff’s and David Chalmers’ were grounded in a free-floating delusion, then we would just say ‘they have a false belief about their experiences’ and stop there.
If we’re instead positing that there’s something analogous to an optical illusion happening in people’s basic perception of their own experiences, then it makes structural sense to draw some distinction between ‘the thing that’s really there’ and ‘the thing that’s not really there, but seems to be there when we fall for the illusion’.
I may not think that the latter concept really and truly has the full phenomenal richness that Geoff / Chalmers / etc. think it does (for the same reason it’s hard to imagine a p-zombie having a full and correct conception of ‘what red looks like’). But I’m still perfectly happy to use the word ‘qualia’ to refer to it, keeping in mind that I think our concept of ‘qualia’ is more like ‘a promissory note for “the kind of thing we’d need to instantiate in order to justify hard-problem arguments”’—it’s a p-zombie’s notion of qualia, though the p-zombie may not realize it.
I think the hard-problem reasoning is correct, in that if we instantiated properties like those we (illusorily) appear to have, then physicalism would be false, there would be ‘further facts’ over and above the physics facts (that aren’t logically entailed/constrained by physics), etc.
Basically, I’m saying that a p-zombie’s concept of ‘phenomenal consciousness’ (or we can call it ‘blenomenal consciousness’ or something, if we want to say that p-zombies lack the ‘full’ concept) is distinct from the p-zombie’s concept of ‘the closest functional/reducible analog of phenomenal consciousness’. I think this isn’t a weird view. The crazy part is when I take the further step of asserting that we’re p-zombies. :)
I don’t know Geoff’s view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful).
Sorry if this comes off as pedantic, but I don’t know what this means
It doesn’t have to mean anything strange or remarkable. It’s basically ordinary waking consciousness. If you are walking around noticing sounds and colours and smells, that’s phenomenal consciousness. As opposed to things that actually are strange, like blindsight or sleepwalking.
But it can be overloaded with other, more controversial, ideas, such as the idea that it is incorrigible (how we got on to the subject), or necessarily non-physical.
I think it can be reasonable to have 100% confidence in beliefs where the negation of the belief would invalidate the ability to reason, or to benefit from reason. Though with humans, I think it always makes sense to leave an epsilon for errors of reason.
[Disclaimer: not Rob, may not share Rob’s views, etc. The reason I’m writing this comment nonetheless is that I think I share enough of Rob’s relevant views here (not least because I think Rob’s views on this topic are mostly consonant with the LW “canon” view) to explain. Depending on how much you care about Rob’s view specifically versus the LW “canon” view, you can choose to regard or disregard this comment as you see fit.]
I don’t think people should be certain of anything
What about this claim itself?
I don’t think this is the gotcha [I think] you think it is. I think it is consistent to hold that (1) people should not place infinite certainty in any beliefs, including meta-beliefs about the normative best way to construct beliefs, and that (2) since (1) is itself a meta-belief, it too should not be afforded infinite certainty.
Of course, this conjunction has the interesting quality of feeling somewhat paradoxical, but I think this feeling doesn’t stand up to scrutiny. There doesn’t seem to me to be any actual contradiction you can derive from the conjunction of (1) and (2); the first seems simply to be a statement of a paradigm that one currently believes to be normative, and the second is a note that, just because one currently believes a paradigm to be normative, does not necessarily mean that that paradigm is normative. The fact that this second note can be construed as coming from the paradigm itself does not undermine it in my eyes; I think it is perfectly fine for paradigms to exist that fail to assert their own correctness.
I think, incidentally, that there are many people who [implicitly?] hold the negation of the above claim, i.e. they hold that (3) a valid paradigm must be one that has faith in its own validity. The paradigm may still turn out to be false, but this ought not be a possibility that is endorsed from inside the paradigm; just as individuals cannot consistently assert themselves to be mistaken about something (even if they are in fact mistaken), the inside of a paradigm ought not be the kind of thing that can undermine itself. If you hold something like (3) to be the case, then and only then does your quoted question become a gotcha.
Naturally, I think (3) is mistaken. Moreover, I not only think (3) is mistaken, I think it is unreasonable, i.e. I think there is no good reason to want (3) to be the case. I think the relevant paradox here is not Moore’s, but the lottery paradox, which I assert is not a paradox at all (though admittedly counterintuitive if one is not used to thinking in probabilities rather than certainties).
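(To spell out the lottery case numerically; the ticket count is an arbitrary choice for illustration:)

    # 1000 tickets, exactly one of which wins.
    n = 1000
    p_each_loses = 1 - 1 / n  # 0.999: rationally high confidence in each conjunct
    p_all_lose = 0.0          # exactly one ticket wins, so the conjunction is false

    print(f"P(ticket i loses)   = {p_each_loses:.3f} for each individual i")
    print(f"P(all tickets lose) = {p_all_lose}")

    # With binary accept/reject beliefs, accepting each conjunct while rejecting the
    # conjunction looks paradoxical; with probabilities, it is just arithmetic (a
    # conjunction of many merely-probable claims need not itself be probable).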
[There is also a resemblance here to Gödel’s (second) incompleteness theorem, which asserts that sufficiently powerful formal systems cannot prove their own consistency unless they are actually inconsistent. I think this resemblance is more surface-level than deep, but it may provide at least an intuition that (1) there exist at least some “belief systems” that cannot “trust” themselves, and that (2) this is okay.]
We can build software agents that live in virtual environments we’ve constructed, and we can program the agents to never make certain kinds of mistakes (e.g., never make an invalid reasoning step, or never misperceive the state of tiles they’re near). So in that sense, there’s nothing wrong with positing ‘faculties that always get the right answer in practice’, though I expect these to be much harder to evolve than to design.
But a software agent in that environment shouldn’t be able to arrive at 100% certainty that one of its faculties is infallible, if it’s a smart Bayesian. Even we, the programmers, can’t be 100% certain that we programmed the agent correctly. Even an automated proof of correctness won’t get us to 100% certainty, because the theorem-prover’s source code could always have some error (or the hardware it’s running on could have been struck by a spare gamma ray, etc.)
It’s not clear what “I” means here, but it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase “Rob Bensinger”. :)
I’m likewise happy to say that “thinking”, “experience”, etc. can be interpreted in (possibly non-joint-carving) ways that will make them pick out real things.
Oh, sorry, this was a quote from Descartes that is the closest thing that actually appears in Descartes to “I think therefore I am” (which doesn’t expressly appear in the Meditations).
Descartes’s idea doesn’t rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn’t ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given that circumstance. He concludes that he can know that he exists because something has to do the thinking. Here is the relevant quote from the Second Meditation:
I find this pretty convincing personally. I’m interested in whether you think Descartes gets it wrong even here or whether you think his philosophical system gains its flaws later.
More generally, I’m still not quite sure what precise claims or what type of claim you predict you and Geoff would disagree about. My-model-of-Geoff suggests that he would agree with “it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase “Rob Bensinger”.” and that “thinking”, “experience”, etc.” pick out “real” things (depending on what we mean by “real”).
Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?
‘Can a deceiver trick a thinker into falsely believing they’re a thinker?’ has relevantly the same structure as ‘Can you pick up a box that’s not a box?’—it deductively follows that ‘no’, because the thinker’s belief in this case wouldn’t be false.
(Though we’ve already established that I don’t believe in infinite certainty. I forgive Descartes for living 60 years before the birth of Thomas Bayes, however. :) And Bayes didn’t figure all this out either.)
Because the logical structure is trivial—Descartes might just as well have asked ‘could a deceiver make 2 + 2 not equal 4?’—I have to worry that Descartes is sneaking in more content that is in fact deducible here. For example, ‘a thought exists, therefore a thinker exists’ may not be deductively true, depending on what is meant by ‘thought’ and ‘thinker’. A lot of philosophers have commented that Descartes should have limited his conclusion to ‘a thought exists’ (or ‘a mental event exists’), rather than ‘a thinker exists’.
‘Phenomenal consciousness exists’.
I’d guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!
Sorry if this comes off as pedantic, but I don’t know what this means. The philosopher in me keeps saying “I think we’re playing a language game,” so I’d like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely?
I don’t know Geoff’s view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful). That’s not the same as “treating them as probabilistic statements,” but I think it’s functionally the same from your perspective.
The project of the Meditations is that Descartes starts by refusing to accept anything which can be doubted and then he tries to nevertheless build a system of knowledge from there. I don’t think Descartes would assign infinite certainty to any claim except, perhaps, the cogito.
My view of Descartes’ cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don’t apply, but also it becomes less clear that the cogito is actually a thing which can be “believed” in a meaningful sense to begin with.
I currently think (B) is much closer to being the case than (A). When I try to imagine grounding and/or operationalizing the cogito by e.g. designing a computer program that makes the same claim for the same psychological reasons, I run into a dead end fairly quickly, which in my experience is strong evidence that the initial concept was confused and/or incoherent. Here’s a quick sketch of my reasoning:
Suppose I have a computer program that, when run, prints “I exist” onto the screen. Moreover, suppose this computer program accomplishes this via means of a simple print statement; there is no internal logic, no if-then conditional structure, that modulates the execution of the print statement, merely the naked statement, which is executed every time the program runs. Then I ask: is there a meaningful sense in which the text the program outputs is correct?
It seems to me that, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely if the program’s output were to be interpreted as having meaning, then it seems obvious that the statement in question (“I exist”) is correct, since the program does in fact exist and was run.
But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a “meaningful” statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it captures the spirit of Descartes’ cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
But when I try to query my intuition, to ask it “Then what reasons are those, exactly?”, I find that I come up blank. It’s a qualitatively similar experience to asking what the truth-condition is for a tautology, e.g. 2 + 2 = 4, except even worse than that, since I could at the very least imagine a world in which 2 + 2 != 4, whereas I cannot even imagine an if-then conditional statement that would capture the (supposed) truth-condition of Descartes’ cogito. The closest (flawed) thing my intuition outputs looks like this:
Which is obvious nonsense. Obviously. (Though it does inspire an amusing idea for a mathematical horror story about an impossible computer program whose behavior when investigated using static analysis completely differs from its behavior when actually run, because at the beginning of the program is a metaphysical conditional statement that executes different code depending on whether it detects itself to be in static analysis versus actual execution.)
Anyway, the upshot of all this is that I don’t think Descartes’ statement is actually meaningful. I’m not particularly surprised by this; to me, it dovetails strongly with the heuristic “If you’re a dealing with a claim that seems to ignore the usual rules, it’s probably not a ‘claim’ in the usual sense”, which would have immediately flagged Descartes for the whole infinite certainty thing, without having to go through the whole “How would I write a computer program that exhibits this behavior for the same reason humans exhibit it?” song-and-dance.
(And for the record: there obviously is a reason humans find Descartes’ argument so intuitively compelling, just as there is a reason humans find the idea of qualia so intuitively compelling. I just think that, as with qualia, the actual psychological reason—of the kind that can be implemented in a real computer program, not a program with weird impossible metaphysical conditional statements—is going to look very different from humans’ stated justifications for the claims in question.)
I think this is quite a wrongheaded way to think about Descartes’ cogito. Consider this, for instance:
But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on, and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else. He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:
“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”
Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.
Separately from all that, what you say about the hypothetical computer program (with the
print
statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute theprint
statement, it prints something. A program that runs, definitionally exists; its existence claim is satisfied thereby.I initially wanted to preface my response here with something like “to put it delicately”, but then I realized that Descartes is dead and cannot take offense to anything I say here, and so I will be indelicate in my response:
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules. The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
At risk of hammering in the point too many times: “prior” does not correspond to “better”. Indeed, it is hard to see why one would take this attitude (that “prior” knowledge is somehow more trustworthy than models built on actual reasoning) with respect to a certain subset of questions classed as “philosophical” questions, when virtually every other human endeavor has shown the opposite to be the case: learning more, and knowing more, causes one to make fewer mistakes in one’s reasoning and conclusions. If Descartes wants to discount a certain class of reasoning in his quest for truth, I submit that he has chosen to discount the wrong class.
A key difference here: what you describe is not a check that is being run by the program, which is important because it is the program that finds itself in an analogous situation to Descartes.
What you say is, of course, true to any outside observer; I, seeing the program execute, can certainly be assured of its existence. But then, I can also say the same of Descartes: if I were to run into him in the street, I would not hesitate to conclude that he exists, and he need not even assert his existence aloud for me to conclude this. Moreover, since I (unlike Descartes) am not interested in the project of “doubting everything”, I can quite confidently proclaim that this is good enough for me.
Ironically enough, it is Descartes himself who considers this insufficient. He does not consider it satisfactory for a program to merely execute; he wants the program to know that it is being executed. For this it is not sufficient to simply assert “The program is being run; that is itself the check on its existence”; what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
And of course, what is sauce for the goose is sauce for the gander; if a program cannot run such a check even in principle, then what reason do I have to believe that Descartes’ brain is running some analogous check when he asserts his famous “Cogito, ergo sum”? Far more reasonable, I claim, to suspect that his brain is not running any such check, and that his resulting statement is meaningless at best, and incoherent at worst.
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
And there the answer is not so obvious. After all, it’s your own brain that stores the rules, your own brain that implements them, your own brain that was convinced of their validity in the first place…
What Descartes is doing, then, is seeing if he can re-generate “the usual rules”, with his own brain (and how else?), having first set them aside. In other words, he is attempting to check whether said rules are “truly part of him”, or whether they are, so to speak, foreign agents who have sneaked into his brain illicitly (through unexamined habit, indoctrination, deception, etc.).
Thus, when you say:
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry, the surveying of the earth that gave the discipline its name, is precisely a setting where the parallel postulate fails: on the curved surface of the Earth, the “straight” lines are great circles, and any two of them intersect.)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t. I consider it quite reasonable to be more impressed with his approach than with yours. If you object, merely consider that someone had to come up with “the usual rules” in the first place—and they did not have said rules to help them.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t simply defer to “the rules”: at the very least, doing so closes off any possibility of discovering that the rules contain errors.
Now, I think there is some confusion in this paragraph; I am not quite sure what claim or point of mine you take it to be countering.
Hmm, I think it doesn’t go without saying, actually; I think it needs to be said, and then defended. I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not. I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
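(As a hedged illustration of the selection effect being gestured at here, consider the following toy simulation; the setup and numbers are mine, purely for illustration.)

```python
import random

# Toy model: across many hypothetical "worlds", the program is run in
# only some of them. It can pose the question "am I running?" only in
# the worlds where it is in fact run, so every answer it ever receives
# is "yes".
answers = []
for _ in range(10_000):
    if random.random() < 0.5:   # this world happens to run the program
        answers.append("yes")   # the question is only ever posed here
    # in the remaining worlds, no question is asked and no answer recorded

print(set(answers))  # {'yes'}: a selection effect, as with anthropic reasoning
```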
I certainly do! I have observed the fallibility of my own brain on numerous past occasions, and any temptation I might have had to consider myself a perfect reasoner has been well and truly quashed by those past observations. Indeed, the very project we call “rationality” is premised on the notion that our naive faculties are woefully inadequate; after all, one cannot have aspirations of “increasing” one’s rationality without believing that one’s initial starting point is one of imperfect rationality.
Indeed, I am fallible, and for this reason I cannot rule out the possibility that I have misapprehended the rules, and that my misapprehensions are perhaps fatal. But however much my fallibility reduces my confidence in the rules, it reduces my confidence in my ability to reason without rules by an equal or greater amount; and this seems to me right, and good.
...Or, to put it another way: perhaps I am blind, and in my blindness I have fumbled my way to a set of (what seem to me to be) crutches. Should I then discard those crutches and attempt to make my way unassisted, on the grounds that I may be mistaken about whether they are, in fact, crutches? But surely I will do no better on my own, than I will by holding on to the crutches for the time being; for then at least the possibility exists that I am not mistaken, and the objects I hold are in fact crutches. Any argument that might lead me to make the opposite choice is quite wrongheaded indeed, in my view.
It is perhaps worth noting that the sense in which “parallel lines are not parallel”, as you cite it, is quite different from the sense in which our brains misinterpret the café wall illusion. In light of this, it is perhaps also notable that the eventual development of non-Euclidean geometries was not spurred by this or any similar optical illusion.
Which is to say: our understanding of things may be flawed or incomplete in certain ways. But we do not achieve a corrected understanding of those things by discarding our present tools wholesale (especially on such flimsy evidence as naive perception); we achieve a corrected understanding by poking and prodding at our current understanding, until such time as our efforts bear fruit.
(In the “crutch” analogy: perhaps there exists a better set of crutches somewhere out there for us to find. This nonetheless does not imply that we ought to discard our current crutches in anticipation of the better set; we will stand a far better chance of making our way to the better crutches if we rely on the crutches we have in the meantime.)
Certainly not; but fortunately this rather strong condition is not needed for me to distrust Descartes’ reasoning. What is needed is simply that I trust “the usual rules” more than I trust Descartes; and for further clarification on this point you need merely re-read what I wrote above about “crutches”.
I believe my above arguments suffice to answer this objection.
Suppose a program is not, in fact, running. How do you propose that the program in question detect this state of affairs?
If the only possible validation of Descartes’ claim to exist is anthropic in nature, then this is tantamount to saying that his cogito is untenable. After all, “I think, therefore I am” is semantically quite different from “I assert that I am, and this assertion is anthropically valid because you will only hear me say it in worlds where it happens to be true.”
In fact, I suspect that Descartes would agree with me on this point, and complain that—to the extent you are reducing his claim to a mere instance of anthropic reasoning—you are immeasurably weakening it. To quote from an earlier comment of mine:
We’re all philosophers here, this is a safe space for pedantry. :)
Below, I’ll use the words ‘phenomenal property’ and ‘quale’ interchangeably.
An example of a phenomenal property is the particular redness of a particular red thing in my visual field.
Geoff would say he’s certain, while he’s experiencing it, that this property is instantiated.
I would say that there’s no such property, though there is a highly similar property that serves all the same behavioral/cognitive/functional roles (and just lacks that extra ‘particular redness’, and perhaps that extra ‘inwardness / inner-light-ness / interiority / subjectivity / perspectivalness’—basically, lacks whatever aspects make the hard problem seem vastly harder than the ‘easy’ problems of reducing other mental states to physical ones).
This, of course, is a crazy-sounding view on my part. It’s weird that I even think Geoff and I have a meaningful, substantive disagreement. Like, if I don’t think that Geoff’s brain really instantiates qualia, then what do I think Geoff even means by ‘qualia’? How does Geoff successfully refer to ‘qualia’, if he doesn’t have them? Why not just say that ‘qualia’ refers to something functional?
Two reasons:
1. I think hard-problem intuitions are grounded in a quasi-perceptual illusion, not a free-floating delusion.
If views like Geoff’s and David Chalmers’ were grounded in a free-floating delusion, then we would just say ‘they have a false belief about their experiences’ and stop there.
If we’re instead positing that there’s something analogous to an optical illusion happening in people’s basic perception of their own experiences, then it makes structural sense to draw some distinction between ‘the thing that’s really there’ and ‘the thing that’s not really there, but seems to be there when we fall for the illusion’.
I may not think that the latter concept really and truly has the full phenomenal richness that Geoff / Chalmers / etc. think it does (for the same reason it’s hard to imagine a p-zombie having a full and correct conception of ‘what red looks like’). But I’m still perfectly happy to use the word ‘qualia’ to refer to it, keeping in mind that I think our concept of ‘qualia’ is more like ‘a promissory note for “the kind of thing we’d need to instantiate in order to justify hard-problem arguments”’—it’s a p-zombie’s notion of qualia, though the p-zombie may not realize it.
2. I think the hard-problem reasoning is correct, in that if we instantiated properties like those we (illusorily) appear to have, then physicalism would be false, there would be ‘further facts’ over and above the physics facts (that aren’t logically entailed/constrained by physics), etc.
Basically, I’m saying that a p-zombie’s concept of ‘phenomenal consciousness’ (or we can call it ‘blenomenal consciousness’ or something, if we want to say that p-zombies lack the ‘full’ concept) is distinct from the p-zombie’s concept of ‘the closest functional/reducible analog of phenomenal consciousness’. I think this isn’t a weird view. The crazy part is when I take the further step of asserting that we’re p-zombies. :)
Interesting!
It doesn’t have to mean anything strange or remarkable. It’s basically ordinary waking consciousness. If you are walking around noticing sounds, colours, and smells, that’s phenomenal consciousness. As opposed to things that actually are strange, like blindsight or sleepwalking.
But it can be overloaded with other, more controversial ideas, such as the idea that it is incorrigible (which is how we got onto the subject), or necessarily non-physical.
I think it can be reasonable to have 100% confidence in beliefs where the negation of the belief would invalidate the ability to reason, or to benefit from reason. Though with humans, I think it always makes sense to leave an epsilon for errors of reason.
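(A quick sketch of why that epsilon matters, in Bayesian terms; the numbers below are mine and purely illustrative. With a prior of exactly 1, Bayes’ rule leaves no room for any evidence to revise the belief.)

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H|E) by Bayes' rule, for a binary hypothesis H and evidence E."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# With a prior of exactly 1, no evidence can ever move the belief:
print(posterior(1.0, 0.01, 0.99))       # 1.0, however strongly E tells against H
# Leaving an epsilon keeps the belief revisable in principle:
print(posterior(1 - 1e-9, 0.01, 0.99))  # just below 1.0; repeated evidence keeps lowering it
```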
What about this claim itself?
[Disclaimer: not Rob, may not share Rob’s views, etc. The reason I’m writing this comment nonetheless is that I think I share enough of Rob’s relevant views here (not least because I think Rob’s views on this topic are mostly consonant with the LW “canon” view) to explain. Depending on how much you care about Rob’s view specifically versus the LW “canon” view, you can choose to regard or disregard this comment as you see fit.]
I don’t think this is the gotcha [I think] you think it is. I think it is consistent to hold that (1) people should not place infinite certainty in any beliefs, including meta-beliefs about the normative best way to construct beliefs, and that (2) since (1) is itself a meta-belief, it too should not be afforded infinite certainty.
Of course, this conjunction has the interesting quality of feeling somewhat paradoxical, but I think this feeling doesn’t stand up to scrutiny. There doesn’t seem to be any actual contradiction one can derive from the conjunction of (1) and (2): the first is simply a statement of a paradigm that one currently believes to be normative, and the second is a note that currently believing a paradigm to be normative does not necessarily mean that the paradigm is normative. The fact that this second note can be construed as coming from the paradigm itself does not undermine it in my eyes; I think it is perfectly fine for paradigms to exist that fail to assert their own correctness.
I think, incidentally, that there are many people who [implicitly?] hold the negation of the above claim, i.e. they hold that (3) a valid paradigm must be one that has faith in its own validity. The paradigm may still turn out to be false, but this ought not be a possibility that is endorsed from inside the paradigm; just as individuals cannot consistently assert themselves to be mistaken about something (even if they are in fact mistaken), the inside of a paradigm ought not be the kind of thing that can undermine itself. If you hold something like (3) to be the case, then and only then does your quoted question become a gotcha.
Naturally, I think (3) is mistaken. Moreover, I not only think (3) is mistaken, I think it is unreasonable, i.e. I think there is no good reason to want (3) to be the case. I think the relevant paradox here is not Moore’s, but the lottery paradox, which I assert is not a paradox at all (though admittedly counterintuitive if one is not used to thinking in probabilities rather than certainties).
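(To spell out how the lottery paradox dissolves once one thinks in probabilities, here is a small worked sketch; the numbers are mine, for illustration only.)

```python
N = 1_000_000  # lottery tickets, of which exactly one must win

# Each individual belief "ticket i will lose" is held at very high confidence:
p_each_loses = 1 - 1 / N   # 0.999999

# Yet the conjunction "every ticket loses" has probability zero,
# since some ticket is guaranteed to win:
p_all_lose = 0.0

print(p_each_loses, p_all_lose)
# High confidence in each conjunct alongside zero confidence in the
# conjunction: counterintuitive in terms of certainties, unremarkable
# in terms of probabilities.
```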
[There is also a resemblance here to Gödel’s (second) incompleteness theorem, which asserts that sufficiently powerful formal systems cannot prove their own consistency unless they are actually inconsistent. I think this resemblance is more surface-level than deep, but it may provide at least an intuition that (1) there exist at least some “belief systems” that cannot “trust” themselves, and that (2) this is okay.]
On reflection, it seems right to me that there may not be a contradiction here. I’ll post something later if I conclude otherwise.
(I think I got a bit too excited about a chance to use the old philosopher’s move of “what about that claim itself.”)
:) Yeah, it is an interesting case but I’m perfectly happy to say I’m not-maximally-certain about this.