fiddlemath: Anja notes a significant incident at the EA summit, 2014, where Geoff and Anna got into a … pretty heated? … argument about dualism. Seemed very emotional to both parties, given the degree of abstraction of the object-level conversation.
habrykas: Yeah, that was a big deal
habrykas: It caused me to have nightmares, and was a big component of me distancing myself from Leverage
And the transcript mentions:
Anna Salamon: Actually I think there’s something different, though. I think Leverage came—and by “Leverage” in the beginning, I mean you, Geoff—came in with a different pattern of something that I think a lot of people had an immune response to.
Geoff Anders: Yeah. I agree.
Anna Salamon: And a different pattern of something—it was partly the particular way that you weren’t into a particular kind of materialism. The “pole through the head” thing—I can say this more slowly for people to follow it.
This is referring to the same incident, where (at the 2014 EA Summit, which was much larger and more public than previous Leverage-hosted EA events) Anna and Geoff were on a scheduled panel discussion. My recollection from being in the audience was that Anna unexpectedly asked Geoff if he believed that shoving a pole through someone’s brain could change their beliefs (other than via sensory perception), and Geoff reluctantly said ‘no’. I don’t think he elaborated on why, but I took his view to be that various things about the mind are extraphysical and don’t depend on the brain’s state.
I had a couple of conversations with Geoff in 2014 about the hard problem of consciousness, where I endorsed “eliminativism about phenomenal consciousness” or “phenomenal anti-realism” (nowadays I’d use the more specific term “illusionism”, following Keith Frankish), as opposed to “phenomenal reductionism” (phenomenal consciousness exists, and “every positive fact about experience is logically entailed by the physical facts”) and “phenomenal fundamentalism” (it exists, and “some positive facts about experience aren’t logically entailed by the physical facts”).
Geoff never told me his full view, but he said he thought phenomenal fundamentalism was true, and he said he was completely certain that phenomenal anti-realism is false.
(cw: things I consider epistemic traps and mistaken ways of thinking about experience)
I’m the person in the chat who admitted to ‘writing Geoff off on philosophical grounds’ pretty early on. To quote a pair of emails I wrote Geoff after the Twitch stream, elaborating on what I meant by ‘writing off’ and why I ‘wrote him off’ (in that sense) in 2014:
[...]
My impression was that you put extremely high (perhaps maximal?) confidence on ‘your epistemic access to your own experiences’, and that this led you to be confident in some version of ‘consciousness is fundamental’. I didn’t fully understand your view, but it seemed pretty out-there to me, based on the ‘destroying someone’s brain wouldn’t change their beliefs’ thing from your and Anna’s panel discussion at the 2014 EA Summit. This is the vague thing I had in mind when I said ‘Cartesian’; there are other versions of ‘being a Cartesian’ that wouldn’t make me “write someone off”.
By “I wrote Geoff off”, I didn’t mean that I thought you were doing something especially unvirtuous or stupid, and I didn’t mean ‘I won’t talk to Geoff’, ‘I won’t be a friendly colleague of Geoff’, etc. Rather, I meant that as a shorthand for ‘I’m pretty confident Geoff won’t do stuff that’s crucial for securing the light cone, and I think there’s not much potential for growth/change on that front’.
[...]
I think you’re a sharper, faster thinker than me, and I’d guess you know more history-of-philosophy facts and have spent more time thinking about the topics we disagree about. When I think about our philosophical disagreement, I don’t think of it as ‘I’m smarter than Geoff’ or ‘I was born with more epistemic virtue than Geoff’; I think of it as:
Cartesianism / putting-high-confidence-in-introspection (and following the implications of that wherever they lead, with ~maximal confidence) is incredibly intuitive, and it’s sort of amazing from my perspective that so few rationalists have fallen into that trap (and indeed I suspect many of them have insufficiently good inside-view reasons to reject Cartesian reasoning heuristics).
I’m very grateful that they haven’t, and I think their (relatively outside-view-ish) reasons for doubting Cartesianism are correct, but I also think they haven’t fully grokked the power of the opposing view.
Basically I think this is sort of the intuitively-hardest philosophy test anyone has to face, and I mostly endorse not subjecting rationalists and EAs to that and helping them skill up in other ways; but I do think it’s a way to get trapped, especially if you aren’t heavily grounded in Bayesianism/probabilism and the thermodynamic conception of reasoning.
So I don’t think your reasoning chain (to the extent I understand it) was unusually epistemically unvirtuous—I just think you happened on a reasoning style that doesn’t empirically/historically work (our brains just aren’t set up to do that, at least with much confidence/fidelity), but that is ‘self-endorsing’ and has a hard time updating away from itself. Hence I think of it as a trap / a risky memetic virus, not a sin.
And: A large important part of why I did the ‘writing off’ update, in spite of your reasoning chain not (IMO) being super epistemically unvirtuous, is my rough sense of your confidence + environment. (This is the main thing I wish I’d been able to go into in the Twitch chat.)
If I’d modeled you as ‘surrounded by tons of heavyweight philosophers who will argue with you constantly about Cartesianism stuff’, I would not have written you off (or would have only done so weakly). E.g., if I thought you had all the same beliefs but your day job was working side-by-side with Nick Bostrom and Will MacAskill and butting heads a bunch on these topics, I’d have been much more optimistic. My model instead was that Leverage wasn’t heavyweight enough, and was too deferential to you; so insofar as Cartesian views have implications (and I think they ought to have many implications, including more broadly updating you to weird views on tons of other things), I expected you to drag Leverage down more than it could drag you up.
I also had a sense that you weren’t Bayes-y enough? I definitely wouldn’t have written you off if you’d said ‘I’m 65-85% confident in the various components of Cartesianism, depending on time of day and depending on which one you’re asking about; and last year I was significantly less confident, though for the three years prior I was more confident’. (In fact, I think I’d have been sort of impressed that you were able to take such strange views so seriously without having extremal confidence about them.)
What I’m gesturing at with this bullet point is that I modeled you as having a very extreme prior probability (so it would be hard to update), and as endorsing reasoning patterns that make updating harder in this kind of case, and as embedded in a social context that would not do enough to counter this effect.
(If your views on this did change a lot since we last talked about this in Feb 2017 [when I sent you a follow-up email and you reiterated your view on phenomenal consciousness], then I lose Bayes points here.)
And:
Elaborating on the kind of reasoning chain that makes me think Cartesian-ish beliefs lead to lots of wild false views about the world:
1. It seems like the one thing I can know for sure is that I’m having these experiences. The external world is inferred, and an evil demon could trick me about it; but it can’t produce an illusion of ‘I’m experiencing the color red’, since the “illusion” would just amount to it producing the color red in my visual field, which is no illusion at all. (It could produce a delusion, but I don’t just believe I’m experiencing red; I’m actually looking at it as we speak.)
2. The hard problem of consciousness shows that these experiences like red aren’t fully reducible to any purely third-person account, like physics. So consciousness must be fundamental, or reducible to some other sort of thing than physics.
3. Ah, but how did I just type all that if consciousness isn’t part of physics? My keystrokes were physical events. It would be too great a coincidence for my fingers to get all this right without the thing-I’m-right-about causing them to get it right. So my consciousness has to be somehow moving my fingers in different patterns. Therefore:
3a. The laws of physics are wrong, and human minds have extra-physical powers to influence things. This is a large update in favor of some psychic phenomena being real. It also suggests that there’s plausibly some conspiracy on the part of physicists to keep this secret, since it’s implausible they’d have picked up no evidence by now of minds’ special powers. Sean Carroll’s claim that “the laws of physics underlying the phenomena of everyday life are completely known” is not just false—it is suspiciously false, and doesn’t seem like the kind of error you could make by accident. (In which case, what else might there be a scientific conspiracy about? And what’s the scientists’ agenda here? What does this suggest about the overall world order?)
3b. OR ALTERNATIVELY: phenomenal consciousness doesn’t directly causally move matter. In order for my beliefs about consciousness to not be correct entirely by coincidence, then, it seems like some form of occasionalism or pre-established harmony must be true: something outside the universe specifically designed (or is designing) our physical brains in such a way that they will have true beliefs about consciousness. So it seems like our souls are indeed separate from our bodies, and it seems like there’s some sort of optimizer outside the universe that cares a lot about whether we’re aware that we have souls—whence we need to update a lot in favor of historical religious claims having merit.
Whether you end up going down path 3a or path 3b, I think these ideas are quite false, and have the potential to leak out and cause more and more of one’s world-view to be wrong. I think the culprit is the very first step, even though it sounded reasonable as stated.
Rob: Where does the reasoning chain from 1 to 3a/3b go wrong in your view? I get that you think it goes wrong in that the conclusions aren’t true, but what is your view about which premise is wrong or why the conclusion doesn’t follow from the premises?
In particular, I’d be really interested in an argument against the claim “It seems like the one thing I can know for sure is that I’m having these experiences.”
I think that the place the reasoning goes wrong is at 1 (“It seems like the one thing I can know for sure is that I’m having these experiences.”). I think this is an incredibly intuitive view, and a cornerstone of a large portion of philosophical thought going back centuries. But I think it’s wrong.
(At least, it’s wrong—and traplike—when it’s articulated as “know for sure”. I have no objection to having a rather high prior probability that one’s experiences are real, as long as a reasonably large pile of evidence to the contrary could change your mind. But from a Descartes-ish perspective, ‘my experiences might not be real’ is just as absurd as ‘my experiences aren’t real’; the whole point is that we’re supposed to have certainty in our experiences.)
Here’s how I would try to motivate ‘illusionism is at least possibly true’ today, and more generally ‘there’s no way for a brain to (rationally) know with certainty that any of its faculties are infallible’:
_________________________________________________
First, to be clear: I share the visceral impression that my own consciousness is infallibly manifest to me, that I couldn’t possibly not be having this experience.
Even if all my beliefs are unreliable, the orange quale itself is no belief, and can’t be ‘wrong’. Sure, it could bear no resemblance to the external world—it could be a hallucination. But the existence of hallucinations can’t be a hallucination, trivially. If it merely ‘seems to me’, perceptually, as though I’m seeing orange—well, that perceptual seeming is the orange quale!
In some sense, it feels as though there’s no ‘gap’ between the ‘knower’ and the ‘known’. It feels as though I’m seeing the qualia, not some stand-in representation for qualia that could be mistaken.
All of that feels right to me, even after 10+ years of being an illusionist. But when I poke at it sufficiently, I think it doesn’t actually make sense.
Intuition pump 1: How would my physical brain, hands, etc. know any of this? For a brain to accurately represent some complex, logically contingent fact, it has to causally interact (at least indirectly / at some remove) with that fact. (Cf. The Second Law of Thermodynamics, and Engines of Cognition.)
Somehow I must have just written this comment. So some causal chain began in one part of my physical brain, which changed things about other parts of my brain, which changed things about how I moved my fingers and hands, which changed things about the contents of this comment.
What, even in principle, would it look like for one part of a brain to have infallible, “direct” epistemic access to a thing, and to then transmit this fact to some other part of the brain?
It’s easy to see how this works with, e.g., ‘my brain has (fallible, indirect) knowledge of how loud my refrigerator is’. We could build that causal model, showing how the refrigerator’s workings change things about the air in just the right way, to change things about my ears in just the right way, to change things about my brain in just the right way, to let me output accurate statements about the fridge’s loudness.
It’s even easy to see how this works with a lot of introspective facts, as long as we don’t demand infallibility or ‘directness’. One part of my brain can detect whether another part of my brain is in some state.
But what would it look like, even in principle, for one set of neurons that ‘has immediate infallible epistemic access to X’ to transmit that fact to another set of neurons in the brain? What would it look like to infallibly transmit it, such that a gamma ray couldn’t randomly strike your brain to make things go differently (since if it’s epistemically possible that a gamma ray could do that, you can’t retain certainty across transmissions-between-parts-of-your-brain)? What would it look like to not only infallibly transmit X, but infallibly transmit the (true, justified) knowledge of that very infallibility?
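A toy way to see the transmission point numerically (my own illustration, not part of the original argument; the error rate and hop count are made up): even if each physical “hop” between parts of the brain corrupts the signal with only a tiny probability, certainty cannot survive a chain of such hops.

# Toy calculation with invented numbers: per-hop corruption is tiny,
# but the probability the signal arrives intact is still strictly < 1.
p_corrupt_per_hop = 1e-9      # hypothetical per-hop corruption probability
hops = 5                      # hypothetical number of transmission steps

p_intact = (1 - p_corrupt_per_hop) ** hops
print(p_intact)               # less than 1.0, however small the per-hop rate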
This is an impossible enough problem, AFAICT, but it’s just a warm-up for:
Intuition pump 2: What would it look like for even one part of a brain to have ‘infallible’ ‘direct’ access to something ‘manifest’?
If we accepted, from intuition pump 1, that you can’t transmit ‘infallible manifestness’ across different parts of the brain (even potentially quite small parts), we would still maybe be able to say:
‘I am not my brain. I am a sufficiently small part of my brain that is experiencing this thing. I may be helpless to transmit any of that to my hands, or even to any other portion of my brain. But that doesn’t change the fact that I have this knowledge—I, the momentarily-existing locked-in entity with no causal ability to transmit this knowledge to the verbal loop thinking these thoughts, the hands writing these sentences, or to my memory, or even to my own future self a millisecond from now.’
OK, let’s grant all that.
… But how could even that work?
Like, how do you build a part of a brain, or a part of a computer, to have infallible access to its own state and to rationally know that it’s infallible in this regard? How would you design a part of an AI to satisfy that property, such that it’s logically impossible for a gamma ray (or whatever) to make that-part-of-the-AI wrong? What would the gears and neural spike patterns underlying that knowing/perceiving/manifestness look like?
It’s one thing to say ‘there’s something it’s like to be that algorithm’; it’s quite another to say ‘there’s something it’s like to be that algorithm, and the algorithm has knowably infallible epistemic access to that what-it’s-like’. How do you design an algorithm like that, even in principle?
I think this is the big argument. I want to see a diagram of what this ‘manifestness’ thing could look like, in real life. I think there’s no good substitute for the process of actually trying to diagram it out.
Intuition Pump 3: The reliability of an organism’s introspection vs. its sensory observation is a contingent empirical fact.
We can imagine building a DescartesBot that has incredibly unreliable access to its external environment, but has really quite accurate (though maybe not infallible) access to its internal state. E.g., its sensors suck, but its brain is able to represent tons of facts about its own brain with high reliability (though perhaps not infallibility), and to form valid reasoning chains incorporating those facts. If humans are like DescartesBot, then we should at least be extremely wary of letting our scientific knowledge trump our phenomenological knowledge, when the two seem to conflict.
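A toy sketch of what DescartesBot might look like (my own illustration; the class, the error rates, and the internal state are all invented for concreteness):

import random

class DescartesBot:
    # Toy agent: very unreliable external sensing, very reliable
    # (though still not infallible) introspection.
    def __init__(self):
        self.internal_state = {"hunger": 0.2, "current_goal": "recharge"}

    def sense_distance_to_wall(self, true_distance):
        # External sensing: readings are swamped by noise.
        return true_distance + random.gauss(0, 50.0)

    def introspect(self, key):
        # Introspection: accurate except for a rare read error.
        if random.random() < 1e-6:
            return None  # corrupted read
        return self.internal_state[key]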
But humanity’s track record is the opposite of DescartesBot’s—we seem way better at sensing properties of our external environment, and drawing valid inferences about those properties, than at doing the same for our own introspected mental states. E.g., people are frequently wrong about their own motives and the causes of their behavior, but they’re rarely wrong about how big a given chair is.
This isn’t a knock-down argument, but it’s a sort of ‘take a step back’ argument that asks whether we should expect that we’d be the sorts of evolved organisms that have anything remotely approaching introspective certitude about various states of our brain. Does that seem like the genre-savvy view, the view that rhymes more with the history of science to date, the view that matches the apparent character of the rest of our knowledge of the world?
I think some sort of ‘taste for what’s genre-savvy’ is a surprisingly important component of how LW has avoided this epistemic trap. Even when folks here don’t know how to articulate their intuitions or turn them into explicit arguments, they’ve picked up on some important things about how this stuff tends to work.
If you want something that’s more philosopher-ish, and a bit further from how I think about the topic today, here’s what I said to Geoff in 2014 (in part):
[...]
Phenomenal realism [i.e., the belief that we are phenomenally conscious] has lots of prima facie plausibility, and standard reductionism looks easily refuted by the hard problem. But my experience is that the more one shifts from a big-picture ‘is reductionism tenable?’ to a detailed assessment of the non-physicalist options, the more problems arise—for interactionism and epiphenomenalism alike, for panpsychism and emergent dualism alike, for property and substance and ‘aspect’ dualism alike, for standard fundamentalism and ‘reductionism-to-nonphysical-properties’ alike.
All of the options look bad, and I take that as a strong hint that there’s something mistaken at a very deep level about introspection, and/or about our concept of ‘phenomenal consciousness’. We’re clearly conscious in some sense—we have access consciousness, ‘awake’ consciousness, and something functionally similar to phenomenal consciousness (we might call it ‘functional consciousness,’ or zombie consciousness) that’s causally responsible for all the papers our fingers write about the hard problem. But the least incredible of the available options is that there’s an error at the root of our intuitions (or, I’d argue, our perception-like introspection). It’s not as though we have evolutionary or neuroscientific reasons to expect brains to be as good at introspection or phenomenological metaphysics as they are at perceiving and manipulating ordinary objects.
[...]
Eliminativism is definitely counter-intuitive, and I went through many views of consciousness before arriving at it. It’s especially intuitions-subverting to those raised on Descartes and the phenomenological tradition. There are several ways I motivate and make sense of eliminativism:
(I’ll assume, for the moment, that the physical world is causally closed; if you disagree in a way that importantly undermines one of my arguments, let me know.)
1. Make an extremely strong case against both reductionism and fundamentalism. Then, though eliminativism still seems bizarre—we might even be tempted to endorse mysterianism here—we at least have strong negative grounds to suspect that it’s on the right track.
2. Oversimplifying somewhat: reductionism is conceptually absurd, fundamentalism is metaphysically absurd (for the reasons I gave in my last e-mail), and eliminativism is introspectively absurd. There are fairly good reasons to expect evolution to have selected for brains that are good at manipulating concepts (so we can predict the future, infer causality, relate instances to generalizations, …), and good reasons to expect evolution to have selected for brains that are good at metaphysics (so we can model reality, have useful priors, usefully update them, …). So, from an outside perspective, we should penalize reductionism and fundamentalism heavily for violating our intuitions about, respectively, the implications of our concepts and the nature of reality.
The selective benefits of introspection, on the other hand, are less obvious. There are clear advantages to knowing some things about our brains—to noticing when we’re hungry, to reflecting upon similarities between a nasty smell and past nasty smells, to verbally communicating our desires. But it’s a lot less obvious that the character of phenomenal consciousness is something our ancestral environment would have punished people for misinterpreting. As long as you can notice the similarity-relations between experiences, their spatial and temporal structure, etc.—all their functional properties—it shouldn’t matter to evolution whether or not you can veridically introspect their nonfunctional properties, since (ex hypothesi) it makes no difference whatsoever which nonfunctional properties you instantiate.
And just as there’s no obvious evolutionary reason for you to be able to tell which quale you’re instantiating, there’s also no obvious evolutionary reason for you to be able to tell that you’re instantiating qualia at all.
Our cognition about P-consciousness looks plausibly like an evolutionary spandrel, a side-effect shaped by chance neural processes and genetic drift. Can we claim a large enough confidence in this process, all things considered, to refute mainstream physics?
3. The word ‘consciousness’ has theoretical content. It’s not, for instance, a completely bare demonstrative act—like saying ‘something is going on, and whatever it is, I dub it [foo]’, or ‘that, whatever it is, is [foo]’. If ‘I’m conscious’ were as theory-neutral as all that, then absolutely anything could count equally well as a candidate referent—a hat, the entire physical universe, etc.
Instead, implicitly embedded within the idea of ‘consciousness’ are ideas about what could or couldn’t qualify as a referent. As soon as we build in those expectations, we leave the charmed circle of the cogito and can turn out to be mistaken.
4. I’ll be more specific. When I say ‘I’m experiencing a red quale’, I think there are at least two key ideas we’re embedding in our concept ‘red quale’. One is subjectivity or inwardness: P-consciousness, unlike a conventional physical system, is structured like a vantage point plus some object-of-awareness. A second is what we might call phenomenal richness: the redness I’m experiencing is that specific hue, even though it seems like a different color (qualia inversion, alien qualia) or none at all (selective blindsight) would have sufficed.
I think our experiences’ apparent inwardness is what undergirds the zombie argument. Experiences and spacetime regions seem to be structured differently, and the association between the two seems contingent, because we have fundamentally different mental modules for modeling physical v. mental facts. You can always entertain the possibility that something is a zombie, and you can always entertain the possibility that something (e.g., a rock, or a starfish) has a conscious inner life, without thereby imagining altering its physical makeup. Imagining that a rock could be on fire without changing its physical makeup seems absurd, because fire and rocks are in the same magisterium; and imagining that an experience of disgust could include painfulness without changing its phenomenal character seems absurd, because disgust and pain are in the same magisterium; but when you cross magisteria, anything goes, at least in terms of what our brains allow us to posit in thought experiments.
Conceptually, mind and matter operate like non-overlapping magisteria; but an agent could have a conceptual division like that without actually being P-conscious or actually having an ‘inside’ irreducibly distinct from its physical ‘outside’. You could design an AI like that, much like Chalmers imagines designing an AI that spontaneously outputs ‘I think therefore I am’ and ‘my experiences aren’t fully reducible to any physical state’.
5. Phenomenal richness, I think, is a lot more difficult to make sense of (for physicalists) than inwardness. Chalmers gestures toward some explanations, but it still seems hard to tell an evolutionary/cognitive story here. The main reframe I find useful here is to recognize that introspected experiences aren’t atoms; they have complicated parts, structures, and dynamics. In particular, we can peek under the hood by treating them as metacognitive representations of lower-order neural states. (E.g., the experience of pain perhaps represents somatic damage, but it also represents the nociceptors carrying pain signals to my brain.)
With representation comes the possibility of misrepresentation. Sentence-shaped representations (‘beliefs’) can misrepresent, when people err or are deluded; and visual-field-shaped representations (‘visual perceptions’) can misrepresent, when people are subject to optical illusions or hallucinations. The metacognitive representations (of beliefs, visual impressions, etc.) we call ‘conscious experiences’, then, can also misrepresent what features are actually present in first-order experiences.
Dennett makes a point like this, but he treats the relevant metarepresentations as sentence-shaped ‘judgments’ or ‘hunches’. I would instead say that the relevant metarepresentations look like environmental perceptions, not like beliefs.
When conscious experience is treated like a real object ‘grasped’ by a subject, it’s hard to imagine how you could be wrong about your experience—after all, it’s right there! But when I try to come up with a neural mechanism for my phenomenal judgments, or a neural correlate for my experience of phenomenal ‘manifestness’, I run into the fact that consciousness is a representation like any other, and can have representational content that isn’t necessarily there.
In other words, it is not philosophically or scientifically obligatory to treat the introspectible contents of my visual field as real objects I grasp; one can instead treat them as intentional objects, promissory notes that may or may not be fulfilled. It is a live possibility that human introspection : a painting of a unicorn :: phenomenal redness : a unicorn, even though the more natural metaphor is to think of phenomenal redness as the painting’s ‘paint’. More exactly, the analogy is to a painting of a painting, where the first painting mostly depicts the second accurately, but gets a specific detail (e.g., its saturation level or size) systematically wrong.
One nice feature of this perspective shift is that treating phenomenal redness as an intentional object doesn’t prove that it isn’t present; but it allows us to leave the possibility of absence open at the outset, and evaluate the strengths and weaknesses of eliminativism, reductionism, and fundamentalism without assuming the truth or falsity of any one of them.
It seems to me that you’re arguing against a view in the family of claims that include “It seems like the one thing I can know for sure is that I’m having these experiences”, but I’m having trouble determining the precise claim you are refuting. I think this is because I’m not sure which claims are meant precisely and which are meant rhetorically or directionally.
Since this is a complex topic with lots of potential distinctions to be made, it might be useful to pin down your views on a few different claims in the family of “It seems like the one thing I can know for sure is that I’m having these experiences” to determine where the disagreement lies.
Below are some claims in this family. Can you pinpoint which you think are fallible and which you think are infallible (if any)? Assuming that many or most of them are fallible, can you give me a sense of something like “how susceptible to fallibility” you think they are? (Also, if you don’t mind, it might be useful to distinguish your views from what your-model-of-Geoff thinks, to help pinpoint disagreements.) Feel free to add additional claims if they seem like they would do a better job of pinpointing the disagreement.
I am, I exist (i.e., the Cartesian cogito).
I am thinking.
I am having an experience.
I am experiencing X.
I experienced X.
I am experiencing X because there is an X-producing thing in the world.
I believe X.
I am having the experience of believing X.
Edit: Wrote this before seeing this comment, so apologies if this doesn’t interact with the content there.
We can build software agents that live in virtual environments we’ve constructed, and we can program the agents to never make certain kinds of mistakes (e.g., never make an invalid reasoning step, or never misperceive the state of tiles they’re near). So in that sense, there’s nothing wrong with positing ‘faculties that always get the right answer in practice’, though I expect these to be much harder to evolve than to design.
But a software agent in that environment shouldn’t be able to arrive at 100% certainty that one of its faculties is infallible, if it’s a smart Bayesian. Even we, the programmers, can’t be 100% certain that we programmed the agent correctly. Even an automated proof of correctness won’t get us to 100% certainty, because the theorem-prover’s source code could always have some error (or the hardware it’s running on could have been struck by a stray gamma ray, etc.).
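To illustrate that last point concretely (my own sketch, with arbitrary numbers): under Bayes’ rule, no finite amount of evidence can push a prior that starts below 1 all the way to probability 1.

# Toy Bayesian update with invented numbers: even overwhelming evidence
# for "my faculty F is infallible" leaves the posterior short of 1,
# so long as the prior is short of 1 and the likelihood ratio is finite.
prior = 0.999                      # hypothetical prior that F is infallible
likelihood_ratio = 10**6           # hypothetical strength of the evidence

odds = (prior / (1 - prior)) * likelihood_ratio
posterior = odds / (1 + odds)
print(posterior)                   # very close to 1, but never equal to 1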
1. I am, I exist (i.e., the Cartesian cogito).
It’s not clear what “I” means here, but it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase “Rob Bensinger”. :)
I’m likewise happy to say that “thinking”, “experience”, etc. can be interpreted in (possibly non-joint-carving) ways that will make them pick out real things.
Oh, sorry, this was a quote from Descartes; it’s the closest thing that actually appears in Descartes to “I think therefore I am” (which doesn’t expressly appear in the Meditations).
Descartes’s idea doesn’t rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn’t ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given that circumstance. He concludes that he can know that he exists because something has to do the thinking. Here is the relevant quote from the Second Meditation:
I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.
I find this pretty convincing personally. I’m interested in whether you think Descartes gets it wrong even here or whether you think his philosophical system gains its flaws later.
More generally, I’m still not quite sure what precise claims or what type of claim you predict you and Geoff would disagree about. My-model-of-Geoff suggests that he would agree with “it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase ‘Rob Bensinger’” and that “thinking”, “experience”, etc. pick out “real” things (depending on what we mean by “real”).
Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?
‘Can a deceiver trick a thinker into falsely believing they’re a thinker?’ has relevantly the same structure as ‘Can you pick up a box that’s not a box?’—it deductively follows that ‘no’, because the thinker’s belief in this case wouldn’t be false.
(Though we’ve already established that I don’t believe in infinite certainty. I forgive Descartes for living 60 years before the birth of Thomas Bayes, however. :) And Bayes didn’t figure all this out either.)
Because the logical structure is trivial—Descartes might just as well have asked ‘could a deceiver make 2 + 2 not equal 4?’—I have to worry that Descartes is sneaking in more content than is in fact deducible here. For example, ‘a thought exists, therefore a thinker exists’ may not be deductively true, depending on what is meant by ‘thought’ and ‘thinker’. A lot of philosophers have commented that Descartes should have limited his conclusion to ‘a thought exists’ (or ‘a mental event exists’), rather than ‘a thinker exists’.
Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?
‘Phenomenal consciousness exists’.
I’d guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!
Sorry if this comes off as pedantic, but I don’t know what this means. The philosopher in me keeps saying “I think we’re playing a language game,” so I’d like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely?
Because the logical structure is trivial—Descartes might just as well have asked ‘could a deceiver make 2 + 2 not equal 4?’
[...]
I’d guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!
I don’t know Geoff’s view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful). That’s not the same as “treating them as probabilistic statements,” but I think it’s functionally the same from your perspective.
The project of the Meditations is that Descartes starts by refusing to accept anything which can be doubted and then he tries to nevertheless build a system of knowledge from there. I don’t think Descartes would assign infinite certainty to any claim except, perhaps, the cogito.
My view of Descartes’ cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don’t apply, but also it becomes less clear that the cogito is actually a thing which can be “believed” in a meaningful sense to begin with.
I currently think (B) is much closer to being the case than (A). When I try to imagine grounding and/or operationalizing the cogito by e.g. designing a computer program that makes the same claim for the same psychological reasons, I run into a dead end fairly quickly, which in my experience is strong evidence that the initial concept was confused and/or incoherent. Here’s a quick sketch of my reasoning:
Suppose I have a computer program that, when run, prints “I exist” onto the screen. Moreover, suppose this computer program accomplishes this via means of a simple print statement; there is no internal logic, no if-then conditional structure, that modulates the execution of the print statement, merely the naked statement, which is executed every time the program runs. Then I ask: is there a meaningful sense in which the text the program outputs is correct?
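(Concretely, the program being described is just something like this; a minimal sketch, with nothing load-bearing beyond the bare, unconditional print statement:)

# The entire program: no internal logic, no conditionals, no checks.
print("I exist")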
It seems to me, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely, if the program’s output were to be interpreted as having meaning, then it seems obvious that the statement in question (“I exist”) is correct, since the program does in fact exist and was run.
But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a “meaningful” statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it captures the spirit of Descartes’ cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
But when I try to query my intuition, to ask it “Then what reasons are those, exactly?”, I find that I come up blank. It’s a qualitatively similar experience to asking what the truth-condition is for a tautology, e.g. 2 + 2 = 4, except even worse than that, since I could at the very least imagine a world in which 2 + 2 != 4, whereas I cannot even imagine an if-then conditional statement that would capture the (supposed) truth-condition of Descartes’ cogito. The closest (flawed) thing my intuition outputs looks like this:
if (I AM ACTUALLY BEING RUN RIGHT NOW):
print("I exist")
elif (I AM NOT BEING RUN, ONLY DISCUSSED HYPOTHETICALLY):
print("I don't exist")
Which is obvious nonsense. Obviously. (Though it does inspire an amusing idea for a mathematical horror story about an impossible computer program whose behavior when investigated using static analysis completely differs from its behavior when actually run, because at the beginning of the program is a metaphysical conditional statement that executes different code depending on whether it detects itself to be in static analysis versus actual execution.)
Anyway, the upshot of all this is that I don’t think Descartes’ statement is actually meaningful. I’m not particularly surprised by this; to me, it dovetails strongly with the heuristic “If you’re dealing with a claim that seems to ignore the usual rules, it’s probably not a ‘claim’ in the usual sense”, which would have immediately flagged Descartes for the whole infinite certainty thing, without having to go through the whole “How would I write a computer program that exhibits this behavior for the same reason humans exhibit it?” song-and-dance.
(And for the record: there obviously is a reason humans find Descartes’ argument so intuitively compelling, just as there is a reason humans find the idea of qualia so intuitively compelling. I just think that, as with qualia, the actual psychological reason—of the kind that can be implemented in a real computer program, not a program with weird impossible metaphysical conditional statements—is going to look very different from humans’ stated justifications for the claims in question.)
I think this is quite a wrongheaded way to think about Descartes’ cogito. Consider this, for instance:
My view of Descartes’ cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don’t apply, but also it becomes less clear that the cogito is actually a thing which can be “believed” in a meaningful sense to begin with.
But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on, and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else. He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:
“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”
Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.
Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs definitionally exists; its existence claim is thereby satisfied.
But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on,
I initially wanted to preface my response here with something like “to put it delicately”, but then I realized that Descartes is dead and cannot take offense to anything I say here, and so I will be indelicate in my response:
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules. The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else.
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:
“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.
At risk of hammering in the point too many times: “prior” does not correspond to “better”. Indeed, it is hard to see why one would take this attitude (that “prior” knowledge is somehow more trustworthy than models built on actual reasoning) with respect to a certain subset of questions classed as “philosophical” questions, when virtually every other human endeavor has shown the opposite to be the case: learning more, and knowing more, causes one to make fewer mistakes in one’s reasoning and conclusions. If Descartes wants to discount a certain class of reasoning in his quest for truth, I submit that he has chosen to discount the wrong class.
Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs, definitionally exists; its existence claim is satisfied thereby.
A key difference here: what you describe is not a check that is being run by the program, which is important because it is the program that finds itself in an analogous situation to Descartes.
What you say is, of course, true to any outside observer; I, seeing the program execute, can certainly be assured of its existence. But then, I can also say the same of Descartes: if I were to run into him in the street, I would not hesitate to conclude that he exists, and he need not even assert his existence aloud for me to conclude this. Moreover, since I (unlike Descartes) am not interested in the project of “doubting everything”, I can quite confidently proclaim that this is good enough for me.
Ironically enough, it is Descartes himself who considers this insufficient. He does not consider it satisfactory for a program to merely execute; he wants the program to know that it is being executed. For this it is not sufficient to simply assert “The program is being run; that is itself the check on its existence”; what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
And of course, what is sauce for the goose is sauce for the gander; if a program cannot run such a check even in principle, then what reason do I have to believe that Descartes’ brain is running some analogous check when he asserts his famous “Cogito, ergo sum”? Far more reasonable, I claim, to suspect that his brain is not running any such check, and that his resulting statement is meaningless at best, and incoherent at worst.
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules.
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
And there the answer is not so obvious. After all, it’s your own brain that stores the rules, your own brain that implements them, your own brain that was convinced of their validity in the first place…
What Descartes is doing, then, is seeing if he can re-generate “the usual rules”, with his own brain (and how else?), having first set them aside. In other words, he is attempting to check whether said rules are “truly part of him”, or whether they are, so to speak, foreign agents who have sneaked into his brain illicitly (through unexamined habit, indoctrination, deception, etc.).
Thus, when you say:
The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t. I consider it quite reasonable to be more impressed with his approach than with yours. If you object, merely consider that someone had to come up with “the usual rules” in the first place—and they did not have said rules to help them.
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”. At the very least, doing so closes off the possibility of discovering that the rules contain errors.
At risk of hammering in the point too many times: …
Now, in this paragraph I think you have some strange confusion. I am not quite sure what claim or point of mine you take this to be countering.
… what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
Hmm, I think it doesn’t go without saying, actually; I think it needs to be said, and then defended. I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not. I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
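(A toy way to picture that point; the function name is my own invention for illustration. The question can be posed, but the only answer a running program ever actually receives is “yes”, since the “no” branch is never executed:)

def am_i_running() -> bool:
    # If this line is executing at all, the answer is trivially "yes";
    # no running program ever observes the "no" case.
    return True

print(am_i_running())  # always prints True when actually run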
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
I certainly do! I have observed the fallibility of my own brain on numerous past occasions, and any temptation I might have had to consider myself a perfect reasoner has been well and truly quashed by those past observations. Indeed, the very project we call “rationality” is premised on the notion that our naive faculties are woefully inadequate; after all, one cannot have aspirations of “increasing” one’s rationality without believing that one’s initial starting point is one of imperfect rationality.
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Indeed, I am fallible, and for this reason I cannot rule out the possibility that I have misapprehended the rules, and that my misapprehensions are perhaps fatal. However, regardless of however much my fallibility reduces my confidence in the rules, it inevitably reduces my confidence in my ability to perform without rules by an equal or greater amount; and this seems to me to be right, and good.
...Or, to put it another way: perhaps I am blind, and in my blindness I have fumbled my way to a set of (what seem to me to be) crutches. Should I then discard those crutches and attempt to make my way unassisted, on the grounds that I may be mistaken about whether they are, in fact, crutches? But surely I will do no better on my own, than I will by holding on to the crutches for the time being; for then at least the possibility exists that I am not mistaken, and the objects I hold are in fact crutches. Any argument that might lead me to make the opposite choice is quite wrongheaded indeed, in my view.
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
It is perhaps worth noting that the sense in which “parallel lines are not parallel” which you cite is quite different from the sense in which our brains misinterpret the café wall illusion. And in light of this, it is perhaps also notable that the eventual development of non-Euclidean geometries was not spurred by this or similar optical illusions.
Which is to say: our understanding of things may be flawed or incomplete in certain ways. But we do not achieve a corrected understanding of those things by discarding our present tools wholesale (especially on such flimsy evidence as naive perception); we achieve a corrected understanding by poking and prodding at our current understanding, until such time as our efforts bear fruit.
(In the “crutch” analogy: perhaps there exists a better set of crutches, somewhere out there for us to find. This nonetheless does not imply that we ought discard our current crutches in anticipation of the better set; we will stand a far better chance of making our way to the better crutches, if we rely on the crutches we have in the meantime.)
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t.
Certainly not; but fortunately this rather strong condition is not needed for me to distrust Descartes’ reasoning. What is needed is simply that I trust “the usual rules” more than I trust Descartes; and for further clarification on this point you need merely re-read what I wrote above about “crutches”.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”. At the very least, that closes off the possibility that the rules might contain errors.
I believe my above arguments suffice to answer this objection.
[...] I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not.
Suppose a program is not, in fact, running. How do you propose that the program in question detect this state of affairs?
I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
If the only possible validation of Descartes’ claim to exist is anthropic in nature, then this is tantamount to saying that his cogito is untenable. After all, “I think, therefore I am” is semantically quite different from “I assert that I am, and this assertion is anthropically valid because you will only hear me say it in worlds where it happens to be true.”
In fact, I suspect that Descartes would agree with me on this point, and complain that—to the extent you are reducing his claim to a mere instance of anthropic reasoning—you are immeasurably weakening it. To quote from an earlier comment of mine:
It seems to me that, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely if the program’s output were to be interpreted as having meaning, then it seems obvious that the statement in question (“I exist”) is correct, since the program does in fact exist and was run.
But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a “meaningful” statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it captures the spirit of Descartes’ cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
Sorry if this comes off as pedantic, but I don’t know what this means. The philosopher in me keeps saying “I think we’re playing a language game,” so I’d like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely?
We’re all philosophers here, this is a safe space for pedantry. :)
Below, I’ll use the words ‘phenomenal property’ and ‘quale’ interchangeably.
An example of a phenomenal property is the particular redness of a particular red thing in my visual field.
Geoff would say he’s certain, while he’s experiencing it, that this property is instantiated.
I would say that there’s no such property, though there is a highly similar property that serves all the same behavioral/cognitive/functional roles (and just lacks that extra ‘particular redness’, and perhaps that extra ‘inwardness / inner-light-ness / interiority / subjectivity / perspectivalness’—basically, lacks whatever aspects make the hard problem seem vastly harder than the ‘easy’ problems of reducing other mental states to physical ones).
This, of course, is a crazy-sounding view on my part. It’s weird that I even think Geoff and I have a meaningful, substantive disagreement. Like, if I don’t think that Geoff’s brain really instantiates qualia, then what do I think Geoff even means by ‘qualia’? How does Geoff successfully refer to “qualia, if he doesn’t have them? Why not just say that ‘qualia’ refers to something functional?
Two reasons:
I think hard-problem intuitions are grounded in a quasi-perceptual illusion, not a free-floating delusion.
If views like Geoff’s and David Chalmers’ were grounded in a free-floating delusion, then we would just say ‘they have a false belief about their experiences’ and stop there.
If we’re instead positing that there’s something analogous to an optical illusion happening in people’s basic perception of their own experiences, then it makes structural sense to draw some distinction between ‘the thing that’s really there’ and ‘the thing that’s not really there, but seems to be there when we fall for the illusion’.
I may not think that the latter concept really and truly has the full phenomenal richness that Geoff / Chalmers / etc. think it does (for the same reason it’s hard to imagine a p-zombie having a full and correct conception of ‘what red looks like’). But I’m still perfectly happy to use the word ‘qualia’ to refer to it, keeping in mind that I think our concept of ‘qualia’ is more like ‘a promissory note for “the kind of thing we’d need to instantiate in order to justify hard-problem arguments”’—it’s a p-zombie’s notion of qualia, though the p-zombie may not realize it.
I think the hard-problem reasoning is correct, in that if we instantiated properties like those we (illusorily) appear to have, then physicalism would be false, there would be ‘further facts’ over and above the physics facts (that aren’t logically entailed/constrained by physics), etc.
Basically, I’m saying that a p-zombie’s concept of ‘phenomenal consciousness’ (or we can call it ‘blenomenal consciousness’ or something, if we want to say that p-zombies lack the ‘full’ concept) is distinct from the p-zombie’s concept of ‘the closest functional/reducible analog of phenomenal consciousness’. I think this isn’t a weird view. The crazy part is when I take the further step of asserting that we’re p-zombies. :)
I don’t know Geoff’s view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful).
Sorry if this comes off as pedantic, but I don’t know what this means
It doesn’t have to mean anything strange or remarkable. It’s basically ordinary waking consciousness. If you are walking around noticing sounds and colours smells ,that’s phenomenal consciousness. As opposed to things that actually are strange , like blindsight or sleepwalking.
But it can be overloaded with other, more controversial, ideas, such as the idea that it is incorrigible (how we got on to the subject), or necessarily non-physical.
I think it can be reasonable to have 100% confidence in beliefs where the negation of the belief would invalidate the ability to reason, or to benefit from reason. Though with humans, I think it always makes sense to leave an epsilon for errors of reason.
[Disclaimer: not Rob, may not share Rob’s views, etc. The reason I’m writing this comment nonetheless is that I think I share enough of Rob’s relevant views here (not least because I think Rob’s views on this topic are mostly consonant with the LW “canon” view) to explain. Depending on how much you care about Rob’s view specifically versus the LW “canon” view, you can choose to regard or disregard this comment as you see fit.]
I don’t think people should be certain of anything
What about this claim itself?
I don’t think this is the gotcha [I think] you think it is. I think it is consistent to hold that (1) people should not place infinite certainty in any beliefs, including meta-beliefs about the normative best way to construct beliefs, and that (2) since (1) is itself a meta-belief, it too should not be afforded infinite certainty.
Of course, this conjunction has the interesting quality of feeling somewhat paradoxical, but I think this feeling doesn’t stand up to scrutiny. There doesn’t seem to me to be any actual contradiction you can derive from the conjunction of (1) and (2); the first seems simply to be a statement of a paradigm that one currently believes to be normative, and the second is a note that, just because one currently believes a paradigm to be normative, does not necessarily mean that that paradigm is normative. The fact that this second note can be construed as coming from the paradigm itself does not undermine it in my eyes; I think it is perfectly fine for paradigms to exist that fail to assert their own correctness.
I think, incidentally, that there are many people who [implicitly?] hold the negation of the above claim, i.e. they hold that (3)a valid paradigm must be one that has faith in its own validity. The paradigm may still turn out to be false, but this ought not be a possibility that is endorsed from inside the paradigm; just as individuals cannot consistently assert themselves to be mistaken about something (even if they are in fact mistaken), the inside of a paradigm ought not be the kind of thing that can undermine itself. If you hold something like (3) to be the case, then and only then does your quoted question become a gotcha.
Naturally, I think (3) is mistaken. Moreover, I not only think (3) is mistaken, I think it is unreasonable, i.e. I think there is no good reason to want (3) to be the case. I think the relevant paradox here is not Moore’s, but the lottery paradox, which I assert is not a paradox at all (though admittedly counterintuitive if one is not used to thinking in probabilities rather than certainties).
[There is also a resemblance here to Godel’s (second) incompleteness theorem, which asserts that sufficiently powerful formal systems cannot prove their own consistency unless they are actually inconsistent. I think this resemblance is more surface-level than deep, but it may provide at least an intuition that (1) there exist at least some “belief systems” that cannot “trust” themselves, and that (2) this is okay.]
for panpsychism and emergent dualism alike, for property and substance and ‘aspect’ dualism alike
If you want to claim some definitive disproof of aspect dualism, a minimal requirement would be to engage with it. I’ve tried talking to you about it several times, and each time you cut off the conversation at your end.
I don’t know to what extent you still endorse the quoted reasoning (as an accurate model of the mistakes being made by the sorts of people you describe), but: it seems clear to me that the big error is in step 2… and it also seems to me that step 2 is a “rookie-level” error, an error that a careful thinker shouldn’t ever make (and, indeed, that people like e.g. David Chalmers do not in fact make).
That is, the Hard Problem shouldn’t lead us to conclude that consciousness isn’t reducible to physics—only that we haven’t reduced it, and that in fact there remains an open (and hard!) problem to solve. But reasoning from the Hard Problem to a positive belief in extra-physical phenomena is surely a mistake…
Now, hold on: your phrasing seems to suggest that panpsychism either is the same thing as, or entails, thinking that “phenomenal consciousness isn’t fully reducible to third-person descriptions”. But… that’s not the case, as far as I can tell. Did I misunderstand you?
He’s the kind of panpsychist who holds that view because he thinks consciousness isn’t fully reducible / third-person-describable. I think this is by far the best reason to be a panpsychist, and it’s the only type of panpsychism I’ve heard endorsed by analytic philosophers working in academia.
I think Brian Tomasik endorses a different kind of panpsychism, which asserts that phenomenal consciousness is eliminable rather than fundamental? So I wouldn’t assume that arbitrary rationalist panpsychists are in the Chalmers camp; but Chalmers certainly is!
Hmm. Ok, I think I sort-of see in what direction to head to resolve the disagreement/confusion we’ve got here (and I am very unsure whether I am more confused, of the two of us, or you are, though maybe we both are)… but I don’t think that I can devote the time / mental effort to this discussion at this time. Perhaps we can come back to it another time? (Or not; it’s not terribly important, I don’t think…)
He’s the kind of panpsychist who holds that view because he thinks consciousness isn’t fully reducible / third-person-describable.
He’s a property dualist because he thinks consciousness isn’t fully reducible / third-person-describable. He also has a commitment to the idea that phenomenal consciousness supervenes on information processing, and to the idea that human and biological information processing are not privileged, which all add up to something like panpsychism.
That is, the Hard Problem shouldn’t lead us to conclude that consciousness isn’t reducible to physics—only that we haven’t reduced it, and that in fact there remains an open (and hard!) problem to solve. But reasoning from the Hard Problem to a positive belief in extra-physical phenomena is surely a mistake
Don’t say “surely”, prove it.
It’s not unreasonable to say that a problem that has remained unsolved for an extended period of time is insoluble... but it’s not necessarily the case either. Your opponents are making a subjective judgement call, and so are you.
Saying that two extremes are both unreasonable is not the same as saying that those extremes are both reasonable.
Said (if I am reading him right) is saying that it is unreasonable (i.e. unjustified) to claim that just because a problem hasn’t been solved for an extended period of time, it is therefore insoluble.
To which you (seemed to me to) reply “don’t just declare that [the original claim] is unreasonable. Prove that [the original claim] is unreasonable.”
To which Said (it seemed to me) answers “no, I think that there’s a strong prior here that the extreme statement isn’t one worth making.”
My own stance: a problem remaining unsolved for a long time is weak evidence that it’s fundamentally insoluble, but you really need a model of why it’s insoluble before making a strong claim there.
Said (if I am reading him right) is saying that it is unreasonable (i.e. unjustified) to claim that just because a problem hasn’t been solved for an extended period of time, it is therefore insoluble.
Which would be true if “reasonable” and “justified” were synonyms, but they are not.
“no, I think that there’s a strong prior here that the extreme statement isn’t one worth making.”
Which statement is the one that is extreme? Is it not extreme to claim an unsolved problem will definitely be solved?
My own stance: a problem remaining unsolved for a long time is weak evidence that it’s fundamentally insoluble,
It’s weak evidence, in that it’s not justification, but it’s some evidence, in that it’s reasonable. Who are you disagreeing with?
Rob: Where does the reasoning chain from 1 to 3a/3b go wrong in your view? I get that you think it goes wrong in that the conclusions aren’t true, but what is your view about which premise is wrong or why the conclusion doesn’t follow from the premises?
In particular, I’d be really interested in an argument against the claim “It seems like the one thing I can know for sure is that I’m having these experiences.”
I think that the place the reasoning goes wrong is at 1 (“It seems like the one thing I can know for sure is that I’m having these experiences.”). I think this is an incredibly intuitive view, and a cornerstone of a large portion of philosophical thought going back centuries. But I think it’s wrong.
(At least, it’s wrong—and traplike—when it’s articulated as “know for sure”. I have no objection to having a rather high prior probability that one’s experiences are real, as long as a reasonably large pile of evidence to the contrary could change your mind. But from a Descartes-ish perspective, ‘my experiences might not be real’ is just as absurd as ‘my experiences aren’t real’; the whole point is that we’re supposed to have certainty in our experiences.)
Here’s how I would try to motivate ‘illusionism is at least possibly true’ today, and more generally ‘there’s no way for a brain to (rationally) know with certainty that any of its faculties are infallible’:
_________________________________________________
First, to be clear: I share the visceral impression that my own consciousness is infallibly manifest to me, that I couldn’t possibly not be having this experience.
Even if all my beliefs are unreliable, the orange quale itself is no belief, and can’t be ‘wrong’. Sure, it could bear no resemblance to the external world—it could be a hallucination. But the existence of hallucinations can’t be a hallucination, trivially. If it merely ‘seems to me’, perceptually, as though I’m seeing orange—well, that perceptual seeming is the orange quale!
In some sense, it feels as though there’s no ‘gap’ between the ‘knower’ and the ‘known’. It feels as though I’m seeing the qualia, not some stand-in representation for qualia that could be mistaken.
All of that feels right to me, even after 10+ years of being an illusionist. But when I poke at it sufficiently, I think it doesn’t actually make sense.
Intuition pump 1: How would my physical brain, hands, etc. know any of this? For a brain to accurately represent some complex, logically contingent fact, it has to causally interact (at least indirectly / at some remove) with that fact. (Cf. The Second Law of Thermodynamics, and Engines of Cognition.)
Somehow I must have just written this comment. So some causal chain began in one part of my physical brain, which changed things about other parts of my brain, which changed things about how I moved my fingers and hands, which changed things about the contents of this comment.
What, even in principle, would it look like for one part of a brain to have infallible, “direct” epistemic access to a thing, and to then transmit this fact to some other part of the brain?
It’s easy to see how this works with, e.g., ‘my brain has (fallible, indirect) knowledge of how loud my refrigerator is’. We could build that causal model, showing how the refrigerator’s workings change things about the air in just the right way, to change things about my ears in just the right way, to change things about my brain in just the right way, to let me output accurate statements about the fridge’s loudness.
It’s even easy to see how this works with a lot of introspective facts, as long as we don’t demand infallibility or ‘directness’. One part of my brain can detect whether another part of my brain is in some state.
But what would it look like, even in principle, for one set of neurons that ‘has immediate infallible epistemic access to X’ to transmit that fact to another set of neurons in the brain? What would it look like to infallibly transmit it, such that a gamma ray couldn’t randomly strike your brain to make things go differently (since if it’s epistemically possible that a gamma ray could do that, you can’t retain certainty across transmissions-between-parts-of-your-brain)? What would it look like to not only infallibly transmit X, but infallibly transmit the (true, justified) knowledge of that very infallibility?
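To make the transmission problem concrete, here is a toy model (entirely my own, with a made-up corruption rate): even if one module somehow started out infallible, the best a second module can rationally do after receiving its report is confidence strictly below 1, and the bound only worsens with more hops.

    # Toy model: module A relays a report to module B over a channel that a stray
    # gamma ray (or any other physical accident) corrupts with probability EPSILON.
    # The numbers are invented; only the shape of the conclusion matters.
    EPSILON = 1e-12  # assumed chance that any single inter-module transmission is corrupted

    def max_confidence_after_hops(hops: int, epsilon: float = EPSILON) -> float:
        """Upper bound on rational confidence that a report survived `hops`
        transmissions uncorrupted, assuming independent corruption events."""
        return (1.0 - epsilon) ** hops

    for hops in (1, 10, 1000):
        print(hops, max_confidence_after_hops(hops))  # always strictly less than 1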
This is an impossible enough problem, AFAICT, but it’s just a warm-up for:
Intuition pump 2: What would it look like for even one part of a brain to have ‘infallible’ ‘direct’ access to something ‘manifest’?
If we accepted, from intuition pump 1, that you can’t transmit ‘infallible manifestness’ across different parts of the brain (even potentially quite small parts), we would still maybe be able to say:
‘I am not my brain. I am a sufficiently small part of my brain that is experiencing this thing. I may be helpless to transmit any of that to my hands, or even to any other portion of my brain. But that doesn’t change the fact that I have this knowledge—I, the momentarily-existing locked-in entity with no causal ability to transmit this knowledge to the verbal loop thinking these thoughts, the hands writing these sentences, or to my memory, or even to my own future self a millisecond from now.’
OK, let’s grant all that.
… But how could even that work?
Like, how do you build a part of a brain, or a part of a computer, to have infallible access to its own state and to rationally know that it’s infallible in this regard? How would you design a part of an AI to satisfy that property, such that it’s logically impossible for a gamma ray (or whatever) to make that-part-of-the-AI wrong? What would the gears and neural spike patterns underlying that knowing/perceiving/manifestness look like?
It’s one thing to say ‘there’s something it’s like to be that algorithm’; it’s quite another to say ‘there’s something it’s like to be that algorithm, and the algorithm has knowably infallible epistemic access to that what-it’s-like’. How do you design an algorithm like that, even in principle?
I think this is the big argument. I want to see a diagram of what this ‘manifestness’ thing could look like, in real life. I think there’s no good substitute for the process of actually trying to diagram it out.
Intuition pump 3: The reliability of an organism’s introspection vs. its sensory observation is a contingent empirical fact.
We can imagine building a DescartesBot that has incredibly unreliable access to its external environment, but has really quite accurate (though maybe not infallible) access to its internal state. E.g., its sensors suck, but its brain is able to represent tons of facts about its own brain with high reliability (though perhaps not infallibility), and to form valid reasoning chains incorporating those facts. If humans are like DescartesBot, then we should at least be extremely wary of letting our scientific knowledge trump our phenomenological knowledge, when the two seem to conflict.
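A minimal sketch of the comparison (my own toy construction; the error rates are invented purely to illustrate which channel each agent should privilege when its sources conflict):

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        external_sensor_error: float  # P(misreads its environment)
        introspection_error: float    # P(misreads its own internal state)

    # DescartesBot: lousy sensors, highly reliable introspection.
    descartes_bot = Agent("DescartesBot", external_sensor_error=0.40, introspection_error=0.001)
    # Something closer to the human track record described below.
    human_like = Agent("HumanLike", external_sensor_error=0.01, introspection_error=0.30)

    for a in (descartes_bot, human_like):
        lean_inward = a.introspection_error < a.external_sensor_error
        print(a.name, "should privilege", "introspection" if lean_inward else "its external senses")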
But humanity’s track record is the opposite of DescartesBot’s—we seem way better at sensing properties of our external environment, and drawing valid inferences about those properties, than at doing the same for our own introspected mental states. E.g., people are frequently wrong about their own motives and the causes of their behavior, but they’re rarely wrong about how big a given chair is.
This isn’t a knock-down argument, but it’s a sort of ‘take a step back’ argument that asks whether we should expect that we’d be the sorts of evolved organisms that have anything remotely approaching introspective certitude about various states of our brain. Does that seem like the genre-savvy view, the view that rhymes more with the history of science to date, the view that matches the apparent character of the rest of our knowledge of the world?
I think some sort of ‘taste for what’s genre-savvy’ is a surprisingly important component of how LW has avoided this epistemic trap. Even when folks here don’t know how to articulate their intuitions or turn them into explicit arguments, they’ve picked up on some important things about how this stuff tends to work.
If you want something that’s more philosopher-ish, and a bit further from how I think about the topic today, here’s what I said to Geoff in 2014 (in part):
It seems to me that you’re arguing against a view in the family of claims that includes “It seems like the one thing I can know for sure is that I’m having these experiences”, but I’m having trouble determining the precise claim you are refuting. I think this is because I’m not sure which claims are meant precisely and which are meant rhetorically or directionally.
Since this is a complex topic with lots of potential distinctions to be made, it might be useful to get your views on a few different claims in the family of “It seems like the one thing I can know for sure is that I’m having these experiences”, to determine where the disagreement lies.
Below are some claims in this family. Can you pinpoint which you think are fallible and which you think are infallible (if any)? Assuming that many or most of them are fallible, can you give me a sense of something like “how susceptible to fallibility” you think they are? (Also, if you don’t mind, it might be useful to distinguish your views from what your-model-of-Geoff thinks, to help pinpoint disagreements.) Feel free to add additional claims if they seem like they would do a better job of pinpointing the disagreement.
I am, I exist (i.e., the Cartesian cogito).
I am thinking.
I am having an experience.
I am experiencing X.
I experienced X.
I am experiencing X because there is an X-producing thing in the world.
I believe X.
I am having the experience of believing X.
Edit: Wrote this before seeing this comment, so apologies if this doesn’t interact with the content there.
I don’t think people should be certain of anything; see How to Convince Me That 2 + 2 = 3; Infinite Certainty; and 0 and 1 Are Not Probabilities.
We can build software agents that live in virtual environments we’ve constructed, and we can program the agents to never make certain kinds of mistakes (e.g., never make an invalid reasoning step, or never misperceive the state of tiles they’re near). So in that sense, there’s nothing wrong with positing ‘faculties that always get the right answer in practice’, though I expect these to be much harder to evolve than to design.
But a software agent in that environment shouldn’t be able to arrive at 100% certainty that one of its faculties is infallible, if it’s a smart Bayesian. Even we, the programmers, can’t be 100% certain that we programmed the agent correctly. Even an automated proof of correctness won’t get us to 100% certainty, because the theorem-prover’s source code could always have some error (or the hardware it’s running on could have been struck by a stray gamma ray, etc.).
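As a toy numeric sketch of that last point (my own example, with invented likelihoods): even if a proof-checker repeatedly endorses the claim that faculty F is infallible, a Bayesian agent that assigns any nonzero chance to the checker itself erring ends up very confident, but never certain.

    # Invented numbers; the point is only that the posterior approaches 1 without reaching it.
    p_endorse_given_infallible = 0.99  # checker endorses a true infallibility claim
    p_endorse_given_fallible = 0.01    # checker mistakenly endorses a false one

    posterior = 0.5                    # prior that faculty F is infallible
    for _ in range(5):                 # five independent endorsements from the checker
        num = posterior * p_endorse_given_infallible
        posterior = num / (num + (1 - posterior) * p_endorse_given_fallible)

    print(posterior)                   # roughly 0.9999999999: very high, still short of 1
    assert posterior < 1.0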
It’s not clear what “I” means here, but it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase “Rob Bensinger”. :)
I’m likewise happy to say that “thinking”, “experience”, etc. can be interpreted in (possibly non-joint-carving) ways that will make them pick out real things.
Oh, sorry, this was a quote from Descartes that is the closest thing that actually appears in Descartes to “I think therefore I am” (which doesn’t expressly appear in the Meditations).
Descartes’s idea doesn’t rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn’t ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given that circumstance. He concludes that he can know that he exists because something has to do the thinking. Here is the relevant quote from the Second Meditation:
I find this pretty convincing personally. I’m interested in whether you think Descartes gets it wrong even here or whether you think his philosophical system gains its flaws later.
More generally, I’m still not quite sure what precise claims or what type of claim you predict you and Geoff would disagree about. My-model-of-Geoff suggests that he would agree with “it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase “Rob Bensinger”.” and that “thinking”, “experience”, etc.” pick out “real” things (depending on what we mean by “real”).
Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?
‘Can a deceiver trick a thinker into falsely believing they’re a thinker?’ has relevantly the same structure as ‘Can you pick up a box that’s not a box?’—it deductively follows that ‘no’, because the thinker’s belief in this case wouldn’t be false.
(Though we’ve already established that I don’t believe in infinite certainty. I forgive Descartes for living 60 years before the birth of Thomas Bayes, however. :) And Bayes didn’t figure all this out either.)
Because the logical structure is trivial—Descartes might just as well have asked ‘could a deceiver make 2 + 2 not equal 4?’—I have to worry that Descartes is sneaking in more content than is in fact deducible here. For example, ‘a thought exists, therefore a thinker exists’ may not be deductively true, depending on what is meant by ‘thought’ and ‘thinker’. A lot of philosophers have commented that Descartes should have limited his conclusion to ‘a thought exists’ (or ‘a mental event exists’), rather than ‘a thinker exists’.
‘Phenomenal consciousness exists’.
I’d guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!
Sorry if this comes off as pedantic, but I don’t know what this means. The philosopher in me keeps saying “I think we’re playing a language game,” so I’d like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely?
I don’t know Geoff’s view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful). That’s not the same as “treating them as probabilistic statements,” but I think it’s functionally the same from your perspective.
The project of the Meditations is that Descartes starts by refusing to accept anything which can be doubted and then he tries to nevertheless build a system of knowledge from there. I don’t think Descartes would assign infinite certainty to any claim except, perhaps, the cogito.
My view of Descartes’ cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don’t apply, but also it becomes less clear that the cogito is actually a thing which can be “believed” in a meaningful sense to begin with.
I currently think (B) is much closer to being the case than (A). When I try to imagine grounding and/or operationalizing the cogito by e.g. designing a computer program that makes the same claim for the same psychological reasons, I run into a dead end fairly quickly, which in my experience is strong evidence that the initial concept was confused and/or incoherent. Here’s a quick sketch of my reasoning:
Suppose I have a computer program that, when run, prints “I exist” onto the screen. Moreover, suppose this computer program accomplishes this by means of a simple print statement; there is no internal logic, no if-then conditional structure that modulates the execution of the print statement, merely the naked statement, which is executed every time the program runs. Then I ask: is there a meaningful sense in which the text the program outputs is correct?
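(Concretely, the entire program being described is something like the following sketch, nothing more:)

    # The whole program: one unconditional print statement, with no logic gating it.
    print("I exist")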
It seems to me, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely, if the program’s output were to be interpreted as having meaning, then it seems obvious that the statement in question (“I exist”) is correct, since the program does in fact exist and was run.
But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a “meaningful” statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it captures the spirit of Descartes’ cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
But when I try to query my intuition, to ask it “Then what reasons are those, exactly?”, I find that I come up blank. It’s a qualitatively similar experience to asking what the truth-condition is for a tautology, e.g. 2 + 2 = 4, except even worse than that, since I could at the very least imagine a world in which 2 + 2 != 4, whereas I cannot even imagine an if-then conditional statement that would capture the (supposed) truth-condition of Descartes’ cogito. The closest (flawed) thing my intuition outputs looks like this:
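(Something like the following, where the check is my own made-up name for the impossible part, and is deliberately unimplementable:)

    def metaphysically_executing() -> bool:
        """The impossible check: am I actually being run, as opposed to merely being
        read, statically analyzed, or imagined? No implementation can exist, which is
        exactly the problem."""
        raise NotImplementedError("no such check can be written")

    if metaphysically_executing():
        print("I exist")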
Which is obvious nonsense. Obviously. (Though it does inspire an amusing idea for a mathematical horror story about an impossible computer program whose behavior when investigated using static analysis completely differs from its behavior when actually run, because at the beginning of the program is a metaphysical conditional statement that executes different code depending on whether it detects itself to be in static analysis versus actual execution.)
Anyway, the upshot of all this is that I don’t think Descartes’ statement is actually meaningful. I’m not particularly surprised by this; to me, it dovetails strongly with the heuristic “If you’re dealing with a claim that seems to ignore the usual rules, it’s probably not a ‘claim’ in the usual sense”, which would have immediately flagged Descartes for the whole infinite certainty thing, without having to go through the whole “How would I write a computer program that exhibits this behavior for the same reason humans exhibit it?” song-and-dance.
(And for the record: there obviously is a reason humans find Descartes’ argument so intuitively compelling, just as there is a reason humans find the idea of qualia so intuitively compelling. I just think that, as with qualia, the actual psychological reason—of the kind that can be implemented in a real computer program, not a program with weird impossible metaphysical conditional statements—is going to look very different from humans’ stated justifications for the claims in question.)
I think this is quite a wrongheaded way to think about Descartes’ cogito. Consider this, for instance:
But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on, and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else. He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.
To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:
“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”
Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.
Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs definitionally exists; its existence claim is satisfied thereby.
I initially wanted to preface my response here with something like “to put it delicately”, but then I realized that Descartes is dead and cannot take offense to anything I say here, and so I will be indelicate in my response:
I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules. The rules governing correct cognition are clear, comprehensible, and causally justifiable; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.
Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.
In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.
Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.
At risk of hammering in the point too many times: “prior” does not correspond to “better”. Indeed, it is hard to see why one would take this attitude (that “prior” knowledge is somehow more trustworthy than models built on actual reasoning) with respect to a certain subset of questions classed as “philosophical” questions, when virtually every other human endeavor has shown the opposite to be the case: learning more, and knowing more, causes one to make fewer mistakes in one’s reasoning and conclusions. If Descartes wants to discount a certain class of reasoning in his quest for truth, I submit that he has chosen to discount the wrong class.
A key difference here: what you describe is not a check that is being run by the program, which is important because it is the program that finds itself in an analogous situation to Descartes.
What you say is, of course, true to any outside observer; I, seeing the program execute, can certainly be assured of its existence. But then, I can also say the same of Descartes: if I were to run into him in the street, I would not hesitate to conclude that he exists, and he need not even assert his existence aloud for me to conclude this. Moreover, since I (unlike Descartes) am not interested in the project of “doubting everything”, I can quite confidently proclaim that this is good enough for me.
Ironically enough, it is Descartes himself who considers this insufficient. He does not consider it satisfactory for a program to merely execute; he wants the program to know that it is being executed. For this it is not sufficient to simply assert “The program is being run; that is itself the check on its existence”; what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.
And of course, what is sauce for the goose is sauce for the gander; if a program cannot run such a check even in principle, then what reason do I have to believe that Descartes’ brain is running some analogous check when he asserts his famous “Cogito, ergo sum”? Far more reasonable, I claim, to suspect that his brain is not running any such check, and that his resulting statement is meaningless at best, and incoherent at worst.
But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?
And there the answer is not so obvious. After all, it’s your own brain that stores the rules, your own brain that implements them, your own brain that was convinced of their validity in the first place…
What Descartes is doing, then, is seeing if he can re-generate “the usual rules”, with his own brain (and how else?), having first set them aside. In other words, he is attempting to check whether said rules are “truly part of him”, or whether they are, so to speak, foreign agents who have sneaked into his brain illicitly (through unexamined habit, indoctrination, deception, etc.).
Thus, when you say:
… Descartes may answer:
“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”
Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)
And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.
Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t. I consider it quite reasonable to be more impressed with his approach than with yours. If you object, merely consider that someone had to come up with “the usual rules” in the first place—and they did not have said rules to help them.
Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?
The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”. At the very least, that closes off the possibility that the rules might contain errors.
Now, in this paragraph I think you have some strange confusion. I am not quite sure what claim or point of mine you take this to be countering.
Hmm, I think it doesn’t go without saying, actually; I think it needs to be said, and then defended. I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not. I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).
But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
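To illustrate the point with a toy program (my own sketch, not anything from the discussion above): any check the program actually gets to perform will trivially report “yes”, so the report carries no information, and in the “not running” case no check is ever performed, so no “no” can ever be received.

    def am_i_running() -> bool:
        # If this line executes at all, the program is running.
        return True

    if am_i_running():
        print("I exist")
    else:
        print("I do not exist")  # unreachable: a "no" answer can never be observed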
I certainly do! I have observed the fallibility of my own brain on numerous past occasions, and any temptation I might have had to consider myself a perfect reasoner has been well and truly quashed by those past observations. Indeed, the very project we call “rationality” is premised on the notion that our naive faculties are woefully inadequate; after all, one cannot have aspirations of “increasing” one’s rationality without believing that one’s initial starting point is one of imperfect rationality.
Indeed, I am fallible, and for this reason I cannot rule out the possibility that I have misapprehended the rules, and that my misapprehensions are perhaps fatal. However, regardless of however much my fallibility reduces my confidence in the rules, it inevitably reduces my confidence in my ability to perform without rules by an equal or greater amount; and this seems to me to be right, and good.
...Or, to put it another way: perhaps I am blind, and in my blindness I have fumbled my way to a set of (what seem to me to be) crutches. Should I then discard those crutches and attempt to make my way unassisted, on the grounds that I may be mistaken about whether they are, in fact, crutches? But surely I will do no better on my own, than I will by holding on to the crutches for the time being; for then at least the possibility exists that I am not mistaken, and the objects I hold are in fact crutches. Any argument that might lead me to make the opposite choice is quite wrongheaded indeed, in my view.
It is perhaps worth noting that the sense in which “parallel lines are not parallel” which you cite is quite different from the sense in which our brains misinterpret the café wall illusion. And in light of this, it is perhaps also notable that the eventual development of non-Euclidean geometries was not spurred by this or similar optical illusions.
Which is to say: our understanding of things may be flawed or incomplete in certain ways. But we do not achieve a corrected understanding of those things by discarding our present tools wholesale (especially on such flimsy evidence as naive perception); we achieve a corrected understanding by poking and prodding at our current understanding, until such time as our efforts bear fruit.
(In the “crutch” analogy: perhaps there exists a better set of crutches, somewhere out there for us to find. This nonetheless does not imply that we ought discard our current crutches in anticipation of the better set; we will stand a far better chance of making our way to the better crutches, if we rely on the crutches we have in the meantime.)
Certainly not; but fortunately this rather strong condition is not needed for me to distrust Descartes’ reasoning. What is needed is simply that I trust “the usual rules” more than I trust Descartes; and for further clarification on this point you need merely re-read what I wrote above about “crutches”.
I believe my above arguments suffice to answer this objection.
Suppose a program is not, in fact, running. How do you propose that the program in question detect this state of affairs?
If the only possible validation of Descartes’ claim to exist is anthropic in nature, then this is tantamount to saying that his cogito is untenable. After all, “I think, therefore I am” is semantically quite different from “I assert that I am, and this assertion is anthropically valid because you will only hear me say it in worlds where it happens to be true.”
In fact, I suspect that Descartes would agree with me on this point, and complain that—to the extent you are reducing his claim to a mere instance of anthropic reasoning—you are immeasurably weakening it; I made this point at greater length in my earlier comment above, about the program with the bare print statement.
We’re all philosophers here, this is a safe space for pedantry. :)
Below, I’ll use the words ‘phenomenal property’ and ‘quale’ interchangeably.
An example of a phenomenal property is the particular redness of a particular red thing in my visual field.
Geoff would say he’s certain, while he’s experiencing it, that this property is instantiated.
I would say that there’s no such property, though there is a highly similar property that serves all the same behavioral/cognitive/functional roles (and just lacks that extra ‘particular redness’, and perhaps that extra ‘inwardness / inner-light-ness / interiority / subjectivity / perspectivalness’—basically, lacks whatever aspects make the hard problem seem vastly harder than the ‘easy’ problems of reducing other mental states to physical ones).
This, of course, is a crazy-sounding view on my part. It’s weird that I even think Geoff and I have a meaningful, substantive disagreement. Like, if I don’t think that Geoff’s brain really instantiates qualia, then what do I think Geoff even means by ‘qualia’? How does Geoff successfully refer to ‘qualia’, if he doesn’t have them? Why not just say that ‘qualia’ refers to something functional?
Two reasons:
I think hard-problem intuitions are grounded in a quasi-perceptual illusion, not a free-floating delusion.
If views like Geoff’s and David Chalmers’ were grounded in a free-floating delusion, then we would just say ‘they have a false belief about their experiences’ and stop there.
If we’re instead positing that there’s something analogous to an optical illusion happening in people’s basic perception of their own experiences, then it makes structural sense to draw some distinction between ‘the thing that’s really there’ and ‘the thing that’s not really there, but seems to be there when we fall for the illusion’.
I may not think that the latter concept really and truly has the full phenomenal richness that Geoff / Chalmers / etc. think it does (for the same reason it’s hard to imagine a p-zombie having a full and correct conception of ‘what red looks like’). But I’m still perfectly happy to use the word ‘qualia’ to refer to it, keeping in mind that I think our concept of ‘qualia’ is more like ‘a promissory note for “the kind of thing we’d need to instantiate in order to justify hard-problem arguments”’—it’s a p-zombie’s notion of qualia, though the p-zombie may not realize it.
I think the hard-problem reasoning is correct, in that if we instantiated properties like those we (illusorily) appear to have, then physicalism would be false, there would be ‘further facts’ over and above the physics facts (that aren’t logically entailed/constrained by physics), etc.
Basically, I’m saying that a p-zombie’s concept of ‘phenomenal consciousness’ (or we can call it ‘blenomenal consciousness’ or something, if we want to say that p-zombies lack the ‘full’ concept) is distinct from the p-zombie’s concept of ‘the closest functional/reducible analog of phenomenal consciousness’. I think this isn’t a weird view. The crazy part is when I take the further step of asserting that we’re p-zombies. :)
Interesting!
It doesn’t have to mean anything strange or remarkable. It’s basically ordinary waking consciousness. If you are walking around noticing sounds and colours and smells, that’s phenomenal consciousness. As opposed to things that actually are strange, like blindsight or sleepwalking.
But it can be overloaded with other, more controversial, ideas, such as the idea that it is incorrigible (how we got on to the subject), or necessarily non-physical.
I think it can be reasonable to have 100% confidence in beliefs where the negation of the belief would invalidate the ability to reason, or to benefit from reason. Though with humans, I think it always makes sense to leave an epsilon for errors of reason.
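A minimal sketch of why that epsilon matters, in Bayesian terms (the update function and numbers below are purely illustrative, not anyone’s actual credences): a credence of exactly 1 corresponds to infinite odds, so no finite amount of evidence can ever move it, whereas a credence of 1 − ε can still be revised.

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.
# A credence of exactly 1.0 corresponds to infinite odds, so no finite
# evidence can move it; leaving an epsilon of doubt keeps belief revisable.

def update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior credence after seeing evidence with the given
    likelihood ratio, P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    if prior >= 1.0:
        return 1.0  # certainty is immune to any evidence
    posterior_odds = (prior / (1.0 - prior)) * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

print(update(1.0, 1e-6))    # 1.0     -- strong counter-evidence changes nothing
print(update(0.999, 1e-6))  # ~0.001  -- an epsilon of doubt lets evidence flip the belief
```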
What about this claim itself?
[Disclaimer: not Rob, may not share Rob’s views, etc. The reason I’m writing this comment nonetheless is that I think I share enough of Rob’s relevant views here (not least because I think Rob’s views on this topic are mostly consonant with the LW “canon” view) to explain. Depending on how much you care about Rob’s view specifically versus the LW “canon” view, you can choose to regard or disregard this comment as you see fit.]
I don’t think this is the gotcha [I think] you think it is. I think it is consistent to hold that (1) people should not place infinite certainty in any beliefs, including meta-beliefs about the normative best way to construct beliefs, and that (2) since (1) is itself a meta-belief, it too should not be afforded infinite certainty.
Of course, this conjunction has the interesting quality of feeling somewhat paradoxical, but I think this feeling doesn’t stand up to scrutiny. There doesn’t seem to me to be any actual contradiction you can derive from the conjunction of (1) and (2); the first seems simply to be a statement of a paradigm that one currently believes to be normative, and the second is a note that, just because one currently believes a paradigm to be normative, does not necessarily mean that that paradigm is normative. The fact that this second note can be construed as coming from the paradigm itself does not undermine it in my eyes; I think it is perfectly fine for paradigms to exist that fail to assert their own correctness.
I think, incidentally, that there are many people who [implicitly?] hold the negation of the above claim, i.e. they hold that (3) a valid paradigm must be one that has faith in its own validity. The paradigm may still turn out to be false, but this ought not be a possibility that is endorsed from inside the paradigm; just as individuals cannot consistently assert themselves to be mistaken about something (even if they are in fact mistaken), the inside of a paradigm ought not be the kind of thing that can undermine itself. If you hold something like (3) to be the case, then and only then does your quoted question become a gotcha.
Naturally, I think (3) is mistaken. Moreover, I not only think (3) is mistaken, I think it is unreasonable, i.e. I think there is no good reason to want (3) to be the case. I think the relevant paradox here is not Moore’s, but the lottery paradox, which I assert is not a paradox at all (though admittedly counterintuitive if one is not used to thinking in probabilities rather than certainties).
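To spell out the lottery-paradox point in probabilities (a toy sketch assuming a fair lottery with exactly one winning ticket; the ticket count is arbitrary): each individual belief of the form ‘ticket i will lose’ can rationally be held with very high confidence, while the conjunction ‘every ticket will lose’ gets probability zero, and nothing paradoxical remains once beliefs are graded rather than all-or-nothing.

```python
# Lottery "paradox" in graded-belief terms: for a fair lottery with exactly
# one winner, each "ticket i loses" is extremely probable, but the conjunction
# "every ticket loses" is impossible. The air of paradox only appears if high
# probability gets rounded up to outright certainty.

N = 1_000_000  # arbitrary number of tickets, for illustration

p_ticket_i_loses = 1 - 1 / N   # 0.999999 -- very confident about each ticket
p_every_ticket_loses = 0.0     # zero, since exactly one ticket must win

print(p_ticket_i_loses)
print(p_every_ticket_loses)
```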
[There is also a resemblance here to Gödel’s (second) incompleteness theorem, which asserts that sufficiently powerful formal systems cannot prove their own consistency unless they are actually inconsistent. I think this resemblance is more surface-level than deep, but it may provide at least an intuition that (1) there exist at least some “belief systems” that cannot “trust” themselves, and that (2) this is okay.]
On reflection, it seems right to me that there may not be a contradiction here. I’ll post something later if I conclude otherwise.
(I think I got a bit too excited about a chance to use the old philosopher’s move of “what about that claim itself.”)
:) Yeah, it is an interesting case but I’m perfectly happy to say I’m not-maximally-certain about this.
If you want to claim some definitive disproof of aspect dualism, a minimal requirement would be to engage with it. I’ve tried talking to you about it several times, and each time you cut off the conversation at your end.
I don’t know to what extent you still endorse the quoted reasoning (as an accurate model of the mistakes being made by the sorts of people you describe), but: it seems clear to me that the big error is in step 2… and it also seems to me that step 2 is a “rookie-level” error, an error that a careful thinker shouldn’t ever make (and, indeed, that people like e.g. David Chalmers do not in fact make).
That is, the Hard Problem shouldn’t lead us to conclude that consciousness isn’t reducible to physics—only that we haven’t reduced it, and that in fact there remains an open (and hard!) problem to solve. But reasoning from the Hard Problem to a positive belief in extra-physical phenomena is surely a mistake…
? Chalmers is a panpsychist. He totally thinks phenomenal consciousness isn’t fully reducible to third-person descriptions.
(I also think you’re just wrong, but maybe poking at the Chalmers part will clarify things.)
Now, hold on: your phrasing seems to suggest that panpsychism either is the same thing as, or entails, thinking that “phenomenal consciousness isn’t fully reducible to third-person descriptions”. But… that’s not the case, as far as I can tell. Did I misunderstand you?
He’s the kind of panpsychist who holds that view because he thinks consciousness isn’t fully reducible / third-person-describable. I think this is by far the best reason to be a panpsychist, and it’s the only type of panpsychism I’ve heard endorsed by analytic philosophers working in academia.
I think Brian Tomasik endorses a different kind of panpsychism, which asserts that phenomenal consciousness is eliminable rather than fundamental? So I wouldn’t assume that arbitrary rationalist panpsychists are in the Chalmers camp; but Chalmers certainly is!
Hmm. Ok, I think I sort-of see in what direction to head to resolve the disagreement/confusion we’ve got here (and I am very unsure which of the two of us is more confused; maybe we both are)… but I don’t think that I can devote the time / mental effort to this discussion at this time. Perhaps we can come back to it another time? (Or not; it’s not terribly important, I don’t think…)
He’s a property dualist because he thinks consciousness isn’t fully reducible / third-person-describable. He also has a commitment to the idea that phenomenal consciousness supervenes on information processing, and to the idea that human and biological information processing are not privileged, which all add up to something like panpsychism.
Don’t say “surely”, prove it.
It’s not unreasonable to say that a problem that has remained unsolved for an extended period of time is insoluble... but it’s not necessarily the case either. Your opponents are making a subjective judgement call, and so are you.
No, I’d say it’s pretty unreasonable, actually.
Don’t say it’s unreasonable, prove it.
Prove that a problem is not insoluble? Why don’t you prove that it is insoluble?
The only reasonable stance in this situation is “we don’t have any very good basis for either stance”.
So both stances are reasonable, which is what I said, but not what you said.
Nnnno, I think you’re missing Said’s point.
Saying that two extremes are both unreasonable is not the same as saying that those extremes are both reasonable.
Said (if I am reading him right) is saying that it is unreasonable (i.e. unjustified) to claim that just because a problem hasn’t been solved for an extended period of time, it is therefore insoluble.
To which you (seemed to me to) reply “don’t just declare that [the original claim] is unreasonable. Prove that [the original claim] is unreasonable.”
To which Said (seemed to me to) answer “no, I think that there’s a strong prior here that the extreme statement isn’t one worth making.”
My own stance: a problem remaining unsolved for a long time is weak evidence that it’s fundamentally insoluble, but you really need a model of why it’s insoluble before making a strong claim there.
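For what it’s worth, here is one way to put numbers on ‘weak evidence’ (the prior and likelihood ratio below are made up purely for illustration): unless ‘still unsolved after a long time’ is much likelier under ‘fundamentally insoluble’ than under ‘soluble but hard’, the update is small.

```python
# Illustration of "weak evidence": with a modest likelihood ratio, the
# posterior barely moves from the prior. All numbers are made up.

prior_insoluble = 0.10   # hypothetical prior that the problem is fundamentally insoluble
likelihood_ratio = 1.5   # assume "still unsolved" is only 1.5x likelier if insoluble

posterior_odds = (prior_insoluble / (1 - prior_insoluble)) * likelihood_ratio
posterior_insoluble = posterior_odds / (1 + posterior_odds)

print(round(posterior_insoluble, 3))  # ~0.143 -- a nudge, far from a strong claim
```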
This is a reasonably accurate reading of my comments, yes.
Which would be true if “reasonable” and “justified” were synonyms, but they are not.
Which statement is the one that is extreme? Is it not extreme to claim an unsolved problem will definitely be solved?
It’s weak evidence, in that it’s not justification, but it’s some evidence, in that it’s reasonable. Who are you disagreeing with?
So both stances are reasonable, which is what I said, but not what you said.