a lot of 20th-century psychologists made a habit of saying things like ‘minds don’t exist, only behaviors’;
It seems like you might be referring to Eliminativism. If you are, this isn’t a fair account of it.
Eliminativism isn’t opposed to realism. It’s just a rejection of the assumption that the labels we apply to people’s mental states (wants, believes, loves, etc.) are a reflection of the underlying reality. People have been thinking about minds in terms of those concepts for a really long time, but nobody had bothered to sit down and demonstrate that they form an accurate model.
From wiki:
Proponents of this view, such as B.F. Skinner, often made parallels to previous superseded scientific theories (such as that of the four humours, the phlogiston theory of combustion, and the vital force theory of life) that have all been successfully eliminated in attempting to establish their thesis about the nature of the mental. In these cases, science has not produced more detailed versions or reductions of these theories, but rejected them altogether as obsolete. Radical behaviorists, such as Skinner, argued that folk psychology is already obsolete and should be replaced by descriptions of histories of reinforcement and punishment.
Upvoted! My discussion of a bunch of these things above is very breezy, and I approve of replacing the vague claims with more specific historical ones. To clarify, here are four things I’m not criticizing:
1. Eliminativism about particular mental states, of the form ‘we used to think that this psychological term (e.g., “belief”) mapped reasonably well onto reality, but now we understand the brain well enough to see it’s really doing [description] instead, and our previous term is a misleading way of gesturing at this (or any other) mental process.’
I’m an eliminativist (or better, an illusionist) about subjectivity and phenomenal consciousness myself. (Though I think the arguments favoring that view are complicated and non-obvious, and there’s no remotely intellectually satisfying illusionist account of what the things we call “conscious” really consist in.)
2. In cases where the evidence for an eliminativist hypothesis isn’t strong, the practice of having some research communities evaluate eliminativism or try eliminativism out and see if it leads in any productive directions. Importantly, a community doing this should treat the eliminativist view as an interesting hypothesis or an exploratory research program, not in any way as settled science (or pre-scientific axiom!).
3. Demanding evidence for claims, and being relatively skeptical of varieties of evidence that have a poor track record, even if they “feel compelling”.
4. Demanding that high-level terms be in principle reducible to lower-level physical terms (given our justified confidence in physicalism and reductionism).
In the case of psychology, I am criticizing (and claiming these things really happened, though I agree that the views in question weren’t as universal, unquestioned, and extreme as is sometimes suggested):
Skinner’s and other behaviorists’ greedy reductionism; i.e., their tendency to act like they’d reduced or explained more than they actually had. Scientists should go out of their way to emphasize the limitations and holes in their current models, and be very careful (and fully explicit about why they believe this) when it comes to claims of the form ‘we can explain literally everything in [domain] using only [method].’
Rushing to achieve closure, dismiss open questions, forbid any expressions of confusion or uncertainty, and treat blank parts of your map as though they must correspond to a blank (or unimportant) territory. Quoting Watson (1928):
With the advent of behaviorism in 1913 the mind-body problem disappeared — not because ostrich-like its devotees hid their heads in the sand but because they would take no account of phenomena which they could not observe. The behaviorist finds no mind in his laboratory — sees it nowhere in his subjects. Would he not be unscientific if he lingered by the wayside and idly speculated upon it; just as unscientific as the biologists would be if they lingered over the contemplation of entelechies, engrams and the like. Their world and the world of the behaviorist are filled with facts — with data which can be accumulated and verified by observation — with phenomena which can be predicted and controlled.
If the behaviorists are right in their contention that there is no observable mind-body problem and no observable separate entity called mind — then there can be no such thing as consciousness and its subdivision. Freud’s concept borrowed from somatic pathology breaks down. There can be no festering spot in the substratum of the mind — in the unconscious —because there is no mind.
More generally: overconfidence in cool new ideas, and exaggeration of what they can do.
Over-centralizing around an eliminativist hypothesis or research program in a way that pushes out brainstorming, hypothesis-generation, etc. that isn’t easy to fit into that frame. I quote Hempel (1935) here:
[Behaviorism’s] principal methodological postulate is that a scientific psychology should limit itself to the study of the bodily behavior with which man and the animals respond to changes in their physical environment, and should proscribe as nonscientific any descriptive or explanatory step which makes use of terms from introspective or ‘understanding’ psychology, such as ‘feeling’, ‘lived experience’, ‘idea’, ‘will’, ‘intention’, ‘goal’, ‘disposition’, ‘repression’. We find in behaviorism, consequently, an attempt to construct a scientific psychology[.]
Simply put: getting the wrong answer. Some errors are more excusable than others, but even if my narrative about why they got it wrong is itself wrong, it would still be important to emphasize that they got it wrong, and could have done much better.
The general idea that introspection is never admissible as evidence. It’s fine if you want to verbally categorize introspective evidence as ‘unscientific’ in order to distinguish it from other kinds of evidence, and there are some reasonable grounds for skepticism about how strong many kinds of introspective evidence are. But evidence is still evidence; a Bayesian shouldn’t discard evidence just because it’s hard to share with other agents.
The rejection of folk-psychology language, introspective evidence, or anything else for science-as-attire reasons.
Idealism emphasized some useful truths (like ‘our perceptions and thoughts are all shaped by our mind’s contingent architecture’) but ended up in a ‘wow it feels great to make minds more and more important’ death spiral.
Behaviorism too emphasized some useful truths (like ‘folk psychology presupposes a bunch of falsifiable things about minds that haven’t all been demonstrated very well’, ‘it’s possible for introspection to radically mislead us in lots of ways’, and ‘it might benefit psychology to import and emphasize methods from other scientific fields that have a better track record’) but seems to me to have fallen into a ‘wow it feels great to more and more fully feel like I’m playing the role of a True Scientist and being properly skeptical and cynical and unromantic about humans’ trap.
The general idea that introspection is never admissible as evidence. It’s fine if you want to verbally categorize introspective evidence as ‘unscientific’ in order to distinguish it from other kinds of evidence, and there are some reasonable grounds for skepticism about how strong many kinds of introspective evidence are. But evidence is still evidence; a Bayesian shouldn’t discard evidence just because it’s hard to share with other agents.
I find that Dennett’s heterophenomenology squares this circle, fully as much as it can be squared in the absence of actual telepathy (or comparable tech).
“Heterophenomenology” might be fine as a meme for encouraging certain kinds of interesting research projects, but there are several things I dislike about how Dennett uses the idea.
Mainly, it’s leaning on the social standards of scientific practice, and on a definition of what “real science” or “good science” is, to argue against propositions like “any given scientist studying consciousness should take into account their own introspective data—e.g., the apparent character of their own visual field—in addition to verbal descriptions, as an additional fact to explain.” This is meant to serve as a cudgel and bulwark against philosophers like David Chalmers, who claim that introspection reveals further facts (/data/explananda) not strictly translatable into verbal reports.
This is framing the issue as one of social-acceptability-to-the-norms-of-scientists or conformity-with-a-definition-of-“science”, whereas correct versions of the argument are Bayesian. (And it’s logically rude to not make the Bayesianness super explicit and clear, given the opportunity; it obscures your premises while making your argument feel more authoritative via its association with “science”.)
We can imagine a weird alien race (or alien AI) that has extremely flawed sensory faculties, and very good introspection. A race like that might be able to bootstrap to good science, via leveraging their introspection to spot systematic ways in which their sensory faculties fail, and sift out the few bits of reliable information about their environments.
Humans are plausibly the opposite: as an accident of evolution, we have much more reliable sensory faculties than introspective faculties. This is a generalization from the history of science and philosophy, and from the psychology literature. Moreover, humans have a track record of being bad at distinguishing cases where their introspection is reliable from cases where it’s unreliable; so it’s hard to be confident of any lines we could draw between the “good introspection” and the “bad introspection”. All of this is good reason to require extra standards of evidence before humanity “takes introspection at face value” and admits it into its canon of Established Knowledge.
Personally, I think consciousness is (in a certain not-clarified-here sense) an illusion, and I’m happy to express confidence that Chalmers’ view is wrong. But I think Dennett has been uniquely bad at articulating the reasons Chalmers is probably wrong, often defaulting to dismissing them or trying to emphasize their social illegitimacy (as “unscientific”).
The “heterophenomenology” meme strikes me as part of that project, whereas a more honest approach would say “yeah, in principle introspective arguments are totally admissible, they just have to do a bit more work than usual because we’re giving them a lower prior (for reasons X, Y, Z)” and “here are specific reasons A, B, C that Chalmers’ arguments don’t meet the evidential bar that’s required for us to take the ‘autophenomenological’ data at face value in this particular case”.
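To make the ‘lower prior, not zero weight’ framing concrete, here is a minimal worked example; the two-level model and all of the numbers are invented purely for illustration, not drawn from anything above. Treat ‘this introspective faculty is reliable about this question’ as an uncertain hypothesis, and compute how much a single introspective report should move you on the underlying claim:

```python
# Minimal illustration: a lower prior on introspective reliability attenuates,
# but does not zero out, the update from an introspective report.
# The two-level model and every number here are invented for illustration only.

def posterior_given_report(prior_claim, p_reliable,
                           p_report_if_true_and_reliable=0.95,
                           p_report_if_false_and_reliable=0.05,
                           p_report_if_unreliable=0.5):
    """P(claim | introspective report), marginalizing over whether the
    reporter's introspective faculty is reliable on this question."""
    p_report_given_true = (p_reliable * p_report_if_true_and_reliable
                           + (1 - p_reliable) * p_report_if_unreliable)
    p_report_given_false = (p_reliable * p_report_if_false_and_reliable
                            + (1 - p_reliable) * p_report_if_unreliable)
    numerator = p_report_given_true * prior_claim
    evidence = numerator + p_report_given_false * (1 - prior_claim)
    return numerator / evidence

for p_reliable in (0.9, 0.5, 0.1):
    print(p_reliable, round(posterior_given_report(0.5, p_reliable), 3))
# reliability prior 0.9 -> posterior ~0.905; 0.5 -> ~0.725; 0.1 -> ~0.545.
# The report always counts for something; it just counts for less.
```

The only point of the sketch is that a lower reliability prior attenuates the update; it never licenses treating the report as zero evidence.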
We can imagine a weird alien race (or alien AI) that has extremely flawed sensory faculties, and very good introspection. A race like that might be able to bootstrap to good science, via leveraging their introspection to spot systematic ways in which their sensory faculties fail, and sift out the few bits of reliable information about their environments.
I don’t think I can imagine this, actually. It seems to me to be somewhat incoherent. How exactly would this race “spot systematic ways in which their sensory faculties fail”? After all, introspection does no good when it comes to correcting errors of perception of the external world…

Or am I misunderstanding your point…?
A simple toy example would be: “You have perfect introspective access to everything about how your brain works, including how your sensory organs work. This allows you to deduce that your external sensory organs provide noise data most of the time, but provide accurate data about the environment anytime you wear blue sunglasses at night.”
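A throwaway simulation of that toy scenario, if it helps; the digit-valued environment and the 10% frequency of the ‘blue sunglasses at night’ condition are placeholders made up purely for illustration, not part of the example as stated:

```python
# Throwaway simulation of the toy scenario above. The digit-valued environment
# and the 10% "blue sunglasses at night" condition are invented placeholders.
import random

random.seed(0)
true_environment = [random.randint(0, 9) for _ in range(1000)]

observations = []
for true_value in true_environment:
    reliable_now = random.random() < 0.1   # the special condition holds ~10% of the time
    reading = true_value if reliable_now else random.randint(0, 9)   # otherwise pure noise
    observations.append((reliable_now, reading))

# An agent with no access to how its sensors work must weight every reading alike;
# one with (by hypothesis) perfect introspective access knows which readings to keep.
naive_hits = sum(reading == truth
                 for (_, reading), truth in zip(observations, true_environment))
kept = [(reading, truth)
        for (reliable_now, reading), truth in zip(observations, true_environment)
        if reliable_now]
informed_hits = sum(reading == truth for reading, truth in kept)

print(naive_hits / len(true_environment))   # roughly 0.19: barely better than chance
print(informed_hits / len(kept))            # 1.0: the sifted readings are all accurate
```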
I confess I have trouble imagining this, but it doesn’t seem contradictory, so, fair enough, I take your point.

I don’t read Dennett as referring to social acceptability or “norms of science” (except insofar as those norms are taken to constitute epistemic best practices from a personal standpoint, which I think Dennett does assume to some degree—but no more than is, in my view, warranted).
a more honest approach would say “yeah, in principle introspective arguments are totally admissible, they just have to do a bit more work than usual because we’re giving them a lower prior (for reasons X, Y, Z)”
Sure. Heterophenomenology is that “more work”. Introspective arguments are admissible; they’re admissible as heterophenomenological evidence.
It is indisputably the case that Chalmers, for instance, makes arguments along the lines of “there are further facts revealed by introspection that can’t be translated into words”. But it is not only not indisputably the case, but indeed can’t ever (without telepathy etc., or maybe not even then) be shown to another person, or perceived by another person, to be the case, that there are further facts revealed by introspection that can’t be translated into words.
Indeed it’s not even clear how you’d demonstrate to yourself that what your introspection reveals is real. Certainly you’re welcome to “take introspection’s word for it”—but then you don’t need science of any kind. That I experience what I experience, seems to me to need no demonstration or proof; how can it be false, after all? Even in principle? But then what use is arguing whether a Bayesian approach to demonstrating this not-in-need-of-demonstration fact is best, or some other approach? Clearly, whatever heterophenomenology (or any other method of investigation) might be concerned with, it’s not that.
But now I’m just reiterating Dennett’s arguments. I guess what I’m saying is, I think your responses to Dennett are mostly mis-aimed. I think the rebuttals are already contained in what he’s written on the subject.
It is indisputably the case that Chalmers, for instance, makes arguments along the lines of “there are further facts revealed by introspection that can’t be translated into words”. But it is not only not indisputably the case
What does “indisputably” mean here in Bayesian terms? A Bayesian’s epistemology is grounded in what evidence that individual has access to, not in what disputes they can win. When Chalmers claims to have “direct” epistemic access to certain facts, the proper response is to provide the arguments for doubting that claim, not to play a verbal sleight-of-hand like Dennett’s (1991, emphasis added):
You are not authoritative about what is happening in you, but only about what seems to be happening in you, and we are giving you total, dictatorial authority over the account of how it seems to you, about what it is like to be you. And if you complain that some parts of how it seems to you are ineffable, we heterophenomenologists will grant that too. What better grounds could we have for believing that you are unable to describe something than that (1) you don’t describe it, and (2) confess that you cannot? Of course you might be lying, but we’ll give you the benefit of the doubt.
It’s intellectually dishonest of Dennett to use the word “ineffable” here to slide between the propositions “I’m unable to describe my experience” and “my experience isn’t translatable in principle”, as it is to slide between Nagel’s term of art “what it’s like to be you” and “how it seems to you”.
Again, I agree with Dennett that Chalmers is factually wrong about his experience (and therefore lacks a certain degree of epistemic “authority” with me, though that’s such a terrible way of phrasing it!). There are good Bayesian arguments against trusting autophenomenology enough for Chalmers’ view to win the day (though Dennett isn’t describing any of them here), and it obviously is possible to take philosophers’ verbal propositions as data to study (cf. also the meta-problem of consciousness), but it’s logically rude to conceal your cruxes, pretend that your method is perfectly neutral and ecumenical, and let the “scientificness” of your proposed methodology do the rhetorical pushing and pulling.
but indeed can’t ever (without telepathy etc., or maybe not even then) be shown to another person, or perceived by another person, to be the case, that there are further facts revealed by introspection that can’t be translated into words.
There’s a version of this claim I agree with (since I’m a physicalist), but the version here is too strong. First, I want to note again that this is equating group epistemology with individual epistemology. But even from a group’s perspective, it’s perfectly possible for “facts revealed by introspection that can’t be translated into words” to be transmitted between people; just provide someone with the verbal prompts (or other environmental stimuli) that will cause them to experience and notice the same introspective data in their own brains.
If that’s too vague, consider this scenario as an analogy: Our universe is a (computable) simulation, running in a larger universe that’s uncomputable. Humans are “dualistic” in the sense that they’re Cartesian agents outside the simulation whose brains contain uncomputable subprocesses, but their sensory experiences and communication with other agents are all via the computable simulation. We could then imagine scenarios where the agents have introspective access to evidence that they’re performing computations too powerful to run in the laws of physics (as they know them), but don’t have output channels expressive enough to demonstrate this fact to others in-simulation; instead, they prompt the other agents to perform the relevant introspective feat themselves.
The other agents can then infer that their minds are plausibly all running on physics that’s stronger than the simulated world’s physics, even though they haven’t found a way to directly demonstrate this (e.g., via neurosurgery on the in-simulation pseudo-brain).
Indeed it’s not even clear how you’d demonstrate to yourself that what your introspection reveals is real.
You can update upward or downward about the reliability of your introspection (either in general, or in particular respects), in the same way you can update upward or downward about the reliability of your sensory perception. E.g., different introspective experiences or faculties can contradict each other, suggest their own unreliability (“I’m introspecting that this all feels like bullshit...”), or contradict other evidence sources.
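As a toy illustration of that kind of bookkeeping (the Beta–Bernoulli model and the sample data are assumptions chosen for illustration, not anything argued for above): keep a running reliability estimate for a particular introspective faculty, and nudge it whenever one of its reports can be checked against some other evidence source.

```python
# Toy bookkeeping: a running reliability estimate for one introspective faculty,
# updated whenever one of its reports can be checked against other evidence.
# The Beta-Bernoulli model and the sample data are assumptions for illustration.

class IntrospectionReliability:
    def __init__(self, prior_agreements=1.0, prior_disagreements=1.0):
        # Beta(a, b) over the chance that this faculty's reports check out.
        self.a = prior_agreements
        self.b = prior_disagreements

    def update(self, report_checked_out: bool) -> None:
        if report_checked_out:
            self.a += 1
        else:
            self.b += 1

    @property
    def expected_reliability(self) -> float:
        return self.a / (self.a + self.b)

faculty = IntrospectionReliability()
# e.g. introspective reports about moods, reaction times, or reasons for a choice,
# later compared against behavioral measures or against other introspective reports
for checked_out in (True, False, False, True, False):
    faculty.update(checked_out)

print(round(faculty.expected_reliability, 2))   # 0.43 after 2 hits and 3 misses
```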
What does “indisputably” mean here in Bayesian terms? A Bayesian’s epistemology is grounded in what evidence that individual has access to, not in what disputes they can win.
Ok… before I respond with anything else, I want to note that this is hardly a reasonable response. “Indisputably” is a word that has several related usages, and while indeed one of them is something sort of like “you won’t actually win any actual debates if you try to take the opposite position”, do you really think the most plausible way to interpret what I said is to assume that that is the usage I had in mind? Especially after I wrote:
I don’t read Dennett as referring to social acceptability or “norms of science” (except insofar as those norms are taken to constitute epistemic best practices from a personal standpoint, which I think Dennett does assume to some degree—but no more than is, in my view, warranted).
So it should be clear that I’m not talking about winning debates, or social acceptability, or any such peripheral nonsense. I am, and have been throughout this discussion, talking about epistemology. Do I really need to scrupulously eschew such (in theory ambiguous but in practice straightforward) turns of phrase like “indisputably”, lest I be treated to a lecture on Bayesian epistemology?
If you really don’t like “indisputably”, substitute any of the following, according to preference:

plainly

manifestly

obviously

clearly

certainly

indubitably

incontrovertibly

with nigh-perfect certainty

… etc., etc.

And now, a substantive response:
When Chalmers claims to have “direct” epistemic access to certain facts, the proper response is to provide the arguments for doubting that claim, not to play a verbal sleight-of-hand like Dennett’s (1991, emphasis added):
You are not authoritative about what is happening in you, but only about what seems to be happening in you, and we are giving you total, dictatorial authority over the account of how it seems to you, about what it is like to be you. And if you complain that some parts of how it seems to you are ineffable, we heterophenomenologists will grant that too. What better grounds could we have for believing that you are unable to describe something than that (1) you don’t describe it, and (2) confess that you cannot? Of course you might be lying, but we’ll give you the benefit of the doubt.
It’s intellectually dishonest of Dennett to use the word “ineffable” here to slide between the propositions “I’m unable to describe my experience” and “my experience isn’t translatable in principle”, as it is to slide between Nagel’s term of art “what it’s like to be you” and “how it seems to you”.
First of all, how in the world could you possibly know that your experience isn’t translatable in principle? That you can’t describe it—that you of course can know. But what additional meaning can it even have, to say that you can’t describe it, and on top of that, it “isn’t translatable in principle”? What does that even mean?
As far as I can tell, Dennett isn’t sliding between anything. There’s just the one meaning: you can’t describe some experience you’re having.
Secondly, it’s not clear that this paragraph is a response to claims about having “‘direct’ epistemic access to certain facts”. (I’d have to reread Consciousness Explained to see the context, but as quoted it seems a bit of a non sequitur.)
… it’s logically rude to conceal your cruxes, pretend that your method is perfectly neutral and ecumenical, and let the “scientificness” of your proposed methodology do the rhetorical pushing and pulling.
I confess I don’t really have much idea what you’re saying here. What’s Dennett concealing, exactly…?
but indeed can’t ever (without telepathy etc., or maybe not even then) be shown to another person, or perceived by another person, to be the case, that there are further facts revealed by introspection that can’t be translated into words.
There’s a version of this claim I agree with (since I’m a physicalist), but the version here is too strong. First, I want to note again that this is equating group epistemology with individual epistemology.
I wasn’t talking about group epistemology here at all, much less equating it with anything.
But even from a group’s perspective, it’s perfectly possible for “facts revealed by introspection that can’t be translated into words” to be transmitted between people; just provide someone with the verbal prompts (or other environmental stimuli) that will cause them to experience and notice the same introspective data in their own brains.
This clearly won’t do; how will you ever know that the verbal prompts (or etc.) are causing the other person to experience, much less to notice, the same “introspective data” in their brain as you experienced and noticed in yours? (How exactly do you even guarantee comparability? What does “same” even mean, across individuals? People vary, you know; and it seems fairly likely even from what we know now, that capacity to experience certain things is present to widely varying degrees in people…)
Why, there are entire reams of philosophy dedicated to precisely this very thorny challenge! (Google “spectrum inversion” sometime…) And in fact I once saw this principle play out in my own life. A musically inclined friend of mine was attempting to teach me the basics of music theory. When his initial explanations got nowhere, we opened someone’s laptop and loaded up a website where you could click buttons and play certain chords or combinations of tones. My friend clicked some buttons, played some chords, and asked me to describe what I heard, which I did… only to see my friend react with astonishment, because what I heard and what he heard turned out to be quite different. (As we later discovered, I have some interesting deficiencies/abnormalities in auditory processing, having to do, inter alia, with ability to perceive pitch.)
Now, how do you propose to cause me to experience “the same introspective data” that my friend experiences when he hears the tones and chords in question—or vice versa? What stimuli, exactly, shall you use—and how would you discover what they might be? What function, precisely, reliably maps arbitrary (stimulus X, individual A) pairs to (stimulus Y, individual B) pairs, such that the “introspective data” that is experienced (and noticed) as a result is the “same” in both cases of a set? And having on hand a candidate such function, how exactly would you ever verify that it is really the desired thing?
If that’s too vague, consider this scenario as an analogy: …
I find such fanciful analogies almost uniformly uninformative, and this one, I’m afraid, is no exception. Even if I were to stretch my brain to imagine this sort of scenario (which is not easy), and carefully consider its implications (which is quite challenging), and take the further step of drawing a conclusion about whether the given hypothetical would indeed work as you say (in which I would have quite low confidence), nevertheless it would still be entirely unclear whether, and how, the analogy mapped back to our actual world, and whether any of the reasoning and the conclusion still held. Best to avoid such things.
Indeed it’s not even clear how you’d demonstrate to yourself that what your introspection reveals is real.
You can update upward or downward about the reliability of your introspection (either in general, or in particular respects), in the same way you can update upward or downward about the reliability of your sensory perception. E.g., different introspective experiences or faculties can contradict each other, suggest their own unreliability (“I’m introspecting that this all feels like bullshit...”), or contradict other evidence sources.
What if there is no “contradiction”, as such? Surely it’s possible for introspection to be deficient or entirely misleading even so? In any case, if introspection is corrigible by comparison with “other evidence sources” (by which you presumably mean, sense data, and experimental and various other observational information acquired via sense data, etc.), then you can hardly be said to have “‘direct’ epistemic access” to anything via said introspection…
When Chalmers claims to have “direct” epistemic access to certain facts, the proper response is to provide the arguments for doubting that claim, not to play a verbal sleight-of-hand like Dennett’s (1991, emphasis added):
Chalmers’ The Conscious Mind was written in 1996, so this is wrong. The wrongness doesn’t seem important to me. (Jackson and Nagel were 1979/1982, and Dennett re-endorsed this passage in 2003.)