Learn to recognize that the parts of your brain that handle text generation and output are no more “you” than the parts of your brain that handle motor reflex control.
I’d certainly call them much more significant to my identity than, e.g., my deltoid muscle, or some of the motor-function parts of my brain.
It may be useful to recognize that this is a choice, rather than an innate principle of identity. The parts that speak are just modules, just like the parts that handle motor control. They can (and often do) run autonomously, and then the module that handles generating a coherent narrative stitches together an explanation of why you “decided” to cause whatever they happened to generate.
This sounds like a theory of identity as epiphenomenal homunculus. A module whose job is to sit there weaving a narrative, but which has no effect on anything outside itself (except to make the speech module utter its narrative from time to time). “Mr Volition”, as Greg Egan calls it in one of his stories. Is that your view?
More or less, yes. It does have some effect on things outside itself, of course, in that its ‘narrative’ tends to influence our emotional investment in situations, which in turn influences our reactions.
It seems to me that the Mr. Volition theory suffers from the same logical flaw as p-zombies. How would a non-conscious entity, a p-zombie, come to talk about consciousness? And how does an epiphenomenon come to think it’s in charge, how does it even arrive at the very idea of “being in charge”, if it was never in charge of anything?
An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.
By that analogy, then, fake gods can exist only because there is such a thing as real gods; fake ghosts can only exist because there is such a thing as real ghosts; fake magic can only exist because there is such a thing as real magic.
It’s perfectly possible to be ontologically mistaken about the nature of one’s world.
Indeed. There is real agency, so people have imagined really big agents that created and rule the world. People’s consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death. People’s actions appear to happen purely by their intention, and they imagine doing arbitrary things purely by intention. These are the real things that the fakes, pretences, or errors are based on.
But how do the p-zombie and the homunculus even get to the point of having their mistaken ontology?
The p-zombie doesn’t, because the p-zombie is not a logically consistent concept. Imagine if there were a word that meant “four-sided triangle”—that’s the level of absurdity that the ‘p-zombie’ idea represents.
On the other hand, the epiphenomenal consciousness (for which I’ll accept the appellation ‘homunculus’ until a more consistent and accurate one occurs to me) is simply mistaken in that it is drawing too large a boundary in some respects, and too small a boundary in others. It’s drawing a line around certain phenomena and ascribing a causal relationship between those and its own so-called ‘agency’, while excluding others. The algorithm that draws those lines doesn’t have a particularly strong map-territory correlation; it just happens to be one of those evo-psych things that developed and self-reinforced because it worked in the ancestral environment.
Note that I never claimed that “agency” and “volition” are nonexistent on the whole; merely that the vast majority of what people internally consider “agency” and “volition” isn’t.
EDIT: And I see that you’ve added some to the comment I’m replying to, here. In particular, this stood out:
> People’s consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death.
I don’t believe that “my” consciousness persists after sleep. I believe that a new consciousness generates itself upon waking, and pieces itself together using the memories it has access to as a consequence of being generated by “my” brain; but I don’t think that the creature that will wake up tomorrow is “me” in the same way that I am. I continue to use words like “me” and “I” for two reasons:
1. Social convenience—it’s damn hard to get along with other hominids without at least pretending to share their cultural assumptions.
2. It is, admittedly, an incredibly persistent illusion. However, it is a logically incoherent illusion, and I have upon occasion pierced it and seen others pierce it, so I’m not entirely inclined to give it ontological reality with p=1.0 anymore.
Do you believe that the creature you are now (as you read this parenthetical expression) is “you” in the same way as the creature you are now (as you read this parenthetical expression)?
If so, on what basis?
Yes(ish), on the basis that the change between me(expr1) and me(expr2) is small enough that assigning them a single consistent identity is more convenient than acknowledging the differences.
But if I’m operating in a more rigorous context, then no; under most circumstances that appear to require epistemological rigor, it seems better to taboo concepts like “I” and “is” altogether.
(nods) Fair enough.
I share something like this attitude, but in normal non-rigorous contexts I treat me-before-sleep and me-after-sleep as equally me in much the same way as you do me(expr1) and me(expr2).
More generally, my non-rigorous standard for “me” is such that all of my remembered states when I wasn’t sleeping, delirious, or younger than 16 or so unambiguously qualify for “me”dom, despite varying rather broadly amongst themselves. This is mostly because the maximum variation along salient parameters among that set of states seems significantly smaller than the minimum variations between that set and the various other sets of states I observe others demonstrating. (If I lived in a community seeded by copies of myself-as-of-five-minutes ago who could transfer memories among one another, I can imagine my notion of “I” changing radically.)
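Put slightly more formally, that standard is a clustering condition: a set of states counts as one “me” when the largest variation within the set is smaller than the smallest variation between the set and anyone else’s states. Here is a minimal sketch of that test, assuming, purely for illustration, that mental states can be summarized as numeric feature vectors and that Euclidean distance stands in for “variation along salient parameters” (both assumptions are mine, not the commenter’s):

```python
import itertools
import math

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def counts_as_one_self(my_states, other_states):
    # True when the maximum variation among my remembered states is smaller
    # than the minimum variation between any of my states and anyone else's.
    max_within = max(dist(a, b) for a, b in itertools.combinations(my_states, 2))
    min_between = min(dist(a, b) for a in my_states for b in other_states)
    return max_within < min_between

# Toy example: three mutually similar states vs. two distant ones.
me = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9)]
others = [(5.0, 7.0), (6.0, 8.0)]
print(counts_as_one_self(me, others))  # True: the set clusters as one "me"
```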
Nice! I like that reasoning.
I personally experience a somewhat less coherent sense of self, and what sense of self I do experience seems particularly maladaptive to my environment, so we definitely seem to have different epistemological and pragmatic goals—but I think we’re applying very similar reasoning to arrive at our premises.
So in the following sentence...
“I am a construction worker”
Can you taboo ‘I’ and ‘am’ for me?
This body works construction.
Jobs are a particularly egregious case where tabooing “is” seems like a good idea—do you find the idea that people “are” their jobs a particularly useful encapsulation of the human experience? Do you, personally, find yourself fully encapsulated by the ritualized economic actions you perform?
But if ‘I’ differ day to day, then doesn’t this body differ day to day too?
I am fully and happily encapsulated by my job, though I think I may have the only job where this is really possible.
Certainly. How far do you want to go? Maps are not territories, but some maps provide useful representations of territories for certain contexts and purposes.
The danger represented by “I” and “is” comes from their tendency to blow away the map-territory relation, and to convince the reader that an identity exists between a particular concept and a particular phenomenon.
Is the camel’s nose the same thing as his tail? Are the nose and the tail parts of the same thing? What needs tabooing is “same” and “thing”.
I have also found that process useful (although, as with ‘I’, there are contexts where it is very cumbersome to avoid using them).
> An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.
Suppose I am standing next to a wall so high that I am left with the subjective impression that it just goes on forever and ever, with no upper bound. Or next to a chasm so deep that I am left with the subjective impression that it’s bottomless.
Would you say these subjective impressions are impossible?
If possible, would you say they aren’t illusory?
My own answer would be that such subjective impressions are both illusory and possible, but that this is not evidence of the existence of such things as real bottomless pits and infinitely tall walls. Rather, they are indications that my imagination is capable of creating synthetic/composite data structures.
Mesh mail “mithril” vest, $335.
Setting aside the question of whether this is fake Iron Man armor, or a real costume of the fake Iron Man, or a fake costume designed after the fake Iron Man portrayed by special-effects artists in the movies, I think an illusion can be anything that triggers a category recognition by matching some of the features strongly enough to trigger the recognition, while failing to match a significant number of the other features that are harder to detect at first.
That’s not fake mithril, it’s pretend mithril.
To have the recognition, there must already have been a category to recognise.
A tape recorder is a non-conscious entity. I can get a tape recorder to talk about consciousness quite easily.
Or are you asking how it would decide to talk about consciousness? It’s a bit ambiguous.
I think it’s not an epiphenomenon; it’s just wired in more circuitously than people believe. It has effects; it just doesn’t have some effects that we tend to ascribe to it, like decision-making and high-level thought.
> How would a non-conscious entity, a p-zombie, come to talk about consciousness?
By functional equivalence. A zombie Chalmers is bound to utter sentences asserting its possession of qualia; a zombie Dennett will utter sentences denying the same.
The only get-out is to claim that it is not really talking at all.
The epiphenomenal homunculus theory claims that there’s nothing but p-zombies, so there are no conscious beings for them to be functionally equivalent to. After all, as the alien that has just materialised on my monitor has pointed out to me, no humans have zardlequeep (approximate transcription), and they don’t go around insisting that they do. They don’t even have the concept to talk about.
The theory that there is nothing but zombies runs into the difficulty of explaining why many of them would believe they are non-zombies. The standard p-zombie argument, that you can have qualia-less functional duplicates of non-zombies, does not have that problem.
The theory that there is nothing but zombies runs into the much bigger difficulty of explaining to myself why I’m a zombie. When I poke myself with a needle, I sure as hell have the quale of pain.
And don’t tell me it’s an illusion—any illusion is a quale in itself.
Don’t tell me, tell Dennett.
The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious. It’s a short road (for a philosopher) to then argue that consciousness plays no role, and we’re back with consciousness as either an epiphenomenon or non-existent, and the problem of why—especially when consciousness is conceded to exist, but to cause nothing—the non-conscious system claims to be conscious.
Even worse is the question of how the word “conscious” can possibly even refer to this thing that is claimed to be epiphenomenal, since the word can’t have been invented in response to the existence or observations of consciousness (since there aren’t any observations). And in fact there is nothing to allow a human to distinguish between this thing and every other thing that has never been observed, so in a way the claim that a person is “conscious” is perfectly empty.
ETA: Well, of course one can argue that it is defined intensionally, like “a unicorn is a horse with a single horn extending from its head, and [various magical properties]” which does define a meaningful predicate even if a unicorn has never been seen. But in that case any human’s claim to have a consciousness is perfectly evidence-free, since there are no observations of it with which to verify that it (to the extent that you can even refer to a particular unobservable thing) has the relevant properties.
> The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious.
Yes. That’s the standard epiphenomenalism objection.
> It’s a short road (for a philosopher) to then argue that consciousness plays no role,
Often a bit too short.
I scrawl on a rock “I am conscious.” Is the rock talking about consciousness?
No, you are.
I run a program that randomly outputs strings. One day it outputs the string “I am conscious.” Is the program talking about consciousness? Am I?
No, see nsheppard’s comment.
Maybe I’m being unnecessarily cryptic. My point is that when you say that something is “talking about consciousness,” you’re assigning meaning to what is ultimately a particular sequence of vibrations of the air (or a particular pattern of pigment on a rock, or a particular sequence of ASCII characters on a screen). I don’t need a soul to “talk about souls,” and I don’t need to be conscious to “talk about consciousness”: it just needs to happen to be the case that my mouth emits a particular sequence of vibrations in the air that you’re inclined to interpret in a particular way (but that interpretation is in your map, not the territory).
In other words, I’m trying to dissolve the question you’re asking. Am I making sense?
Not yet. I really think you need to read the GLUT post that nsheppard linked to.
> I don’t need a soul to “talk about souls,” and I don’t need to be conscious to “talk about consciousness”
You do need to have those concepts, though, and concepts cannot arise without there being something that gave rise to them. That something may not have all the properties one ascribes to it (e.g. magical powers), but discovering that one was mistaken about some aspects does not allow one to conclude that there is no such thing. One still has to discover what the right account of it is.
If consciousness is an illusion, what experiences the illusion?
> it just needs to happen to be the case that my mouth emits a particular sequence of vibrations in the air
This falls foul of the GAZP v. GLUT thing. It cannot “just happen to be the case”. When you pull out for attention the case where a random process generates something that appears to be about consciousness, out of all the other random strings, you’ve used your own concept of consciousness to do that.
I’ve read GLUT. Have you read The Zombie Preacher of Somerset?
I think so; at least, I have now. (I don’t know why someone would downvote your comment, it wasn’t me.) So, something went wrong in his head, to the point that asking “was he, or was he not, conscious” is too abstract a question to ask. Nowadays, we’d want to do science to someone like that, to try to find out what was physically going on.
Sure, I’m happy with that interpretation.
> I don’t need to be conscious to “talk about consciousness”
That is not obvious. You do need to be a language-user to use language, you do need to know English to communicate in English, and so on. If consciousness involves things like self-reflection and volition, you do need to be conscious to intentionally use language to express your reflections on your own consciousness.
In the same way that a philosophy paper does… yes. Of course, the rock is just a medium for your attempt at communication.
I write a computer program that outputs every possible sequence of 16 characters to a different monitor. Is the monitor which outputs ‘I am conscious’ talking about consciousness in the same way the rock is? Whose attempt at communication is it a medium for?
Your decision to point out the particular monitor displaying this message as an example of something imparts information about your mental state in exactly the same way that your decision to pick a particular sequence of 16 characters out of platonia to engrave on a rock does.
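To make that concrete, here is a minimal sketch of the noise-source setup (an illustration of the argument only; none of the details below come from the thread). The generator contains no concept of consciousness anywhere; the string “I am conscious” appears only in the observer-side filter, which is where the selection, and hence the aboutness, lives:

```python
import random
import string

rng = random.Random(0)  # seeded so the run is reproducible

def random_utterance(length=14):
    # A pure noise source: uniformly random characters, with no concept of
    # consciousness (or of anything else) represented anywhere inside it.
    return "".join(rng.choice(string.ascii_lowercase + " ") for _ in range(length))

# The concept "consciousness" lives entirely in this observer-side filter:
# matching against the target string uses *my* concept, not the generator's.
target = "i am conscious"
hits = sum(1 for _ in range(100_000) if random_utterance() == target)
print(hits)  # almost certainly 0; the filter, not the generator, knows the target
```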
See also: on GLUTs.
The reader’s. Pareidolia is a signal-processing system’s attempt to find a signal.
On a long enough timeline, all random noise generators become hidden word puzzles.
Why would we have these modules that seem quite complex, and likely to negatively affect fitness (thinking’s expensive), if they don’t do anything? What are the odds of this becoming prevalent without a favourable selection pressure?
High, if they happen to be foundational.
Sometimes you get spandrels, and sometimes you get systems built on foundations that are no longer what we would call “adaptive”, but that can’t be removed without crashing systems that are adaptive.
Evo-psych just-so stories are cheap.
Here’s one: it turns out that ascribing consistent identity to nominal entities is a side-effect of one of the most easily constructed implementations of “predict the behavior of my environment.” Predicting the behavior of my environment is enormously useful, so the first mutant to construct this implementation had a huge advantage. Pretty soon everyone was doing it, and competing for who could do it best, and we had foreclosed the evolutionary paths that allowed environmental prediction without identity-ascribing. So the selection pressure for environmental prediction also produced (as an incidental side-effect) selection pressure for identity-ascribing, despite the identity-ascribing itself being basically useless, and here we are.
I have no idea if that story is true or not; I’m not sure what I’d expect to see differentially were it true or false. My point is more that I’m skeptical of “why would our brains do this if it weren’t a useful thing to do?” as a reason for believing that everything my brain does is useful.