Easy :P Just build a language module for the Bayesian model that, when asked about inner experience, starts using words to describe the postprocessed sense data it uses to reason about the world.
Of course this is a joke, hahaha, because humans have Real Experience that is totally different from just piping processed sense data out to be described in response to a verbal cue.
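(If it helps to make the joke concrete, here is a minimal, purely illustrative sketch of the kind of thing I mean. Everything in it, the agent, its fixed likelihoods, the language_module function, is made up for illustration; it is not a serious model of experience.)

```python
# Purely illustrative toy: an agent whose only access to the world is
# postprocessed sense data, plus a "language module" that, when asked about
# inner experience, simply verbalizes that postprocessed data.
# All names and numbers here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ToyBayesianAgent:
    # Posterior belief that the light in front of the sensor is on.
    p_light_on: float = 0.5
    # Postprocessed sense data the agent actually reasons over.
    percepts: list = field(default_factory=list)

    def sense(self, raw_reading: float) -> None:
        """Postprocess a raw reading into a percept and update the posterior."""
        percept = "bright" if raw_reading > 0.5 else "dark"
        self.percepts.append(percept)
        # Crude Bayesian update with fixed likelihoods P(percept | light on/off).
        like_on = 0.9 if percept == "bright" else 0.1
        like_off = 0.2 if percept == "bright" else 0.8
        numer = like_on * self.p_light_on
        self.p_light_on = numer / (numer + like_off * (1 - self.p_light_on))

    def language_module(self, prompt: str) -> str:
        """When asked about inner experience, describe the postprocessed data."""
        if "experience" in prompt.lower():
            latest = self.percepts[-1] if self.percepts else "nothing in particular"
            return (f"I am experiencing {latest}; "
                    f"I believe the light is on with probability {self.p_light_on:.2f}.")
        return "I can only talk about my percepts."


if __name__ == "__main__":
    agent = ToyBayesianAgent()
    for reading in (0.9, 0.8, 0.3):
        agent.sense(reading)
    print(agent.language_module("What is your inner experience right now?"))
```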
The flippant-joke-within-flippant-joke format makes this hard to reply to or get anything out of.
Fair enough, my only defense is that I thought you’d find it funny.
A more serious answer to the homunculus problem as stated is simply levels of description: one of our ways of talking about and modeling humans in terms of experience (particularly inaccurate experience) tends to reserve a spot for an inner experiencer, while our way of talking about humans in terms of calcium channels tends not to. Neither model is strictly necessary for navigating the world; it's just that with the very question "Why does any explanation of subjective experience invoke an experiencer?" you have baked the answer into the question by asking it in a way of talking that saves a spot for the experiencer. If we used a world-model without homunculi, the phrase "subjective experience" would mean something other than what it does in that question.
There is no way out of this from within. As Sean Carroll likes to point out, "why" questions have answers (are valid questions) within a causal model of the world, but aren't valid about the entire causal model itself. If we want to answer questions about why a way of talking about and modeling the world is the way it is, we can only do that within a broader model of the world that contains the first (edit: not always true, but it seems true in this case), and the very necessity of linking the smaller model to a broader one means the words don't mean quite the same things in the broader way of talking about the world. Nor can we answer "why is the world just like my model says it is?" without either tautology or recourse to a bigger model.
We might as well use the Hard Problem Of Consciousness here. Start with a model of the world that has consciousness explicitly in it. Then ask why the world is like it is in the model. If the explanation stays inside the original model, it is a tautology, and if it uses a different model, it’s not answering the original question because all the terms mean different things. The neuroscientists’ hard problem of consciousness, as you call it, is in the second camp—it says “skip that original question. We’ll be more satisfied by answering a different question.”
This homunculus problem seems to be a similar sort of thing, one layer out. The provocative version has an unsatisfying, non-explaining answer, but we might be satisfied by asking related yet different questions like “why is talking about the world in terms of experience and experiencer a good idea for human-like creatures?”
Then ask why the world is like it is in the model. If the explanation stays inside the original model, it is a tautology, and if it uses a different model, it's not answering the original question because all the terms mean different things.

There are two huge assumptions there.
One is that everything within a model is tautologous in a deprecatory sense, a sense that renders it worthless.
The other is that any model features a unique semantics, incomparable with any other model.
The axioms of a system are tautologies, and assuming something as an axiom is widely regarded as low value, as not really explaining it. The theorems or proofs within a system can also be regarded as tautologies, but it can take a lot of work to derive them, and their subjective value is correspondingly higher. So, a derivation of facts about subjective experience from accepted principles of physics would count as both an explanation of phenomenality and a solution to the hard problem of consciousness, but a mere assumption that qualia exist would not.
It's much more standard to assume that there is some semantic continuity between different theories than none. That's straightforwardly demonstrated by the fact that people tend to say Einstein had a better theory of gravity than Newton, and so on.
Good points!
The specific case here is why-questions about bits of a model of the world (because I'm making the move to say it's important that certain why-questions about mental stuff aren't just raw data; they are asked about pieces of a model of mental phenomena). For example, suppose I think that the sky is literally a big sphere around the world, and it has the property of being blue in the day and starry in the night. If I wonder why the sky is blue, this pretty obviously isn't going to be a logical consequence of some other part of the model. If I had a more modern model of the sky, its blueness might be a logical consequence of other things, but I wouldn't mean quite the same thing by "sky."
So my claim about different semantics isn't that you can't have any different models with overlapping semantics; it's specifically about going from a model where some datum (e.g. looking up and seeing blue) is a trivial consequence to one where it's a nontrivial consequence. I'm sure it's not totally impossible for the meanings to be absolutely identical before and after, but I think it's somewhere between exponentially unlikely and measure zero.
Why? You seem to be appealing to a theory of meaning that you haven't made explicit.
Edit:
I should have paid more attention to your "absolutely". I don't have any way of guaranteeing that meanings are absolutely stable across theories, but I don't think they change completely, either. Finding the right compromise is an unsolved problem.
Because there is no fixed and settled theory of meaning.
Right. Rather than having a particular definition of meaning, I’m more thinking about the social aspects of explanation. If someone could say “There are two ways of talking about this same part of the world, and both ways use the same word, but these two ways of using the word actually mean different things” and not get laughed out of the room, then that means something interesting is going on if I try to answer a question posed in one way of talking by making recourse to the other.
How does that apply to consciousness?
If I had a more modern model of the sky, its blueness might be a logical consequence of other things, but I wouldn't mean quite the same thing by "sky."

Yet it would be an alternative theory of the sky, not a theory of something different.
And note that what a theory asserts about a term doesn’t have to be part of the meaning of a term.
Somewhat along the lines of what TAG said, I would respond that this does seem pretty related to what is going on, but it’s not clear that all models with room for an experiencer make that experiencer out to be a homunculus in a problematic way.
If we make “experience” something like the output of our world-model, then it would seem necessarily non-physical, as it never interacts.
But we might find that we can give it other roles.
I mean, why? You just double-unjoke it and get "the way to talk about it is to just talk about piping processed sense data". Which may not be that different from the homunculus model, but then again I'm not sure how it's problematic in the visual illusion example.
The problem (for me) was the plausible deniability about which opinion was real (if any), together with the lack of any attempt to explain/justify.