Suppose that we show how certain physical processes play the role of qualia within an abstract model of human behavior. “This pattern of neural activities means we should think of this person as seeing the color red,” for instance.
David Chalmers might then say that we have merely solved an “easy problem,” and that what’s missing is whether we can predict that this person—this actual first-person point of view—is actually seeing red.
This is close to what I parody as “Human physical bodies are only approximate agents, so how does this generate the real Platonic agent I know I am inside?”
When I think of myself as an abstract agent in the abstract state of “seeing red,” this is not proof that I am actually an abstract Platonic Agent in the abstract state of seeing red. The person in the parody has been misled by their model of themselves—they model themselves as a real Platonic agent, and so they believe that’s what they have to be.
Once we have described the behavior of the approximate agents that are humans, we don’t need to go on to describe the state of the actual agents hiding inside the humans.
But we know that we do see red. Red is not an invisible spook inside someone else.
We don’t need to bring in agency at all. You are trying to hitch something you can be plausibly eliminativist about to something you can’t.
I’m supposing that we’re conceptualizing people using a model that has internal states. “Agency” of humans is shorthand for “conforms to some complicated psychological model.”
I agree that I do see red. That is to say, the collection of atoms that is my body enters a state that plays the same role in the real world as “seeing red” plays in the folk-psychological model of me. If seeing red makes the psychological model more likely to remember camping as a child, exposure to a red stimulus makes the atoms more likely to go into a state that corresponds to remembering camping.
“No, no,” you say. “That’s not what seeing red is—you’re still disagreeing with me. I don’t mean that my atoms are merely in a correspondence with some state in an approximate model that I use to think about humans, I mean that I am actually in some difficult-to-describe state that actually has parts like the parts of that model.”
“Yes,” I say “—you’re definitely in a state that corresponds to the model.”
“Arrgh, no! I mean when I see red, I really see it!”
“When I see red, I really see it too.”
...
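(To make “plays the same role” concrete, here is a toy sketch. It is entirely my own construction, not part of the original exchange, and every name in it is made up for illustration. It treats the folk-psychological model and the physical system as two small transition tables plus a correspondence map, and checks that stepping the atoms and then translating agrees with translating and then stepping the model.)

```python
# Toy sketch (illustrative only): "plays the same role" read as a
# correspondence between physical states and folk-psychological states.

# The folk-psychological model: abstract states and how events move them.
model_step = {
    ("neutral", "red stimulus"): "seeing red",
    ("seeing red", "time passes"): "remembering camping",
}

# The "physical" system: atom configurations and their dynamics.
atoms_step = {
    ("config_A", "red stimulus"): "config_B",
    ("config_B", "time passes"): "config_C",
}

# The correspondence: which model state each configuration plays the role of.
role = {
    "config_A": "neutral",
    "config_B": "seeing red",
    "config_C": "remembering camping",
}

# The claim being sketched: stepping the atoms and then translating gives
# the same answer as translating and then stepping the model, event by event.
state = "config_A"
for event in ("red stimulus", "time passes"):
    assert role[atoms_step[(state, event)]] == model_step[(role[state], event)]
    state = atoms_step[(state, event)]
```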
It might at this point be good for me to reiterate my claim from the post, that rather than taking things in our notional world and asking “what is the true essence of this thing?”, it’s more philosophically productive to ask “what approximate model of the world has this thing as a basic object?”
Models can omit things that are there as well as include things that aren’t there. That’s the whole problem.
I’m always in the exact state that I am in, and that state includes conscious experience. You can and have built a model which is purely functional and in which Red only features as a functional role or behavioural disposition. But you don’t get to say that your model is in exact two-way correspondence with my reality. You have to show that a model is exact, and that is very difficult; you can’t just assert it.
Why can’t I ask “what does this approximate model leave out”?
If physicist A builds a model that leaves out friction, say, physicist B can validly object to it. And that has nothing whatever to do with “essences” or ontological fundamentalness. No one thinks friction or cow’s legs are fundamental. The rhetoric about essences is a red herring. (Or, if it is valid, surely you can use it to justify any model of any simplicity.) I think the spherical cow model is inaccurate because every cow I have ever seen is squarish with a leg at each corner. That’s an observation, not a metaphysical claim.
Seeing red is more than a role or disposition. That is what you have left out.
Suppose epiphenomenalism is true. We would still need two separate explanations—one explanation of your epiphenomenal activity in terms of made-up epiphenomenology, and a different explanation for how your physical body thinks it’s really seeing red and types up these arguments on LessWrong, despite having no access to your epiphenomena.
The mere existence of that second explanation makes it wrong to have absolute confidence in your own epiphenomenal access. After all, we’ve just described approximate agents that think they have epiphenomenal access, and type and make facial expressions and release hormones as if they do, without needing any epiphenomena at all.
We can imagine the approximate agent made out of atoms, and imagine just what sort of mistake it’s making when it says “no, really, I see red in a special nonphysical way that you have yet to explain” even when it doesn’t have access to the epiphenomena. And then we can endeavor not to make that mistake.
If I, the person typing these words, can Really See Redness in a way that is independent of or additional to a causal explanation of my thoughts and actions, my only honest course of action is to admit that I don’t know about it.
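(A minimal sketch of the point above, again my own toy construction rather than anything from the thread: an agent whose verbal reports are computed from its physical state alone, so that attaching or deleting an “epiphenomenal” field changes nothing about what it says or does.)

```python
# Toy sketch (illustrative only): reports depend on physical state alone;
# an epiphenomenal field can be attached, but nothing ever reads it.

from dataclasses import dataclass

@dataclass
class Agent:
    physical_state: str           # e.g. the neural pattern evoked by red light
    epiphenomenon: object = None  # present or absent; never consulted below

def report(agent: Agent) -> str:
    # The report is a function of physical_state only. Deleting the
    # epiphenomenon field would not change a single branch here.
    if agent.physical_state == "red_pattern":
        return "No, really, I see red in a special nonphysical way!"
    return "Nothing to report."

zombie = Agent("red_pattern")                                # no epiphenomena
haunted = Agent("red_pattern", epiphenomenon="ineffable redness")
assert report(zombie) == report(haunted)  # identical behavior either way
```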
It’s wrong to have absolute confidence in anything. You can’t prove that you are not in a simulation, so you can’t have absolute confidence that there is any real physics.
Of course, I didn’t base anything on absolute confidence.
You can put forward a story where expressions of subjective experience are caused by atoms, and subjective experience itself isn’t mentioned.
I can put forward a story where ouches are caused by pains, and atoms aren’t explicitly mentioned.
Of course you now want to say that the atoms are still there and playing a causal role, but have gone out of focus because I am using high level descriptions. But then I could say that subjective states are identical to aggregates of atoms, and therefore have identical causal powers.
Multiple explanations are always possible, but aren’t necessarily about rival ontologies.
Anyhow, I agree that we have long since been rehashing standard arguments here :P
How likely is it that you would have solved the Hard Problem? Why do people think philosophy is easy, or full of obvious confusions?
About 95%. Because philosophy is easy* and full of obvious confusions.
(* After all, anyone can do it well enough that they can’t see their own mistakes. And with a little more effort, you can’t even see your mistakes when they’re pointed out to you. That’s, like, the definition of easy, right?)
95% isn’t all that high a confidence, if we put aside “how dare you rate yourself so highly?” type arguments for a bit. I wouldn’t trust a parachute that had a 95% chance of opening. Most of the remaining 5% is not dualism being true or us needing a new kind of science, it’s just me having misunderstood something important.
Do you have any evidence for this claim, besides a subjective feeling of certainty?
Subjective experience can’t be demonstrated objectively. On the other hand, demanding objective evidence of subjectivity biases the discussion away from taking consciousness seriously.
I don’t have a way out of the impasse. The debate amongst professional philosophers is logjammed, so this one is as well. (However, this demonstrates a meta-level truth: there is no neutral epistemology.)