On desiring subjective states (post 3 of 3)

Carol puts her left hand in a bucket of hot water, and lets it acclimate for a few minutes. Meanwhile her right hand is acclimating to a bucket of ice water. Then she plunges both hands into a bucket of lukewarm water. The lukewarm water feels very different to her two hands. To the left hand, it feels very chilly. To the right hand, it feels very hot. When asked to tell the temperature of the lukewarm water without looking at the thermocouple readout, she doesn’t know. Asked to guess, she’s off by a considerable margin.
Next Carol flips the thermocouple readout to face her (as shown) and practices. Using lukewarm water at various temperatures from 10 to 35 C, she gets a feel for how her hot-adapted and cold-adapted hands respond to these middling temperatures. Now she makes a guess—starting with a random hand, then moving the other one and revising the guess if necessary—each time before looking at the thermocouple. What will happen? I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that she will get quite good at it.
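To make the calibration idea concrete, here is a minimal sketch in Python of the kind of mapping Carol could be learning. Everything in it is an assumption for illustration: the two hands’ response functions, their slopes and offsets, and the noise level are invented, and the post reports no such model. The point is only that two very different felt signals can each be regressed back onto the same objective temperature.

```python
import random

# Toy sketch of Carol's practice phase (not an experiment the post reports):
# each adapted hand turns the true temperature into a different "felt
# intensity"; practice learns a per-hand mapping from felt intensity back
# to temperature. All constants below are made up.

def felt_by_hot_adapted(temp_c, rng):
    # Hot-adapted hand: lukewarm water feels cool; invented linear response plus noise.
    return 0.8 * temp_c - 12.0 + rng.gauss(0, 1.0)

def felt_by_cold_adapted(temp_c, rng):
    # Cold-adapted hand: the same water feels warm; a different invented response.
    return 1.1 * temp_c + 9.0 + rng.gauss(0, 1.0)

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def practice(sense_fn, rng, trials=200):
    # Feel water of known temperature (10-35 C, as in the post) and learn
    # a mapping from felt intensity back to temperature.
    temps = [rng.uniform(10, 35) for _ in range(trials)]
    felts = [sense_fn(t, rng) for t in temps]
    return fit_line(felts, temps)

rng = random.Random(0)
models = {
    "hot-adapted": (felt_by_hot_adapted, practice(felt_by_hot_adapted, rng)),
    "cold-adapted": (felt_by_cold_adapted, practice(felt_by_cold_adapted, rng)),
}

true_temp = 20.0
for name, (sense_fn, (a, b)) in models.items():
    felt = sense_fn(true_temp, rng)
    print(f"{name} hand: felt intensity {felt:6.1f} -> estimate {a * felt + b:.1f} C")
```

Run as written, the two hands report very different felt intensities for the same 20 C bucket, yet both fitted mappings land near 20, which is the shape of Carol’s answer below: it still feels very different, but both hands “feel like 20”.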
We bring Carol a bucket of 20 C water (without telling her) and let her adapt her hands first as usual. “What do you think the temperature is?” we ask. She moves her cold hand first. “Feels like about 20,” she says. Hot hand follows. “Yup, feels like 20.”
“Wait,” we ask. “You said feels-like-20 for both hands. Does this mean the bucket no longer feels different to your two different hands, like it did when you started?”
“No!” she replies. “Are you crazy? It still feels very different subjectively; I’ve just learned to see past that to identify the actual temperature.”
In addition to reports on the external world, we perceive some internal states that typically (but not invariably) can serve as signals about our environment. Let’s tentatively call these states Subjectively Identified Aspects of Perception (SIAPs). Even though these states aren’t strictly necessary to know what’s going on in the environment—Carol’s example shows that the sensation felt by one hand isn’t necessary to know that the water is 20 C, because the other hand knows this via a different sensation—they still matter to us. As Eliezer notes:
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I’d have to say no. I can’t think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
Subjectivity matters. (I am not implying that Eliezer would agree with anything else I say about subjectivity.)
Why would evolution build beings that sense their internal states? Why not just have the organism know the objective facts of survival and reproduction, and be done with it? One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts. But another is that this separation of sense-data into “subjective” and “objective” might help us learn to overcome certain sorts of perceptual illusion—as Carol does, above. And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction—like pain, or feelings of erotic love. This last hypothesis could explain why we value some subjective aspects so much, too.
Different SIAPs can lead to the same intelligent behavioral performance, such as identifying 20 C water. But that doesn’t mean Carol has to value the two routes to successful temperature-telling equally. And if someone proposed to give her radically different, previously unknown subjectively identifiable aspects of experience as new routes to the kinds of knowledge she gets from perception, she might reasonably balk, especially if this were to apply to all the senses. And if the subjectively identifiable aspects of desire and emotion (SIADs, SIAEs) were also to be replaced, she might reasonably balk much harder. She might reasonably doubt that the survivor of this process would be her, or even human, in any sense meaningful to her.
Would it be possible to have an intelligent being whose cognition of the world is mediated by no SIAPs? I suspect not, if that being is well-designed. See above on “why would evolution build beings that sense internal states.”
If you’ve read all 3 posts, you’ve probably gotten the point of the Gasoline Gal story by now. But let me go through some of the mappings from source to target in that analogy. A car that, when you take it on a tour, accelerates well, handles nicely, makes the right amount of noise, and so on—one that passes the touring test (groan)—is like a being that can identify objective facts in its environment. An internal combustion engine is like Carol’s subjective cold-sensation in her left hand—one way among others to bring about the externally observable behavior. (By “externally observable” I mean “without looking under the hood”.) In Carol’s case, that behavior is identifying 20 C water. In the engine’s case, it’s the acceleration of the car. Note that in neither case is this internal factor causally inert. If you take it away and don’t replace it with anything, or even if you replace it with something that doesn’t fit, the useful external behavior will be severely impaired. The mere fact that you can, with a lot of other reworking, replace an internal combustion engine with a fuel cell does not even begin to show that the engine does nothing.
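For readers who want the “replaceable but not causally inert” point spelled out in code, here is a toy rendering of the analogy. The class names, the interface, and the numbers are all invented for illustration; it is not meant as anything more than the mapping just described.

```python
# Toy rendering of the analogy; names and numbers are invented for illustration.

class CombustionDrivetrain:
    """Torque from burning fuel: one way, among others, to produce the behavior."""
    def accelerate(self, throttle: float) -> float:
        return 150.0 * throttle  # made-up torque

class FuelCellDrivetrain:
    """Same external result via a fuel cell and electric motor: a different way."""
    def accelerate(self, throttle: float) -> float:
        return 150.0 * throttle  # made-up torque

class GuttedCar:
    """'Take it away and don't replace it with anything': the behavior collapses."""
    def accelerate(self, throttle: float) -> float:
        return 0.0

def passes_touring_test(car) -> bool:
    # Looks only at externally observable behavior, never under the hood.
    return car.accelerate(0.5) > 50.0

for car in (CombustionDrivetrain(), FuelCellDrivetrain(), GuttedCar()):
    print(type(car).__name__, "passes the touring test:", passes_touring_test(car))
```

The first two pass while the gutted car does not, which is the sense in which the engine is replaceable yet still does real causal work.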
And Gasoline Gal’s passion for internal combustion engines is like my—and I dare say most people’s—attachment to the subjective internal aspects of perception and emotion that we know and love. The words and concepts we use for these things—pain, passion, elation, to take some easier examples—refer to the actual processes in human beings that drive the related behavior. (Regarding which, neurology has more to learn.) As I mentioned in my last post, a desire can form with a particular referent based on early experience and remain focused on that event-type permanently. If one constructs radically different processes that achieve similar external results, analogous to the fuel-cell-driven car, one gets radically different subjectivity—which we can only denote by pointing simultaneously to the “under the hood” construction of these new beings and the behavior associated with their SIAPs.
Needless to say, this complicates uploading.
One more thing: are SIAPs qualia? A substantial minority of philosophers, or maybe a plurality, uses “qualia” in a sufficiently similar way that I could probably use that word here. But another substantial minority loads it with additional baggage. And that leads to pointless misunderstandings, pigeonholing, and straw men. Hence, “SIAPs”. But feel free to use “qualia” in the comments if you’re more comfortable with that term, bearing my caveats in mind.