On desiring subjective states (post 3 of 3)
Carol puts her left hand in a bucket of hot water, and lets it acclimate for a few minutes. Meanwhile her right hand is acclimating to a bucket of ice water. Then she plunges both hands into a bucket of lukewarm water. The lukewarm water feels very different to her two hands. To the left hand, it feels very chilly. To the right hand, it feels very hot. When asked to tell the temperature of the lukewarm water without looking at the thermocouple readout, she doesn’t know. Asked to guess, she’s off by a considerable margin.
Next Carol flips the thermocouple readout to face her, and practices. Using lukewarm water at various temperatures from 10 to 35 C, she gets a feel for how hot-adapted and cold-adapted hands respond to the various middling temperatures. Now she makes a guess—starting with a random hand, then moving the other one and revising the guess if necessary—each time before looking at the thermocouple. What will happen? I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that she will get quite good at it.
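If it helps to make the calibration step concrete, here is a minimal toy model of Carol’s practice phase. It is only a sketch: the contrast-signal model of adaptation, the adaptation temperatures, and the noise level are all made up for illustration, not claims about real thermoreceptors. Each hand produces a very different raw signal for the same water, and a separately learned linear mapping turns either signal into roughly the right objective temperature.

```python
import random

# Toy model (illustrative assumption, not real thermoreception): an adapted
# hand's "felt" signal is the contrast between the water temperature and the
# temperature the hand has adapted to, plus a little noise.
def felt_signal(water_temp, adapted_temp, noise=0.5):
    return (water_temp - adapted_temp) + random.gauss(0, noise)

def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def practice(adapted_temp, trials=200):
    """Carol's practice phase: learn to map this hand's signal to the readout."""
    temps = [random.uniform(10, 35) for _ in range(trials)]
    signals = [felt_signal(t, adapted_temp) for t in temps]
    return fit_line(signals, temps)

def estimate(mapping, signal):
    a, b = mapping
    return a * signal + b

random.seed(0)
HOT_ADAPT, COLD_ADAPT = 40.0, 5.0        # adaptation temperatures (made up)
hot_map, cold_map = practice(HOT_ADAPT), practice(COLD_ADAPT)

water = 20.0
s_hot = felt_signal(water, HOT_ADAPT)    # about -20: feels chilly to the hot hand
s_cold = felt_signal(water, COLD_ADAPT)  # about +15: feels hot to the cold hand
print(f"hot hand:  raw signal {s_hot:6.1f} -> estimate {estimate(hot_map, s_hot):.1f} C")
print(f"cold hand: raw signal {s_cold:6.1f} -> estimate {estimate(cold_map, s_cold):.1f} C")
# Two very different raw signals, one (roughly) correct objective report from each.
```

The regression itself is beside the point; what matters is that the same objective report can be reached from two quite different internal signals.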
We bring Carol a bucket of 20 C water (without telling her the temperature) and let her adapt her hands first as usual. “What do you think the temperature is?” we ask. She moves her cold hand first. “Feels like about 20,” she says. Hot hand follows. “Yup, feels like 20.”
“Wait,” we ask. “You said feels-like-20 for both hands. Does this mean the bucket no longer feels different to your two different hands, like it did when you started?”
“No!” she replies. “Are you crazy? It still feels very different subjectively; I’ve just learned to see past that to identify the actual temperature.”
In addition to reports on the external world, we perceive some internal states that typically (but not invariably) can serve as signals about our environment. Let’s tentatively call these states Subjectively Identified Aspects of Perception (SIAPs). Even though these states aren’t strictly necessary to know what’s going on in the environment—Carol’s example shows that the sensation felt by one hand isn’t necessary to know that the water is 20 C, because the other hand knows this via a different sensation—they still matter to us. As Eliezer notes:
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I’d have to say no. I can’t think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
Subjectivity matters. (I am not implying that Eliezer would agree with anything else I say about subjectivity.)
Why would evolution build beings that sense their internal states? Why not just have the organism know the objective facts of survival and reproduction, and be done with it? One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts. But another is that this separation of sense-data into “subjective” and “objective” might help us learn to overcome certain sorts of perceptual illusion—as Carol does, above. And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction—like pain, or feelings of erotic love. This last hypothesis could explain why we value some subjective aspects so much, too.
Different SIAPs can lead to the same intelligent behavioral performance, such as identifying 20 C water. But that doesn’t mean Carol has to value the two routes to successful temperature-telling equally. And, if someone proposed to give her radically different, previously unknown, subjectively identifiable aspects of experience, as new routes to the kinds of knowledge she gets from perception, she might reasonably balk. Especially if this were to apply to all the senses. And if the subjectively identifiable aspects of desire and emotion (SIADs, SIAEs) were also to be replaced, she might reasonably balk much harder. She might reasonably doubt that the survivor of this process would be her, or even human, in any sense meaningful to her.
Would it be possible to have an intelligent being whose cognition of the world is mediated by no SIAPs? I suspect not, if that being is well-designed. See above on “why would evolution build beings that sense internal states.”
If you’ve read all 3 posts, you’ve probably gotten the point of the Gasoline Gal story by now. But let me go through some of the mappings from source to target in that analogy. A car that, when you take it on a tour, accelerates well, handles nicely, makes the right amount of noise, and so on—one that passes the touring test (groan) - is like a being that can identify objective facts in its environment. An internal combustion engine is like Carol’s subjective cold-sensation in her left hand—one way among others to bring about the externally-observable behavior. (By “externally observable” I mean “without looking under the hood”.) In Carol’s case, that behavior is identifying 20 C water. In the engine’s case, it’s the acceleration of the car. Note that in neither case is this internal factor causally inert. If you take it away and don’t replace it with anything, or even if you replace it with something that doesn’t fit, the useful external behavior will be severely impaired. The mere fact that you can, with a lot of other re-working, replace an internal combustion engine with a fuel cell, does not even begin to show that the engine does nothing.
And Gasoline Gal’s passion for internal combustion engines is like my—and I dare say most people’s—attachment to the subjective internal aspects of perception and emotion that we know and love. The words and concepts we use for these things—pain, passion, elation, for some easier examples—refer to the actual processes in human beings that drive the related behavior. (Regarding which, neurology has more to learn.) As I mentioned in my last post, a desire can form with a particular referent based on early experience, and remain focused on that event-type permanently. If one constructs radically different processes that achieve similar external results, analogous to the fuel-cell driven car, one gets radically different subjectivity—which we can only denote by pointing to both the “under the hood” construction of these new beings and the behavior associated with their SIAPs together.
Needless to say, this complicates uploading.
One more thing: are SIAPs qualia? A substantial minority of philosophers, or maybe a plurality, uses “qualia” in a sufficiently similar way that I could probably use that word here. But another substantial minority loads it with additional baggage. And that leads to pointless misunderstandings, pigeonholing, and straw men. Hence, “SIAPs”. But feel free to use “qualia” in the comments if you’re more comfortable with that term, bearing my caveats in mind.
Wait, are you claiming that Carol’s sight of the thermometer is objective knowledge about the world, compared to the hot-or-coldness of her hands, which is subjective? That seems definitely wrong. Carol sees by interpreting some visual SIAPs to be about a thermometer, which she has learned is best explained as some function of an objective temperature, just as her thermal SIAPs are best explained as a function of the objective temperature.
I agree with you (I think; I would phrase it a bit differently). Sorry if I was unclear. The subjective aspects are especially readily identifiable in the hot hand / cold hand scenario, which is why I picked it to be my example.
I like this post a lot. It’s very clear and seems to be pointing to something. So did the first post. By contrast the second post felt more handwavey to me. That’s some indication that you may be missing a step in your chain of reasoning. You may want to mentally walk through the second post “showing your work” in more detail as a double check in case you missed something.
Thanks for the feedback. The second post was handwavey—or at least, link-wavey. I basically just gave some links and a brief discussion to indicate that a “causal theory of reference” should be the go-to way to try to understand the reference of desires.
All of which was to support and help interpret one sentence of my last post:
But maybe the two stories (Gasoline Gal and Carol) do that better than appealing to a causal theory of reference.
NooOOoo. holds up talisman, flings holy water
I suspect this is spirit vs. matter dualism implicitly creeping up on us again. (When people start feeling antsy about uploading, I think the reason has almost always involved either dualism or continuity concerns.)
There is no dichotomy whatsoever between “internal=subjective” and “external=objective” feelings. There’s no particular difference between seeing red and feeling happy; both are “qualia”. Your self-perception, which you then go on to process into other forms that you mentally label as “objective” (I see red, therefore there are probably wavelengths of a certain frequency; I feel angry, so I’m probably flooded with cortisol and my amygdala is probably active), is also qualia. It’s all qualia all the way down, and the distinction between “subjective” and “objective”, while not practically meaningless, is philosophically meaningless in this context.
Evolution needs organisms to respond to internal states for precisely the same reason we need to respond to external states—because both of these represent objective facts about the world which must be responded to.
Could you create a being which was behaviorally identical while being radically different “under the hood”? I guess that depends on what you mean by radically different. The important part of cars is that they move, not that they burn fuel, and that function is fulfilled. The important part of a human isn’t the neurons, but the thinking and feeling.
To me, if an object gives behavioral indications of thinking and feeling, that’s sufficient (but not necessary) to consider it a being. Maybe not the same as a second being which behaves identically but is designed differently: but then, in some sense I’m currently a slightly different being than I was this morning anyway, the qualia from morning-me is gone forever, clinging to a sense of continuity is hopeless, etc.
At the end of the day you do have to decide which parts of the car you value. As I said in your previous post, Gasoline Gal isn’t necessarily irrational for valuing combustion. It’s not wrong to intrinsically value the highly specific biology that drives us, but I just personally value the computational processes implemented by those cells.
I’m open to changing my mind about holding end behavior as an entirely sufficient standard as to whether the processes I value are implemented (after all, we already have human deception as an example where internal states are not what is externally represented, at least temporarily), but if I open the hood and it’s performing basically equivalent computations, I’m not going to complain about whether it does so via ion influx and phosphorylation or via logic gates. I don’t believe there’s a fundamental difference between creating a being that perceives red and creating a being that perceives its own emotional states: if the former doesn’t need highly specified biological processes, then why should the latter? Referring to the analogy, Gasoline Gal at least doesn’t care what kind of metal the engine is made from, so if we can at least agree that computations and information processes are the important thing, then it’s just a question of figuring out which ones are important / simulating them as closely as possible just to be sure.
I’ll grant that, like Gasoline Gal, we might prefer not to use the bits and bytes over something more natural-seeming, because the more “continuity/similarity” there is, the less unsettling the whole thing is. But I don’t want to grant that a being implemented in a radically different manner which nevertheless behaves like us doesn’t feel like us.
Hold the holy water, and please stop attributing views to me I don’t hold and didn’t imply. There’s no dichotomy. “Subjective” can just mean “in your head”; that’s consistent with there being objective facts about it.
I lean heavily toward the view that information processes are the important thing. That’s why I made Gasoline Gal not care about the metals. Note, information processes are algorithms, not just functions. For uploading, that means whole brain emulation. In my underinformed opinion, whole brain emulation is not the Way to AGI if you just want AGI. At some point, then, AGI will be available while WBE systems will be way behind; and so, uploaders will at least temporarily face a deeply serious choice on this issue.
Are you suggesting that mind uploading to a non-WBE platform will be available before WBE? I don’t think this is a common belief; uploading almost exclusively refers to WBE. See, for instance, Sandberg and Bostrom (2008), who don’t distinguish between WBE and uploading:
I think it is indeed a common belief that AGI may come before WBE, but as far as I know, it is not commonly believed that AGI will provide an alternative route to WBE, because human minds will likely not be feasibly translatable to the AGI architectures that we come up with.
Good question, thanks. Yes, I do think that “mind uploading”, suitably loosely defined, will be available first on a non-WBE platform. I’m assuming that human-level AGI relatively quickly becomes superhuman-level, to the point where imitating the behavior of a specific individual becomes a possibility.
I see. In GAZP vs. GLUT, Eliezer argues that the only way to feasibly create a perfect imitation of a specific human brain is to do computations that correspond in some way to the functional roles behind mental states, which will produce identical conscious experiences according to functionalism.
Sorry, I didn’t mean to suggest that you actually hold that view. What I did mean to suggest is that dualist intuitions have snuck into your ideas without announcing themselves as such to you. (Hence the holy water joke—I was trying to say that I’m being religiously paranoid about avoiding implicit dualism even though you don’t actually support that view).
Here, I’ll try to be more explicit as to why I think you’re implicitly expressing dualism:
What would it even mean for an organism to “just know the objective facts of survival and reproduction, and be done with it”? How can any system “access objective facts” about anything? All systems containing representations of the outside world must do so via modifications and manipulations of, and interactions between, internal components (hopefully in a manner which interacts with and corresponds to things “external” to the system). Divining the so-called “objective facts” from these internal states is a complicated and always imperfect calculation.
You’ve framed the subjective/objective dichotomy as “The water may feel very cold, but I know it’s 20C”. As you said, an error correction is being performed: “Some of my indicators are giving signals ordinarily associated with cold, but given what my other indicators say, I’ve performed error correction processes and I know it’s actually not.”
All of which is fine. The “dualist” part is where you imply that it would be in any way possible to arrive at this 20C calculation without sensing internal states, to just know the objective facts of survival and reproduction and be done with it. It’s not possible to do that without getting a philosophical zombie.
Take a simple information process, such as a light-switch. Whether or not the circuit is connected “represents” the state of the switch, the behavioral output being the light bulb turning on. The circuit never gets objective facts about the switch, all it gets is the internal state of whether or not it is connected—a “subjective” experience.
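A minimal sketch of that point, in hypothetical code (the names and structure are made up for illustration): the bulb logic never touches the switch itself, only the circuit’s internal record of whether it is connected.

```python
class LightCircuit:
    """The circuit's whole epistemic world is its own internal state."""

    def __init__(self):
        self.connected = False          # internal representation of the switch

    def sense(self, switch_is_up):
        # The only point of contact with the "objective" switch:
        # updating the internal state. Nothing downstream ever
        # touches the switch itself.
        self.connected = switch_is_up

    def bulb_lit(self):
        # Behavior is computed from the representation, not from the switch.
        return self.connected
```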
Your main point: “I value the fact that my indicators gave me signals ordinarily associated with cold and then I had to go through an error correction process, rather than just immediately know it’s 20C”, is interesting, good, and correct.
I agree, it can’t be irrational to value things—you may put your locus of valued self-identity in your information processes (the combustion), or in your biology itself (the metal), or in your behavioral output (the movement of the vehicle). I’m sympathetic to the view of valuing information processes in addition to behavioral output myself: after all, coma patients are still people if they have various types of brain activity despite comatose behavior.
But here again, my sense of unease with implicit dualism flares up. It’s all well and good to say that the survivor of this process isn’t her (she may draw her locus of identity wherever she likes; it need not be her behavior), but if the result of an information process is behaviorally identical to a person, there’s something very off about saying that these information processes do not meaningfully contain a person.
I use “person” here in the sense of “one whose stated thoughts and feelings and apparent preferences are morally relevant and should be considered the same way we would ideally consider a natural human.”
Of course, it’s still not irrational to not value things, and you might actually say that to count as a person you need certain information processes or certain biology—I just think both of those values are wrong. I have a dream that beings are judged not by their algorithm, but by their behavior (additional terms and restrictions apply).
Is that better?
The subjective cold-sensation in her left hand should be part of the observable behavior, surely? To mix the analogies, if it were my job to disguise the fuel cell as a combustion engine, I certainly would feel like I had to include this subjective cold-sensation part.
But I’m not familiar enough with the discussions about uploading to know to what extent people intend to make the fuel cell keep the subjectively observable properties of a combustion engine.
Hmmm, this is somewhat more personally relevant, as I got the idea (from the somewhat weak evidence of a TV show) that humans can learn to echolocate, and pursued the skill explicitly to explore the SIAP I would acquire, i.e. I wanted to know how it feels. While I would in theory be able to work with a computer and pen and paper to answer equivalent objective questions, that kind of route wouldn’t provide as much experimental excitement (i.e. I already know what desk paperwork feels like, and it is boring).
It’s also interesting to note that I could know beforehand that my “beforehand” SIAP knowledge was insufficient to tell me what it would feel like afterwards. That is, I couldn’t imagine what it would feel like. That seems pretty close to the thought experiment about the color-blind neuroscientist trained in color perception.
In the article the focus seems to be on SIAPs that are directed within the mind. Is “ordinary perception” supposed to be a different beast? Because while my brain didn’t receive any additional data (I had the same natural ears), I was in essence able to get a far more vivid sonic map of my surroundings. It certainly felt like boosting perception of the objective world. I could do things like hear shapes around corners (something you can’t do visually; well, you can see past a greenhouse’s corner, but that still feels pretty different).
However, if the article is trying to argue that SIAP mechanics are somehow a valid “clinging target”, I don’t think they are any more valid than beliefs. You could try to identify as a person who holds certain beliefs, but that doesn’t seem like an especially right way to identify. From what I got out of “organically expanding my SIAP horizon”: if somebody guaranteed me that my uploaded mind would SIAP in the same way, I would more likely see that as a lost opportunity. I want 360-degree vision, and full-sphere vision could be great, and experimenting with trying to see from multiple points of view at once would also be a cool experience I would be drawn to. But given the chance I would like to do the integration myself, from the inside, rather than having it done as an outside job (i.e. a vanilla copy as the playroom starting point is fine and cool).
I guess it would also be somewhat sad if the echolocation were lost in translation. But it feels like it would not be that big a deal to rebuild it.
Gaining new aspects of experience is cool, especially if you gain new abilities to navigate the world too. It’s only losing others that I’m worried about.