Why would evolution build beings that sense their internal states? Why not just have the organism know the objective facts of survival and reproduction, and be done with it? One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts. But another is that this separation of sense-data into “subjective” and “objective” might help us learn to overcome certain sorts of perceptual illusion—as Carol does, above. And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction—like pain, or feelings of erotic love. This last hypothesis could explain why we value some subjective aspects so much, too.
NooOOoo. holds up talisman, flings holy water
I suspect this is spirit vs. matter dualism implicitly creeping up on us again. (When people start feeling antsy about uploading, I think the reason has almost always involved either dualism or continuity concerns.)
There is no dichotomy whatsoever between “internal = subjective” and “external = objective” feelings. There’s no particular difference between seeing red and feeling happy—both are “qualia”. The self-perception that you then go on to process into other forms which you mentally label as “objective” (I see red, therefore there is probably light of a certain wavelength; I feel angry, so I’m probably flooded with cortisol and my amygdala is probably active) is also qualia. It’s qualia all the way down, and the distinction between “subjective” and “objective”, while not practically meaningless, is philosophically meaningless in this context.
Evolution needs organisms to respond to internal states for precisely the same reason we need to respond to external states—because both of these represent objective facts about the world which must be responded to.
Could you create a being which was behaviorally identical while being radically different “under the hood”? I guess that depends on what you mean by radically different. The important part of cars is that they move, not that they burn fuel, and that function is fulfilled. The important part of a human isn’t the neurons, but the thinking and feeling.
To me, if an object gives behavioral indications of thinking and feeling, that’s sufficient (but not necessary) to consider it a being. Maybe not the same as a second being which behaves identically but is designed differently: but then, in some sense I’m currently a slightly different being than I was this morning anyway, the qualia from morning-me are gone forever, clinging to a sense of continuity is hopeless, etc.
At the end of the day you do have to decide which parts of the car you value. As I said in your previous post, Gasoline Gal isn’t necessarily irrational for valuing combustion. It’s not wrong to intrinsically value the highly specific biology that drives us, but I just personally value the computational processes implemented by those cells.
I’m open to changing my mind about holding end behavior as an entirely sufficient standard for whether the processes I value are implemented (after all, we already have human deception as an example where internal states are not what is externally represented, at least temporarily). But if I open the hood and it’s performing basically equivalent computations, I’m not going to complain about whether it does so via ion influx and phosphorylation or via logic gates. I don’t believe there’s a fundamental difference between creating a being that perceives red and creating a being that perceives its own emotional states; if the former doesn’t need highly specified biological processes, why should the latter? Referring to the analogy, Gasoline Gal at least doesn’t care what kind of metal the engine is made from, so if we can at least agree that computations and information processes are the important thing, then it’s just a question of figuring out which ones are important / simulating them as closely as possible just to be sure.
I’ll grant that, like Gasoline Gal, we might prefer not to use the bits and bytes over something more natural-seeming, because the more “continuity/similarity” there is, the less unsettling the whole thing is. But I don’t want to grant that a being implemented in a radically different manner, which nevertheless behaves like us, doesn’t feel like us.
Hold the holy water, and please stop attributing views to me I don’t hold and didn’t imply. There’s no dichotomy. “Subjective” can just mean “in your head”; that’s consistent with there being objective facts about it.
Referring to the analogy, Gasoline Gal at least doesn’t care what kind of metal the engine is made from, so if we can at least agree that computations and information processes are the important thing, then it’s just a question of figuring out which ones are important / simulating them as closely as possible just to be sure.
I lean heavily toward the view that information processes are the important thing. That’s why I made Gasoline Gal not care about the metals. Note, information processes are algorithms, not just functions. For uploading, that means whole brain emulation. In my underinformed opinion, whole brain emulation is not the Way to AGI if you just want AGI. At some point, then, AGI will be available while WBE systems will be way behind; and so, uploaders will at least temporarily face a deeply serious choice on this issue.
For uploading, that means whole brain emulation. In my underinformed opinion, whole brain emulation is not the Way to AGI if you just want AGI. At some point, then, AGI will be available while WBE systems will be way behind; and so, uploaders will at least temporarily face a deeply serious choice on this issue.
Are you suggesting that mind uploading to a non-WBE platform will be available before WBE? I don’t think this is a common belief; uploading almost exclusively refers to WBE. See, for instance, Sandberg and Bostrom (2008), who don’t distinguish between WBE and uploading:
Whole brain emulation, often informally called “uploading” or “downloading”, has been the subject of much science fiction and also some preliminary studies.
I think it is indeed a common belief that AGI may come before WBE, but as far as I know, it is not commonly believed that AGI will provide an alternative route to WBE, because human minds will likely not be feasibly translatable to the AGI architectures that we come up with.
Good question, thanks. Yes, I do think that “mind uploading”, suitably loosely defined, will be available first on a non-WBE platform. I’m assuming that human-level AGI relatively quickly becomes superhuman-level, to the point where imitating the behavior of a specific individual becomes a possibility.
I see. In GAZP vs. GLUT, Eliezer argues that the only way to feasibly create a perfect imitation of a specific human brain is to do computations that correspond in some way to the functional roles behind mental states, which will produce identical conscious experiences according to functionalism.
Sorry, I didn’t mean to suggest that you actually hold that view. What I did mean to suggest is that dualist intuitions have snuck into your ideas without announcing themselves as such to you. (Hence the holy water joke—I was trying to say that I’m being religiously paranoid about avoiding implicit dualism even though you don’t explicitly support that view.)
Here, I’ll try to be more explicit as to why I think you’re implicitly expressing dualism:
Why not just have the organism know the objective facts of survival and reproduction, and be done with it?
What does that even mean? How can any system “access objective facts” about anything? All systems containing representations of the outside world must do so via modifications and manipulations of, and interactions between, internal components (hopefully in a manner which interacts with and corresponds to things “external” to the system). Divining the so-called “objective facts” from these internal states is a complicated and always imperfect calculation.
You’ve framed the subjective/objective dichotomy as “The water may feel very cold, but I know it’s 20C”. As you said, an error correction is being performed: “Some of my indicators are giving signals ordinarily associated with cold, but given what my other indicators say, I’ve performed error correction processes and I know it’s actually not.”
All of which is fine. The “dualist” part is where you imply that it would be in any way possible to arrive at this 20C calculation without sensing internal states, to just know the objective facts of survival and reproduction and be done with it. It’s not possible to do that without getting a philosophical zombie.
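To make that concrete, here is a toy sketch (mine, not from the post—the weights, numbers, and function name are all made up) of what error correction over purely internal indicators might look like. Note that the agent never reads the objective temperature; it only combines its own signals:

```python
# Toy illustration: the agent never touches the "objective" water temperature
# directly. The "I know it's actually about 20 C" conclusion is itself computed
# from internal states.

def estimate_water_temp(skin_feels_cold: bool, thermometer_reading_c: float) -> float:
    """Combine two internal indicators into a best-guess temperature.

    skin_feels_cold       -- an interoceptive signal (unreliable after a hot day)
    thermometer_reading_c -- a remembered perception of a thermometer (also internal!)
    """
    skin_estimate = 10.0 if skin_feels_cold else 25.0  # what the skin signal alone suggests
    # "Error correction": weight the more reliable indicator heavily.
    return 0.9 * thermometer_reading_c + 0.1 * skin_estimate

print(estimate_water_temp(skin_feels_cold=True, thermometer_reading_c=20.0))  # -> 19.0
```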
Take a simple information process, such as a light-switch. Whether or not the circuit is connected “represents” the state of the switch, the behavioral output being the light bulb turning on. The circuit never gets objective facts about the switch; all it gets is the internal state of whether or not it is connected—a “subjective” experience.
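In the same toy spirit (again purely illustrative, names mine), the point is that the circuit’s entire “knowledge” of the switch is just its own internal state:

```python
class Circuit:
    def __init__(self):
        self.connected = False  # internal state -- all the circuit ever "has"

    def sense_switch(self, switch_is_up: bool) -> None:
        # Causal coupling to the world: the objective switch position only
        # ever enters as a modification of an internal component.
        self.connected = switch_is_up

    def bulb_is_lit(self) -> bool:
        # Behavioral output is computed from internal state alone.
        return self.connected

c = Circuit()
c.sense_switch(True)
print(c.bulb_is_lit())  # True -- but only because the internal state changed
```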
Your main point: “I value the fact that my indicators gave me signals ordinarily associated with cold and then I had to go through an error correction process, rather than just immediately know it’s 20C”, is interesting, good, and correct.
I agree, it can’t be irrational to value things—you may put your locus of valued self-identity in your information processes (the combustion), or in your biology itself (the metal), or in your behavioral output (the movement of the vehicle). I’m sympathetic to the view of valuing information processes in addition to behavioral output myself: after all, coma patients are still people if they have various types of brain activity despite comatose behavior.
She might reasonably doubt that the survivor of this process would be...human, in any sense meaningful to her.
But here again, my sense of unease with implicit dualism flares up. It’s all well and good to say that the survivor of this process isn’t her (she may draw her locus of identity wherever she likes, it need not be her behavior), but if the result of an information process is behaviorally identical to a person, there’s something very off about saying that these information processes do not meaningfully contain a person.
I use “person” here in the sense of “one whose stated thoughts and feelings and apparent preferences are morally relevant and should be considered the same way we would ideally consider a natural human.”
Of course, it’s still not irrational to not value things, and you might actually say that to count as a person you need certain information processes or certain biology—I just think both of those values are wrong. I have a dream that beings are judged not by their algorithm, but by their behavior (additional terms and restrictions apply).
Is that better?