This comment expands how you’d go about reprogramming someone in this way with another layer of granularity, which is certainly interesting on its own merits, but it doesn’t strongly support your assertion about what it would feel like to be that someone. What makes you think this is how qualia work? Have you been performing sinister experiments in your basement? Do you have magic counterfactual-luminosity-powers?
I think Eliezer is simply suggesting that qualia don’t in fact exist in a vacuum. Green feels the way it does partly because it’s the color of chlorophyll. In a universe where plants had picked a different color for chlorophyll (melanophyll, say), with everything else (per impossibile) held constant, we would associate an at least slightly different quale with green and with black, because part of how colors feel is that they subtly remind us of the things that are most often colored that way. Similarly, part of how ‘goodness’ feels is that it imperceptibly reminds us of the extension of good; if that extension were dramatically different, then the feeling would (barring any radical redesigns of how associative thought works) be different too. In a universe where the smallest birds were ten feet tall, thinking about ‘birdiness’ would involve a different quale for the same reason.
It sounds to me like you don’t think the answer had anything to do with the question. But to think that, you’d pretty much have to discard both the functionalist and physicalist theories of mind, and go full dualist/neutral monist; wouldn’t you?
I think I’ll go with this as my reply—“Well, imagine that you lived in a monist universe—things would pretty much have to work that way, wouldn’t they?”
Possibly (this is total speculation) Eliezer is talking about the feeling of one’s entire motivational system (or some large part of it), while you’re talking about the feeling of some much narrower system that you identify as computing morality; so his conception of a Clippified human wouldn’t share your terminal-ish drives to eat tasty food, be near friends, etc., and the qualia that correspond to wanting those things.
The Clippified human categorizes foods by a similar similarity metric—still believes that fish tastes more like steak than like chocolate—but of course is motivated to eat only insofar as staying alive helps to make more paperclips. They have taste, but not tastiness. Actually, that might make a surprisingly good metaphor for the difficulty some people have in comprehending how Clippy can understand your pain and not care—maybe I’ll try it on the other end of that Facebook conversation.
The metaphor seems like it could lose most of its effectiveness on people who have never applied the outside view to how taste and tastiness feel from the inside—who have never realized that chocolate tastes good because their brain fires “good taste” when it perceives the experience “chocolate taste”. The predictions that follow from the resulting cognitive dissonance (e.g., bafflement at the idea that chocolate “tastes bad for others”) match my observations, so I suspect this blind spot is common among non-rationalists. If the Facebook conversation you mention is with people who haven’t crossed that inferential gap yet, the metaphor might not prove very useful.