Note: I think there’s also a specific philosophical reason to think consciousness is pretty ubiquitous and fundamental—the hard problem of consciousness. The ‘we’re investing too much metaphysical importance into our pet obsession’ thing isn’t the only reason anyone thinks consciousness (or very-consciousness-ish things) might be ubiquitous.
But per illusionism, I think this philosophical reason turns out to be wrong in the end, leaving us without a principled reason to anthropomorphize / piston-steam-engine-omorphize the universe like that.
It’s true (on your view and mine) that there’s a pervasive introspective, quasi-perceptual illusion humans suffer about consciousness.
But the functional properties of consciousness (or of ‘the consciousness-like thing we actually have’) are all still there, behind the illusion.
Swapping from the illusory view to the almost-functionally-identical non-illusory view, I strongly expect, will not cause us to stop caring about the underlying real things (thoughts, and feelings, and memories, and love, and friendship).
And if we still care about those real things, then our utility function is still (I claim) pretty obsessed with some very specific and complicated engines/computations. (Indeed, a lot more specific and complicated than real-world piston steam engines.)
I’d expect it to mostly look more like how our orientation to water and oars changes when we realize that the half-submerged oar isn’t really broken.
I don’t expect the revelation to cause humanity to replace its values with such vague values that we reshape our lives around slightly adjusting the spatial configurations of rocks or electrons, because our new ‘generalized friendship’ concept treats some common pebble configurations as more or less ‘friend-like’, more or less ‘asleep’, more or less ‘annoyed’, etc.
(Maybe we’ll do a little of that, for fun, as a sort of aesthetic project / a way of making the world feel more beautiful. But that gets us closer to my version of ‘generalizing human values to apply to unconscious stuff’, not Brian’s version.)
> Swapping from the illusory view to the almost-functionally-identical non-illusory view, I strongly expect, will not cause us to stop caring about the underlying real things (thoughts, and feelings, and memories, and love, and friendship).
Putting aside my other disagreements for now (and I appreciate the other things you said), I’d like to note that I see my own view as “rescuing the utility function” far more than a view which asserts that non-human animals are largely unconscious automatons.
To the extent that learning to be a reductionist shouldn’t radically reshape what we care about, it seems clear to me that we shouldn’t stop caring about non-human animals, especially larger ones like pigs. I think most people, including the majority of people who eat meat regularly, think that animals are conscious. And I wouldn’t expect personally having a dog or a cat to correlate substantially negatively with believing that animals are conscious (a negative correlation we’d weakly expect if our naive impressions track the truth and non-human animals aren’t conscious).
There have been quite a few surveys about this, though I’m not quickly coming up with any good ones right now (besides perhaps this survey, which found that 47% of people supported a ban on slaughterhouses, a result which was replicated, though the figure is perhaps only about one third once you subtract those who don’t know what a slaughterhouse is).
> To the extent that learning to be a reductionist shouldn’t radically reshape what we care about, it seems clear to me that we shouldn’t stop caring about non-human animals, especially larger ones like pigs. I think most people, including the majority of people who eat meat regularly, think that animals are conscious.
This seems totally wrong to me.
I’m an illusionist, but that doesn’t mean I think that humans’ values are indifferent between the ‘entity with a point of view’ cluster in thingspace (e.g., typical adult humans), and the ‘entity with no point of view’ cluster in thingspace (e.g., braindead humans).
Just the opposite: I think there’s an overwhelmingly large and absolutely morally crucial difference between ‘automaton that acts sort of like it has morally relevant cognitive processes’ (say, a crude robot or a cartoon hand-designed to inspire people to anthropomorphize it), and ‘thing that actually has the morally relevant cognitive processes’.
It’s a wide-open empirical question whether, e.g., dogs are basically ‘automata that lack the morally relevant cognitive processes altogether’, versus ‘things with the morally relevant cognitive processes’. And I think ‘is there something it’s like to be that dog?’ is actually a totally fine intuition pump for imperfectly getting at the kind of difference that morally matters here, even though this concept starts to break when you put philosophical weight on it (because of the ‘hard problem’ illusion) and needs to be replaced with a probably-highly-similar functional equivalent.
Like, the ‘is there something it’s like to be X?’ question is subject to an illusion in humans, and it’s a really messy folk concept that will surely need to be massively revised as we figure out what’s really going on. But it’s surely closer to asking the morally important question about dogs, compared to terrible, overwhelmingly morally unimportant questions like ‘can the external physical behaviors of this entity trick humans into anthropomorphizing the entity and feeling like it has a human-ish inner life?’.
Tricking humans into anthropomorphizing things is so easy! What matters is what’s in the dog’s head!
Like, yes, when I say ‘the moral evaluation function takes the dog’s brain as an input, not the cuteness of its overt behaviors’, I am talking about a moral evaluation function that we have to extract from the human’s brain.
But the human moral evaluation function is a totally different function from the ‘does-this-thing-make-noises-and-facial-expressions-that-naturally-make-me-feel-sympathy-for-it-before-I-learn-any-neuroscience?’ function, even though both are located in the human brain.
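A toy sketch, in Python, of the distinction I mean; every predicate, cue, and number below is made up purely for illustration, not a claim about how either function is actually computed in the brain:

```python
# Purely illustrative toy code: the predicates, cues, and numbers are made up.

def naive_sympathy_response(observed_behavior: set) -> float:
    """The pre-neuroscience gut reaction: how strongly the entity's *surface*
    behavior (cute noises, facial expressions) pulls on human sympathy."""
    cute_cues = {"big eyes", "whimpering", "wagging tail"}
    return len(observed_behavior & cute_cues) / len(cute_cues)

def moral_evaluation(brain_state: dict) -> float:
    """What the (idealized) human moral evaluation function cares about:
    whether the morally relevant cognitive machinery is actually running
    inside the entity, however cute or un-cute its exterior."""
    return 1.0 if brain_state.get("runs_morally_relevant_processes") else 0.0

# Both functions are 'located in the human brain', but they are different
# functions: a hand-animated cartoon can max out the first while scoring
# zero on the second.
cartoon_behavior = {"big eyes", "whimpering", "wagging tail"}
cartoon_brain = {"runs_morally_relevant_processes": False}
print(naive_sympathy_response(cartoon_behavior))  # 1.0 -- easy to trick
print(moral_evaluation(cartoon_brain))            # 0.0 -- nothing morally relevant inside
```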
Thinking (with very low confidence) about an idealized, heavily self-modified, reflectively consistent, CEV-ish version of me:
If it turns out that squirrels are totally unconscious automata, then I think Ideal Me would probably at least weakly prefer to not go around stepping on squirrels for fun. I think this would be for two reasons:
1. The kind of reverence-for-beauty that makes me not want to randomly shred flowers to pieces. Squirrels can be beautiful even if they have no moral value. Gorgeous sunsets plausibly deserve a similar kind of reverence.
2. The kind of disgust that makes me not want to draw pictures of mutilated humans. There may be nothing morally important about the cognitive algorithms in squirrels’ brains; but squirrels still have a lot of anatomical similarities to humans, and the visual resemblance between the two is reason enough to be grossed out by roadkill.
In both cases, these don’t seem like obviously bad values to me. (And I’m pretty conservative about getting rid of my values! Though a lot can and should change eventually, as humanity figures out all the risks and implications of various self-modifications. Indeed, I think the above descriptions would probably look totally wrong, quaint, and confused to a real CEV of mine; but it’s my best guess for now.)
In contrast, conflating the moral worth of genuinely-totally-conscious things (insofar as anything is genuinely conscious) with genuinely-totally-unconscious things seems… actively bad, to me? Not a value worth endorsing or protecting?
Like, maybe you think it’s implausible that squirrels, with all their behavioral complexity, could have ‘the lights off’ in the way that a Roomba with a cute face glued to it has ‘the lights off’. I disagree somewhat, but I find that view vastly less objectionable than ‘it doesn’t even matter what the squirrel’s mind is like, it just matters how uneducated humans naively emotionally respond to the squirrel’s overt behaviors’.
Maybe a way of gesturing at the thing is: Phenomenal consciousness is an illusion, but the illusion adds up to normality. It doesn’t add up to ‘therefore the difference between automata / cartoon characters and things-that-actually-have-the-relevant-mental-machinery-in-their-brains suddenly becomes unimportant (or even less important)’.