Moreover, it’s observably the case that consciousness-ascription is hyperactive. We readily see faces and minds in natural phenomena. We readily imagine simple stick-figures in comic strips experiencing rich mental lives.
A concern I have with the whole consciousness discussion in EA-adjacent circles is that people seem to consider their empathic response to be important evidence about the distribution of qualia in Nature, despite the obvious hyperactivity.
This post is the single most persuasive piece of writing that I have encountered with regard to talking me out of my veganism.
Particularly the idea that humans’ having conscious experience might be a contingent fact of human evolution, such that other, related, intelligent species in nearby counterfactuals don’t have anything that it is like to be them.
Considering that possibility, which seems hard to evaluate given that I have only one datapoint (a datapoint obviously influenced by anthropic considerations), makes it seem much more plausible that there’s nothing it is like to be a cow, and gives me a sense of a planet Earth that is much more dead and empty than the mental world I had been inhabiting ten minutes ago.
If you cared to write up more of your understanding of “somebody being home”, I would read it with avid interest. It seems more likely than anything else I can think of (aside from perhaps a similar post by Eliezer?) to change my mind with regard to veganism and how I should weigh the value of animals in factory farms in my philanthropic budget.
That said, I, at least, am not making this error, I think:
Another concern I have is that most people seem to neglect the difference between “exhibiting an external behavior in the same way that humans do, and for the same reasons we do”, and “having additional follow-on internal responses to that behavior”.
An example: If we suppose that it’s very morally important for people to internally subvocalize “I sneezed” after sneezing, and you do this whenever you sneeze, and all your (human) friends report that they do it too, it would nonetheless be a mistake to see a dog sneeze and say: “See! They did the morally relevant thing! It would be weird to suppose that they didn’t, when they’re sneezing for the same ancestral reasons as us!”
The ancestral reasons for the subvocalization are not the same as the ancestral reasons for the sneeze; and we already have an explanation for why animals sneeze, that doesn’t invoke any process that necessarily produces a follow-up subvocalization.
None of this rules out that dogs subvocalize in a dog-mental-language, on its own; but it does mean that drawing any strong inferences here requires us to have some model of why humans subvocalize.
Seeing a pig “scream in pain” when you cut off its tail does not make it a foregone conclusion that the pig is experiencing anything at all, or anything like what pain means to me. But it does seem like a pretty good guess.
And I definitely don’t look at a turtle doing any kind of planning at all and think “there must be an inner life in there!”
I’m real uncertain about what consciousness is and where it comes from, and there is an anthropic argument (which I don’t know how to think clearly about) that it is rare among animals. But from my state of knowledge, it seems like a better than even chance that many mammals have some kind of inner listener. And if they have an inner listener at all, pain seems like one of the simplest and most convergent experiences to have.
Which makes industrial factory farming an unconscionable atrocity, much worse than American slavery. It is not okay to treat conscious beings like that, no matter how dumb they are, or how little they narrativize about themselves.
My understanding is that (assuming animal consciousness) there are 100 billion experience-years in factory farms every year.
It seems to me that, in my state of uncertainty, it is extremely irresponsible to say “eh, whatever” to the possible moral atrocity. We should shut up and multiply. My uncertainty about animal consciousness only reduces the expected number of experience-years of torture by a factor of 2 or so.
An expected 50 billion moral patients getting tortured as a matter of course is the worst moral catastrophe ever perpetrated by humans (with the exception of our rush to destroy all the value of the future).
Even if someone has more philosophical clarity than I do, they have to be confident at a level of around 100,000 to 1 that livestock animals are not experiencing beings before the expected value of this moral catastrophe starts being of comparable scale to well-known moral catastrophes like the Holocaust, American slavery, and the Mongol conquests. Anything less than that, and the expected harm of industrial meat production beats every other moral catastrophe by orders of magnitude (again, with the exception of x-risk).
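To make the “shut up and multiply” arithmetic explicit, here is a minimal sketch in Python. It treats the 100-billion-experience-years figure above as given, and it uses a rough one-million-victim-year scale as a stand-in for the historical atrocities just mentioned; that stand-in number is an assumption for illustration, not an estimate from the post or the literature.

```python
# Expected-value sketch for the paragraph above. The inputs are the
# 100-billion figure from this comment and an illustrative ~50% credence in
# animal consciousness; the 1-million "victim-year" scale for historical
# atrocities is an assumed placeholder, not a sourced estimate.

experience_years_per_year = 100e9  # claimed experience-years in factory farms annually
p_conscious = 0.5                  # roughly even odds that these animals are moral patients

expected_suffering_years = p_conscious * experience_years_per_year
print(f"Expected experience-years of suffering per year: {expected_suffering_years:.1e}")
# -> 5.0e+10, i.e. roughly 50 billion

# How low would the credence have to be before the expected harm drops to
# the (assumed) ~1-million-victim-year scale of a historical atrocity?
atrocity_scale_years = 1e6
required_credence = atrocity_scale_years / experience_years_per_year
print(f"Required credence: about 1 in {1 / required_credence:,.0f}")
# -> about 1 in 100,000
```

The point of the sketch is just that the factor-of-2 discount from my uncertainty barely moves the conclusion; only a credence around 1 in 100,000 would.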
(Admittedly there are some assumptions here about the moral value of pain and fear, relative to other good and bad things that can happen to a person, which might influence how we weight the experiences of animals compared to people. But “pain and terror are really bad, and it is really bad for someone to persistently experience them” seems like a not-very-crazy assumption.)
Anyway, this is a digression from the point of this post, but I apparently had a rant in me, and I don’t want animal welfare considerations to be weak-manned. A concern for animal welfare isn’t fundamentally based on shoddy philosophy. It seems to me that it is a very natural starting point, given our state of philosophical confusion.
Assuming Yudkowsky’s position is quite similar to Nate’s, which it sounds like it is given what both have written, I’d recommend reading this debate Yud was in to get a better understanding of this model[1]. Follow up on the posts Yud, Luke, and Rob mention if you’d care to know more. Personally, I’m closer to Luke’s position on the topic. He gives a clear and thorough exposition here.
Also, I anticipate that if Nate does have a fully fleshed-out model, he’d be reluctant to share it. I think Yud said he didn’t wish to give too many specifics, as he was worried trolls might implement a maximally suffering entity. And, you know, 4chan exists. Plenty of people there would be inclined to do such a thing to signal disbelief or simply to upset others.
I think this kind of model would fall under the illusionism school of thought. “Consciousness is an illusion” is the motto. I parse it as “the concept you have of consciousness is an illusion, a persistent part of your map that doesn’t match the territory. Just as you may be convinced that the two tabletops in the classic rotated-table illusion are of different shapes, even after rotating and matching them onto one another, so too may you be convinced that you have this property known as consciousness.” That doesn’t mean the territory has nothing like consciousness in it, just that it doesn’t have the exact form you believed it to have. You can understand on a deliberative level how the shapes are the same, and the process that generates the illusion, whilst still experiencing the illusion. EDIT: The same goes for your intuition that “consciousness has to be more than an algorithm”, or “more than matter”, and so on.
Luke M and I are illusionists, but I don’t think Eliezer or Nate are illusionists.
Huh. I’m a bit surprised. I guess I’d assumed otherwise, since a lot of the stuff I’ve read by Eliezer seems heavily influenced by Dennett, and he’s also a physicalist. His approach also seems to be “explain our claims about consciousness”. Plus there’s all the stuff about self-reflection, how an algorithm feels from the inside, etc. I guess I was just bucketing that stuff together with (weak) illusionism. After writing that out, I can see how those points don’t imply illusionism. Does Eliezer think we can save the phenomena of consciousness, and hence that calling it an illusion is a mistake? Or is there something else going on there?
I think Dennett’s argumentation about the hard problem of consciousness has usually been terrible, and I don’t see him as an important forerunner of illusionism, though he’s an example of someone who soldiered on for anti-realism about phenomenal consciousness for long stretches of time where the arguments were lacking.
I think I remember Eliezer saying somewhere that he also wasn’t impressed with Dennett’s takes on the hard problem, but I forget where?
His approach also seems to be “explain our claims about consciousness”.
There’s some similarity between heterophenomenology and the way Eliezer/Nate talk about consciousness, though I guess I think of Eliezer/Nate’s “let’s find a theory that makes sense of our claims about consciousness” as more “here’s a necessary feature of any account of consciousness, and a plausibly fruitful way to get insight into a lot of what’s going on”, not as an argument for otherwise ignoring all introspective data. Heterophenomenology IMO was always a somewhat silly and confused idea, because it proposes that we a priori reject introspective evidence without giving a clear argument for why.
(Or, worse, it’s arguing something orthogonal to whether we should care about introspective evidence, while winking and nudging that there’s something vaguely unrespectable about the introspective-evidence question.)
There are good arguments for being skeptical of introspection here, but “that doesn’t sound like it’s in the literary genre of science” should not be an argument that Bayesians find very compelling.
Yeah. I’d already read the Yudkowsky piece. I hadn’t read the Muehlhauser one though!