I’d guess the most controversial part of this post will be the claim ‘it’s not incredibly obvious that factory-farmed animals (if conscious) have lives that are worse than nonexistence’?
But I don’t see why. It’s hard to be confident of any view on this, when we understand so little about consciousness, animal cognition, or morality. Combining three different mysteries doesn’t tend to create an environment for extreme confidence — rather, you end up even more uncertain in the combination than in each individual component.
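To illustrate with made-up numbers, and pretending the three components were roughly independent:

$$0.8 \times 0.8 \times 0.8 \approx 0.51$$

i.e., even 80% confidence in each piece only licenses a bit over 50% confidence in the conjunction.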
And there are obvious (speciesist) reasons people would tend to put too much confidence in ‘factory-farmed animals have net-negative lives’.
E.g., when we imagine the Holocaust, we imagine relatively rich and diverse experiences, rather than reducing concentration camp victims to a very simple thing like ‘pain in the void’.
I would guess that humans’ nightmarish experience in concentration camps was usually better than nonexistence; and even if you suspect this is false, it seems easy to imagine how it could be true, because there’s a lot more to human experience than ‘pain, and beyond that pain, darkness’. It feels like a very open question in the human case.
But just because chickens lack some of the specific faculties humans have, doesn’t mean that (if conscious) chicken minds are ‘simple’, or simple in the particular ways people tend to assume. In particular, it’s far from obvious (and depends on contingent theories about consciousness and cognition) that you need human-style language or abstraction in order to have ‘rich’ experience that just has a lot of morally important stuff going on. A blank map doesn’t correspond to a blank territory; it corresponds to a thing we know very little about.
(For similar reasons, I think EAs in general worry far too little about whether chickens and other animals are utility monsters — this seems like a very live hypothesis to me, whether factory-farmed chickens have net-positive lives or net-negative ones.)
Pretty much all the writing I’ve read by Holocaust survivors says that this was not true, that the experience was unambiguously worse than being dead, and that the only thing that kept them going was the hope of being freed. (E.g. according to Viktor Frankl in “Man’s Search for Meaning”, all the prisoners in his camp agreed that, not only was it worse than being dead, it was so bad that any good experiences after being freed could not make up for how bad it was. Why they didn’t kill themselves is an interesting question that he explores a bit in the book.) Are there any Holocaust survivors who claim otherwise?
> I would guess that humans’ nightmarish experience in concentration camps was usually better than nonexistence; and even if you suspect this is false, it seems easy to imagine how it could be true, because there’s a lot more to human experience than ‘pain, and beyond that pain, darkness’.
I can’t really imagine this – at least for people in extermination camps, who weren’t killed. I’d assume that, all else equal, the vast majority of prisoners would choose to skip that part of their life. But maybe I’m missing something or have unusual intuitions.
Entirely agree. There are certainly chunks of my life (as a privileged first-worlder) I’d prefer not to have experienced, and these generally seem less bad than “an average period of the same duration as a Holocaust prisoner.” Given that animals are sentient, I’d put it at ~98% that their lives are net negative.
Preferring not to experience something is not the same thing as it being net negative. You are comparing it to a baseline of your normal life (because not experiencing it is simply continuing to experience your usual utility level).
I think what was meant is that they’d rather experience nothing at all for the same duration, so they’re comparing the concentration camp to non-experience/non-existence, not their average experience.
In other words, the question is: would you prefer to experience X, or spend the same amount of time in a coma?
I don’t think that follows either, though, because in practice temporarily not experiencing anything basically just means skipping to the next time you are experiencing something. So you may well intuit that you’d rather skip ahead any time the quality of your experience dips a lot.
For example, if you have a fine but mostly quite boring job, but your life outside of work is exceptionally blissful, you may well choose to ‘skip’ the work parts, to not experience them and just regain consciousness when you clock off to go live your life of luxury uninterrupted. That certainly doesn’t mean your time at work has negative value; it’s just nowhere near as good as the rest, so you’d rather stick to the bliss.
So I would say that no, actually this intuition merely proves that the experiences you’d prefer to skip are below average, rather than below zero.
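To make the below-average-versus-below-zero distinction concrete, here’s a minimal sketch with made-up utility numbers (nothing here is meant as a real welfare estimate):

```python
# Toy model of the 'skip' intuition, with made-up utility numbers.
# Preferring to skip an experience only shows it is below your usual
# baseline, not that it is below zero (net negative).

work_hour = 1.0       # mildly positive experience
leisure_hour = 10.0   # very positive experience
baseline = leisure_hour  # what you'd otherwise be experiencing

def would_skip(value, baseline):
    """Skip anything worse than what you'd otherwise be experiencing."""
    return value < baseline

def is_net_negative(value):
    """An experience is net negative only if it's below zero."""
    return value < 0

print(would_skip(work_hour, baseline))   # True: you'd rather skip work
print(is_net_negative(work_hour))        # False: work is still net positive
```

On this picture, ‘I’d choose to skip it’ and ‘it would have been better for it never to happen’ come apart whenever the experience is positive but worse than the alternative.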
I think what you’re saying is coherent and could in principle explain some comparisons people make, although I think people can imagine what an experience with very little affective value, negative or positive, feels like, and then compare other experiences to that. For example, the vast majority of my experiences seem near neutral to me. We can also tell if something feels good or bad in absolute terms (or at least we make such judgements).
I also think your argument can prove too much: people would choose to skip all but their peak experiences in their lives, which collectively might make up a few days of life. So, I don’t think people are actually thinking about these tradeoffs the way you suggest (although I don’t think it’s implausible, either, just likely not most of the time, imo).
We also know that positive and negative affect correspond to different neural patterns using different regions of the brain, and (I think) we can tell through imaging when negative affect is absent. And more intense affect in either direction takes more of our attention. So, animals (including humans) are not physically shift-invariant with respect to affect, either.
Someone could still coherently think none of this matters morally, and that only the average welfare in a life matters, but I think that doesn’t capture judgements we make that I do care about.
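As a toy illustration of the shift-invariance point (again with made-up numbers): a purely relative ‘is it below my average?’ comparison gives the same verdicts if you add a constant to every experience, while ‘is it below zero?’ does not, so the two judgements really are picking out different facts.

```python
# Toy illustration of (non-)shift-invariance, with made-up welfare numbers.
experiences = [-2.0, 1.0, 10.0]
shift = 5.0
shifted = [x + shift for x in experiences]

avg = sum(experiences) / len(experiences)
shifted_avg = sum(shifted) / len(shifted)

# Relative judgement: 'below my average' is unchanged by a uniform shift.
below_average = [x < avg for x in experiences]
print(below_average == [x < shifted_avg for x in shifted])  # True

# Absolute judgement: 'below zero' changes under a uniform shift,
# so it depends on there being a real zero point (e.g. absence of negative affect).
below_zero = [x < 0 for x in experiences]
print(below_zero == [x < 0 for x in shifted])  # False
```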
MichaelStJules is right about what I meant. While it’s true that preferring not to experience something doesn’t necessarily imply that the thing is net-negative, it seems to me very strong evidence in that direction.
Hi, instead of clogging up the thread I just thought I’d alert you that I responded to MichaelStJules, which should function equally as a response to your comment.
> I would guess that humans’ nightmarish experience in concentration camps was usually better than nonexistence; and even if you suspect this is false, it seems easy to imagine how it could be true, because there’s a lot more to human experience than ‘pain, and beyond that pain, darkness’. It feels like a very open question in the human case.
When you say that it could be true, do you mean that it could be true that the person themselves would judge their experience as better than nonexistence?
(Your paragraph reads to me as implying that there could be some more objective answer to this separate from a person’s own judgment of it, but it’s hard for me to imagine what that would even mean.)
When I look at factory-farmed animals, I feel awful for them. So coming into this, I have some expectation that my eventual understanding of consciousness, animal cognition, and morality (C/A/M) will add up to normalcy (i.e. not net positive for many animals). But maybe my gut reaction isn’t that trustworthy—that’s often the case in ethical dilemmas. I do think that that gut reaction is important information, even though I don’t have a detailed model of C/A/M.
(I think the main way I end up changing my mind here is being persuaded that my gut reaction is balking at their bad quality of life, but not actually considering the net-positive/negative question)
> When I look at factory-farmed animals, I feel awful for them. So coming into this, I have some expectation that my eventual understanding of consciousness, animal cognition, and morality (C/A/M) will add up to normalcy (i.e. not net positive for many animals).
But:
‘It all adds up to normality’ doesn’t mean ‘you should assume your initial intuitions and snap judgments are correct even in cases where there’s no evolutionary or physical reason for the intuition/judgment to be correct’. It means ‘reductive explanations generally have to recapture the phenomenon somehow’. Here, the phenomenon is a feeling in your brain, and ‘that feeling is just anthropomorphism’ recaptures the phenomenon perfectly, regardless of whether animals are conscious, what their inner life is like (if they’re conscious), etc.
I agree with the claim ‘my gut reaction is that factory-farmed pigs suffer a lot’. I disagree with the claim ‘my gut reaction is that factory-farmed pigs would be better off not existing’. I think that’s a super different claim, and builds in a lot more theory and deliberative reasoning (though it may feel obvious once it’s been cached long enough).
> I do think that that gut reaction is important information
I just disagree. I think it’s not important at all, except insofar as it helps us notice the hypothesis that life might be terrible, net-negative, etc. for chickens in factory farms.
E.g., a lot of people seem to think that chickens are obviously conscious, but that ants aren’t obviously conscious (or even that they’re obviously not conscious). This seems like an obviously silly position to me, unless the person has a very detailed, well-supported, predictive model of consciousness that makes that prediction. In this case, I think that going through the imaginative exercise of anthropomorphizing ants could be quite epistemically useful, to make it more salient that this really is a live possibility.
But no, I don’t think the imaginative exercise actually gives us Bayesian evidence about what’s going on inside ants’ brains — it’s purely ‘helping correct for a bias that made us bizarrely neglect a hypothesis a superintelligence would never neglect’; the way the exercise plays out in one’s head doesn’t covary with ant consciousness across possible worlds. And exactly the same is true for chickens.
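Spelling this out in Bayesian terms: write $H$ for ‘ants are conscious’ and $E$ for ‘the imaginative exercise played out this way in my head’. In odds form,

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)},$$

so if the exercise would have gone the same way whether or not ants are conscious, the likelihood ratio is 1 and the posterior odds equal the prior odds: no update.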
I’m confused why you wrote “It doesn’t mean ‘you should assume your initial intuitions and snap judgments are correct’” when in the very next sentence I said “But maybe my gut reaction isn’t that trustworthy—that’s often the case in ethical dilemmas.”?
> I disagree with the claim ‘my gut reaction is that factory-farmed pigs would be better off not existing’
OK, but do you disagree with the claim ‘Turntrout’s gut reaction is that factory-farmed pigs would be better off not existing’? Because that’s true for me, at least on my first consideration of the issue.
[ETA: Removed superfluous reaction]
Attempted restatement of my point: My gut reaction is evidence about what my implicit C/A/M theories predict, which I should take seriously to the extent that I have been actually ingraining all the thought experiments I’ve considered. And just because the reaction isn’t subvocalized via a verbalized explicit theory, doesn’t mean it’s not important evidence.
Similarly: When considering an action, I may snap-judge it to be squidgy and bad, even though I didn’t yet run a full-blown game-theoretic analysis in my head.
(Let me know if I also seem to be sliding off of your point!)
> I agree with the claim ‘my gut reaction is that factory-farmed pigs suffer a lot’. I disagree with the claim ‘my gut reaction is that factory-farmed pigs would be better off not existing’. I think that’s a super different claim, and builds in a lot more theory and deliberative reasoning (though it may feel obvious once it’s been cached long enough).
I avoid factory-farmed pork because the pigs’ existence seems net negative to me, but don’t do this for chickens. This is largely because I believe pigs have qualia similar enough to mine that I don’t need to worry about the animal cognition part of C/A/M (I do want to note that you seem to be arguing from a perspective wherein pro-existence is the null, and so you need to reason yourself out of it to be anti-natalist for the animals). I find chickens difficult to model using the machinery I use for humans, but that machinery works okay on pigs (although this is largely through seeing videos of them instead of in-person interaction, so it’s absolutely possible I’m mistaken).
I’m not sure how to handle the “consciousness” part, since they cannot advocate for themselves or express preferences for or against existence in ways that are legible to me.