As someone who agrees with (almost) everything you wrote above, I fear that you haven’t seriously addressed what I take to be any of the best arguments against vegetarianism, which are:
Present Triviality. Becoming a vegetarian is at least a minor inconvenience — it restricts your social activities, forces you to devote extra resources to keeping yourself healthy, etc. If you’re an Effective Altruist, then your time, money, and mental energy would be much better spent on directly impacting society than on changing your personal behavior. Even minor inconveniences and attention drains will be a net negative. So you should tell everyone else (outside of EA) to be a vegetarian, but not be one yourself.
Future Triviality. Meanwhile, almost all potential suffering and well-being lies in the distant future; that is, even if we have only a small chance of expanding to the stars, the aggregate value for that vast sum of life dwarfs that of the present. So we should invest everything we have into making it as likely as possible that humans and non-humans will thrive in the distant future, e.g., by making Friendly AI that values non-human suffering. Even minor distractions from that goal are a big net loss.
Experiential Suffering Needn’t Correlate With Damage-Avoiding or Damage-Signaling Behavior. We have reason to think the two correlate in humans (or at least developed, cognitively normal humans) because we introspectively seem to suffer across a variety of neural and psychological states in our own lives. Since I remain a moral patient while changing dramatically over a lifetime, other humans, who differ from me little more than I differ from myself over time, must also be moral patients. But we lack any such evidence in the case of non-humans, especially non-humans with very different brains. For the same reason we can’t be confident that four-month-old fetuses feel pain, we can’t be confident that cows or chickens feel pain. Why is the inner experience of suffering causally indispensable for neurally mediated damage-avoiding behavior? If it isn’t causally indispensable, then why think it is selected at all in non-sapients? Alternatively, what indispensable mechanism could it be an evolutionarily unsurprising byproduct of?
Something About Sapience Is What Makes Suffering Bad. (Or, alternatively: Something about sapience is what makes true suffering possible.) There are LessWrongers who subscribe to the view that suffering doesn’t matter, unless accompanied by some higher cognitive function, like abstract thought, a concept of self, long-term preferences, or narratively structured memories — functions that are much less likely to exist in non-humans than ordinary suffering. So even if we grant that non-humans suffer, why think that it’s bad in non-humans? Perhaps the reason is something that falls victim to...
Aren’t You Just Anthropomorphizing Non-Humans? People don’t avoid kicking their pets because they have sophisticated ethical or psychological theories that demand as much. They avoid kicking their pets because they anthropomorphize their pets, reflexively put themselves in their pets’ shoes even though there is little scientific evidence that goldfish and cockatoos have a valenced inner life. (Plus being kind to pets is good signaling, and usually makes the pets more fun to be around.) If we built robots that looked and acted vaguely like humans, we’d be able to make humans empathize with those things too, just as they empathize with fictional characters. But this isn’t evidence that the thing empathized with is actually conscious.
I think these arguments can be resisted, but they can’t just be dismissed out of hand.
You also don’t give what I think is the best argument in favor of vegetarianism, which is that vegetarianism does a better job of accounting for uncertainty in our understanding of normative ethics (does suffering matter?) and our understanding of non-human psychology (do non-humans suffer?).
Becoming a vegetarian is at least a minor inconvenience — it restricts your social activities, forces you to devote extra resources to keeping yourself healthy, etc.
How about becoming a mostly vegetarian? Avoid eating meat… unless it would be really inconvenient to do so.
Depending on your specific situation, perhaps you could reduce your meat consumption by 50%, which from the utilitarian viewpoint is 50% as good as becoming a full vegetarian. And the costs are trivial.
This is what I am doing recently, and it works well for me. For example, if I have a lunch menu, by default I read the vegetarian option first, and I choose otherwise only if it is something I dislike (or if it contains sugar), which is maybe 20% of cases. The only difficult thing was to do it for the first week, then it works automatically; it is actually easier than reading the full list and deciding between similar options.
How about becoming a mostly vegetarian? Avoid eating meat… unless it would be really inconvenient to do so.
I think that would pretty much do away with the ‘it’s a minor inconvenience’ objections. However, I suspect it would also diminish most of the social and psychological benefits of vegetarianism—as willpower training, proof to yourself of your own virtue, proof to others of your virtue, etc. Still, this might be a good option for EAists to consider.
It’s worth keeping in mind that different people following this rule will end up committing to vegetarianism to very different extents, because both the level of inconvenience incurred, and the level of inconvenience that seems justifiable, will vary from person to person.
I suspect it would also diminish most of the social and psychological benefits of vegetarianism—as willpower training, proof to yourself of your own virtue, proof to others of your virtue, etc. Still, this might be a good option for EAists to consider.
I can train my willpower on many other situations, so that’s not an issue. So it’s about the virtue, or more precisely, signalling. Well, depending on one’s mindset, one can find a “feeling of virtue” even in this. Whether the partial vegetarianism is easier to spread than full vegetarianism, I don’t know—and that is probably the most important part. But some people spreading full vegetarianism, and other people spreading partial vegetarianism where the former fail, feels like a good solution.
1) This is indeed an important consideration, although I think for most people the inconveniences would only present themselves during the transition phase. Once you get used to it sufficiently and if you live somewhere with lots of tasty veg*an food options, it might not be a problem anymore. Also, in the social context, being a vegetarian can be a good conversation starter which one can use to steer the conversation towards whatever ethical issues one considers most important. (“I’m not just concerned about personal purity, I also want to actively prevent suffering. For instance...”)
I suspect paying others to go veg*an for you might indeed be more effective, but especially for people who serve as social role models, personal choices may be very important as well, up to the point of being dominant.
2) Yeah, but how is the AI going to care about non-human suffering if few humans (and, it seems to me, few people working on FAI) take it seriously?
3)-5) These are reasons for some probabilistic discounting, and then the question becomes whether it’s significant enough. They don’t strike me as too strong, but this is worthy of discussion. Personally, I never found 4) convincing at all, but I’m curious as to whether people have arguments for this type of position that I’m not yet aware of.
1) I agree that being a good role model is an important consideration, especially if you’re a good spokesperson or are just generally very social. To many liberals and EA folks, vegetarianism signals ethical consistency, felt compassion, and a commitment to following through on your ideals.
I’m less convinced that vegetarianism only has opportunity costs during transition. I’m sure it becomes easier, but it might still be a significant drain, depending on your prior eating and social habits. Of course, this doesn’t matter as much if you aren’t involved in EA, or are involved in relatively low-priority EA.
(I’d add that vegetarianism might also make you a better Effective Altruist in general, via virtue-ethics-style psychological mechanisms. I think this is one of the very best arguments for vegetarianism, though it may depend on the psychology and ethical code of each individual EAist.)
2) Coherent extrapolated volition. We aren’t virtuous enough to make healthy, scalable, sustainable economic decisions, but we wish we were.
3)-5) I agree that 4) doesn’t persuade me much, but it’s very interesting, and I’d like to hear it defended in more detail with a specific psychological model of what makes humans moral patients. 3) I think is a much more serious and convincing argument; indeed, it convinces me that at least some animals with complex nervous systems and damage-avoiding behavior do not suffer. Though my confidence is low enough that I’d probably still consider it immoral to, say, needlessly torture large numbers of insects.
2) Yes, I really hope CEV is going to come out in a way that also attributes moral relevance to nonhumans. But the fact that there might not be a unique way to coherently extrapolate values and that there might be arbitrariness in choosing the starting points makes me worried. Also, it is not guaranteed that a singleton will happen through an AI implementing CEV, so it would be nice to have a humanity with decent values as a back-up.
If you’re worried that CEV won’t work, do you have an alternative hope or expectation for FAI that would depend much more on humans’ actual dietary practices?
Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.
If we’re more worried that non-humans might be capable of unique forms of suffering than we are worried that non-humans might be capable of unique forms of joy and beauty, then preventing their existence makes the most sense (once humans have no need for them). That includes destroying purely wild species, and includes ones that only harm each other and are not impacted by humanity.
It doesn’t need to depend on people’s dietary habits directly. A lot of people think animals count at least somewhat, but they might be too prone to rationalizing objections and too lazy to draw any significant practical conclusions from that. However, if those people were presented with a political initiative that replaces animal products with plant-based options that are just as good/healthy/whatever, then a lot of them would hopefully vote for it. In that sense, raising awareness for the issue, even if behavioral change is slow, may already be an important improvement to the meme-pool. Whatever utility functions society as a whole or those in power eventually decide to implement, it seems that this depends to at least some extent on the values of currently existing people (and especially people with high potential for becoming influential at some time in the future). This is why I consider anti-speciesist value spreading a contender for top priority.
I actually don’t object to animals being killed, I’m just concerned about their suffering. But I suspect lots of people would object, so if it isn’t too expensive, why not just take care of those animals that already exist and let them live some happy years before they die eventually? I’m especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms. And I think species-membership is ethically irrelevant, so there is no need for conservation in my view.
I don’t want to fill the universe with animals, what would be the use of that? I’m mainly worried that people might decide to send out von Neumann probes to populate the whole universe with wildlife, or do ancestor simulations or other things that don’t take into account animal suffering. Also, there might be a link between speciesism and “substratism”, and of course I also care about all forms of conscious uploads and I wouldn’t want them to suffer either.
The thought that highly temporally variable memes might define the values for our AGI worries me a whole lot. But I can’t write the possibility off, so I agree this provides at least some reason to try to change the memetic landscape.
I actually don’t object to animals being killed, I’m just concerned about their suffering.
Ditto. It might be that killing in general is OK if it doesn’t cause anyone suffering. Or, if we’re preference utilitarians, it might be that killing non-humans is OK because their preferences are generally very short-term.
One interesting (and not crazy) alternative to lab-grown meat: If we figure out (with high confidence) the neural basis of suffering, we may be able to just switch it off in factory-farmed animals.
I’m especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms.
I’m about 95% confident that’s almost never true. If factory-farmed animals didn’t seem so perpetually scared (since fear of predation is presumably the main source of novel suffering in wild animals), or if their environment more closely resembled their ancestral environment, I’d find this line of argument more persuasive.
Yeah, I see no objections to eating meat from zombie-animals (or animals that are happy but cannot suffer). Though I can imagine that people would freak out about it.
Most animals in the wild use r-selection as a reproductive strategy, so they have huge amounts of offspring of which only one child per parent survives and reproduces successfully (if the population remains constant). This implies that the vast majority of wild animals die shortly after birth in ways that are presumably very painful. There is not enough time for having fun for these animals, even if life in the wild is otherwise nice (and that’s somewhat doubtful as well). We have to discount the suffering somewhat due to the possibility that newborn animals might not be conscious at the start, but it still seems highly likely that suffering dominates for wild animals, given these considerations about the prevalence of r-selection.
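To make the arithmetic behind this concrete, here is a rough expected-value sketch. Every number in it is a made-up placeholder (clutch size, suffering-days per early death, adult welfare), not an empirical estimate; the point is only to show why the prevalence of r-selection pushes the sum towards suffering.

```python
# A rough expected-value sketch of the r-selection argument above.
# All numbers are hypothetical placeholders, not empirical estimates.

offspring_per_parent = 100       # typical r-selected clutch size (illustrative)
survivors_per_parent = 1         # population roughly constant
death_days_suffering = 2.0       # suffering-days per early death (illustrative)
adult_lifespan_days = 365.0
adult_net_welfare_per_day = 0.1  # net positive welfare per adult day (illustrative)

early_deaths = offspring_per_parent - survivors_per_parent
total_suffering = early_deaths * death_days_suffering
total_welfare = survivors_per_parent * adult_lifespan_days * adult_net_welfare_per_day

print(f"suffering-days: {total_suffering}, welfare-days: {total_welfare}")
# With these placeholder numbers, early-death suffering (198) far exceeds the
# surviving adult's net welfare (36.5); the conclusion is sensitive to the
# chosen parameters, which is exactly what is under dispute.
```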
Most animals in the wild use r-selection as a reproductive strategy, so they have huge amounts of offspring of which only one child per parent survives and reproduces successfully
Yes, but we agree death itself isn’t a bad thing, and I don’t think most death is very painful and prolonged. Prolonged death burns calories, so predators tend to be reasonably efficient. (Parasites less so, though not all parasitism is painful.) Force-feeding your prey isn’t unheard of, but it’s unusual.
There is not enough time for having fun for these animals
If we’re worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed? Also, I agree it’s bad for an organism to suffer for 100% of a very short life, but it’s not necessarily any better for it to suffer for 80% of a life that’s twice as long.
it still seems highly likely that suffering dominates for wild animals
Oh, I have no doubt that suffering dominates for just about every sentient species on Earth. That’s part of why I suspect an FAI would drive nearly all species to extinction. What I doubt is that this suffering exceeds the suffering in typical factory farms. These organisms aren’t evolved to navigate environments like factory farms, so it’s less likely that they’ll have innate coping mechanisms for the horrors of pen life than for the horrors of jungle life. If factory farm animals are sentient, then their existence is probably hell, i.e., a superstimulus exceeding the pain and fear and frustration and sadness (if these human terms can map on to nonhuman psychology) they could ever realistically encounter in the wild.
If we’re worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed?
Yes, it would be hard to give a good reason for treating these differently, unless you’re a preference utilitarian and think there is no point in creating new preference-bundles just in order to satisfy them later. I was arguing from within a classical utilitarian perspective, even though I don’t share this view (I’m leaning towards negative utilitarianism), in order to make the point that suffering dominates in nature. I see, though, that you might be right about factory farms being much worse on average. Some of the footage certainly is, even though the worst instance of suffering I’ve ever watched was an elephant being eaten by lions.
Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.
If it wanted to maximize positive states of consciousness, it would probably kill all sentient beings and attempt to convert all the matter in the universe into beings that efficiently experience large amounts of happiness. I find it plausible that this would be a good thing. See here for more discussion.
I don’t find that unlikely. (I think I’m a little less confident than Eliezer that something CEV-like would produce values actual humans would recognize, from their own limited perspectives, as preferable. Maybe my extrapolations are extrapolateder, and he places harder limits on how much we’re allowed to modify humans to make them more knowledgeable and rational for the purpose of determining what’s good.)
But I’m less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans. Humans care a lot more about themselves than about other species, and are less confident about non-human subjectivity.
Of course, I suppose the reverse is a possibility. Maybe some existing non-human terrestrial species has far greater capacities for well-being, or is harder to inflict suffering on, than humans are, and an FAI would kill humans and instead work on optimizing that other species. I find that scenario much less plausible than yours, though.
But I’m less confident that a correctly-constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman, than that CEV would kill all or most non-humans.
If a CEV did this then I believe it would be acting unethically—at the very least, I find it highly implausible that, among the hundreds of thousands(?) of sentient species, homo sapiens is capable of producing the most happiness per unit of resources. This is a big reason why I feel uneasy about the idea of creating a CEV from human values. If we do create a CEV, it should take all existing interests into account, not just the interests of humans.
It also seems highly implausible that any extant species is optimized for producing pleasure. After all, evolution produces organisms that are good at carrying on genes, not feeling happy. A superintelligent AI could probably create much more effective happiness-experiencers than any currently-living beings. This seems to be similar to what you’re getting at in your last paragraph.
CEV asks what humans would value if their knowledge and rationality were vastly greater. I don’t find it implausible that if we knew more about the neural underpinnings of our own suffering and pleasure, knew more about the neurology of non-humans, and were more rational and internally consistent in relating this knowledge to our preferences, then our preferences would assign at least some moral weight to the well-being of non-sapients, independent of whether that well-being impacts any sapient.
As a simpler base case: I think the CEV of 19th-century slave-owners in the American South would have valued black and white people effectively equally. Do we at least agree about that much?
I don’t know much about CEV (I started to read Eliezer’s paper but I didn’t get very far), but I’m not sure it’s possible to extrapolate values like that. What if 19th-century slave owners hold white-people-are-better as a terminal value?
On the other hand, it does seem plausible that a slave owner would oppose slavery if he weren’t himself a slave owner, so his CEV may indeed support racial equality. I simply don’t know enough about CEV or how to implement it to make a judgment one way or the other.
Terminal values can change with education. Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality. For instance, slave-owners don’t, on any deep level, value consistency between their moral intuitions, or they assign zero weight to moral intuitions involving empathy.
If new experiences and rationality training couldn’t ever persuade a slave-owner to become an egalitarian, then I’m extremely confused by the fact that society has successfully eradicated the memes that restructured those slave-owners’ brains so quickly. Maybe I’m just more sanguine than most people about the possibility that new information can actually change people’s minds (including their values). Science doesn’t progress purely via the eradication of previous generations.
I’m not sure I’d agree with that framing. If an ethical feature changes with education, that’s good evidence that it’s not a terminal value, to whatever extent that it makes sense to talk about terminal values in humans. Which may very well be “not very much”; our value structure is a lot messier than that of the theoretical entities for which the terminal/instrumental dichotomy works well, and if we had a good way of cleaning it up we wouldn’t need proposals like CEV.
People can change between egalitarian and hierarchical ethics without neurological insults or biochemical tinkering, so human “terminal” values clearly don’t necessitate one or the other. More importantly, though, CEV is not magic; it can resolve contradictions between the ethics you feed into it, and it might be able to find refinements of those ethics that our biases blind us to or that we’re just not smart enough to figure out, but it’s only as good as its inputs. In particular, it’s not guaranteed to find universal human values when evaluated over a subset of humanity.
If you took a collection of 19th-century slave owners and extrapolated their ethical preferences according to CEV-like rules, I wouldn’t expect that to spit out an ethic that allowed slavery—the historical arguments I’ve read for the practice didn’t seem very good—but I wouldn’t be hugely surprised if it did, either. Either way it wouldn’t imply that the resulting ethic applies to all humans or that it derives from immutable laws of rationality; it’d just tell us whether it’s possible to reconcile slavery with middle-and-upper-class 19th-century ethics without downstream contradictions.
“Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality.”
Could you elaborate on this please? If you’re saying what I think you’re saying then I would strongly like to argue against your point.
I think the word “pain” is misleading. What I care about precisely is suffering, defined as a conscious state a being wants to get out of. If you don’t dislike it and don’t have an urge to make it stop, it’s not suffering. This is also why I think the “pain” of people with pain asymbolia is not morally bad.
Here is a thought experiment. Suppose that explorers arrive in a previously unknown area of the Amazon, where a strange tribe exists. The tribe suffers from a rare genetic anomaly, whereby all of its individuals are physically and cognitively stuck at the age of 3.
They laugh and they cry. They love and they hate. But they have no capacity for complex planning, or normative sophistication. So they live their lives as young children do—on a moment to moment basis—and they have no hope for ever developing beyond that.
If the explorers took these gentle creatures and murdered them—for science, for food, or for fun—would we say, “Oh but those children are not so intelligent, so the violence is ok.” Or would we be even more horrified by the violence, precisely because the children had no capacity to fend for themselves?
I would submit that the argument against animal exploitation is even stronger than the argument against violence in this thought experiment, because we could be quite confident that whatever awareness these children had, it was “less than” what a normal human has. We are comparing the same species after all, and presumably whatever the Amazonian children are missing, due to genetic anomaly, is not made up for in higher or richer awareness in other dimensions.
We cannot say that about other species. A dog may not be able to reason. But perhaps she delights in smells in a way that a less sensitive nose could never understand. Perhaps she enjoys food with a sophistication that a lesser palate cannot begin to grasp. Perhaps she feels loneliness with an intensity that a human being could never appreciate.
Richard Dawkins makes the very important point that cleverness, which we certainly have, gives us no reason to think that animal consciousness is any less rich or intense than human consciousness (http://directactioneverywhere.com/theliberationist/2013/7/18/g2givxwjippfa92qt9pgorvvheired). Indeed, since cleverness is, in a sense, an alternative mechanism for evolutionary survival to feelings (a perfect computational machine would need no feelings, as feelings are just a heuristic), there is a plausible case that clever animals should be given LESS consideration.
But all of this is really irrelevant. Because the basis of political equality, as Peter Singer has argued, has nothing to do with the facts of our experience. Someone who is born without the ability to feel pain does not somehow lose her rights because of that difference. Because equality is not a factual description, it is a normative demand—namely, that every being who crosses the threshold of sentience, every being that could be said to HAVE a will—ought be given the same respect and freedom that we ask for ourselves, as “willing” creatures.
This is a variant of the argument from marginal cases: if there is some quality that makes you count morally, and we can find some example humans (ex: 3 year olds) that have less of that quality than some animals, what do we do?
I’m very sure that an 8 year old human counts morally and that a chicken does not, and while I’m not very clear on where along that spectrum the quality I care about starts getting up to levels where it matters, I think it’s probably something no or almost no animals have and some humans don’t have. Making this distinction among humans, however, would be incredibly socially destructive, especially given how unsure I am about where the line should go, and so I think we end up with a much better society if we treat all humans as morally equal. (This means I end up saying things like “value all humans equally; don’t value animals” when that’s not my real distinction, just the closest Schelling point.)
It seems like your answer to the argument from marginal cases is that maybe the (human) marginal cases don’t matter and “Making this distinction among humans, however, would be incredibly socially destructive.”
That may work for you, but I think it doesn’t work for the vast majority of people who don’t count animals as morally relevant. You are “very sure that an 8 year old human counts morally” (intrinsically, by which I mean “not just because doing otherwise would be socially destructive”). I’m not sure if you think 3 year old humans count (intrinsically), but I’m sure that almost everyone does. I know that they count these humans intrinsically (and not just to avoid social destruction), because in fact most people do make these distinctions among humans: for example, median opinion in the US seems to be that humans start counting sometime in the second trimester.
Given this, it’s entirely reasonable to try to figure out what quality makes things count morally, and if you (a) care intrinsically about 3 year old humans (or 1 year old or minus 2 months old or whatever), and (b) find that chickens (or whatever) have more of this quality than 3 year old humans, you should care about chickens.
I’m very sure that an 8 year old human counts morally and that a chicken does not,
Consider an experience which, if had by an eight-year-old human, would be morally very bad, such as an experience of intense suffering. Now suppose that a chicken could have an experience that was phenomenally indistinguishable from that of the child. Would you be “very sure” that it would be very bad for this experience to be had by the human child, but not at all bad to be had by the chicken?
I smell a variation of Pascal’s Mugging here. In Pascal’s Mugging, you are told that you should consider a possibility with a small probability because the large consequence makes up for the fact that the probability is small. Here you are suggesting that someone may not be “very sure” (i.e. that he may have a small degree of uncertainty), but that even a small degree of uncertainty justifies becoming a vegetarian because something about the consequence of being wrong (presumably, multiplying by the high badness, though you don’t explicitly say so) makes up for the fact that the degree of uncertainty is small.
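To spell out the structure I’m objecting to (with purely illustrative numbers — the probability, badness, and cost below are placeholders, not anyone’s stated estimates):

```python
# The decision structure being described: a small probability that chicken
# suffering is morally relevant, multiplied by a large badness, weighed
# against a modest cost of abstaining. Numbers are purely illustrative.

p_chicken_suffering_matters = 0.05   # hypothetical credence
badness_if_it_matters = 1000.0       # hypothetical disvalue of eating chicken
cost_of_abstaining = 10.0            # hypothetical inconvenience cost

expected_badness = p_chicken_suffering_matters * badness_if_it_matters
print(expected_badness > cost_of_abstaining)  # True with these numbers
# Whether this is a legitimate expected-value argument or a Pascal's Mugging
# depends on how extreme the probability and the stakes actually are.
```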
Now suppose that a chicken could have an experience that was phenomenally indistinguishable from that of the child.
“Phenomenally indistinguishable”… to whom?
In other words, what is the mind that’s having both of these experiences and then attempting to distinguish between them?
Thomas Nagel famously pointed out that we can’t know “what it’s like” to be — in his example — a bat; even if we found our mind suddenly transplanted into the body of a bat, all we’d know is what it’s like for us to be a bat, not what it’s like for the bat to be a bat. If our mind were transformed into the mind of a bat (and placed in a bat’s body), we could not analyze our experiences in order to compare them with anything, nor, in that form, would we have comprehension of what it had been like to be a human.
Phenomenal properties are always, inherently, relative to a point of view — the point of view of the mind experiencing them. So it is entirely unclear to me what it means for two experiences, instantiated in organisms of very different species, to be “phenomenally indistinguishable”.
In other words, what is the mind that’s having both of these experiences and then attempting to distinguish between them?
When a subject is having a phenomenal experience, certain phenomenal properties are instantiated. In saying that two experiences are phenomenally indistinguishable, I simply meant that they instantiate the same phenomenal properties. As should be obvious, there need not be any mind having both experiences in order for them to be indistinguishable from one another. For example, two people looking at the same patch of red may have phenomenally indistinguishable visual experiences—experiences that instantiate the same property of phenomenal redness. I’m simply asking Jeff to imagine a chicken having a painful experience that instantiates the property of unpleasantness to the same degree that a human child does, when we believe that the child’s painful experience is a morally bad thing.
Thomas Nagel famously pointed out that we can’t know “what it’s like” to be — in his example — a bat; even if we found our mind suddenly transplanted into the body of a bat, all we’d know is what it’s like for us to be a bat, not what it’s like for the bat to be a bat.
Sorry, but this is not an accurate characterization of Nagel’s argument.
How does this not apply to me imagining that I’m a toaster making toast? I can imagine a toaster having an experience all I want. That doesn’t imply that an actual toaster can have that experience or anything which can be meaningfully compared to a human experience at all.
Are you denying that chickens can have any of the experiences which, if had by a human, we would regard as morally bad? That seems implausible to me. Most people think that it would be very bad, for instance, if a child suffered intensely, and most people agree that chickens can suffer intensely.
That’s a view of phenomenal experience (namely, that phenomenal properties are intersubjectively comparable, and that “phenomenal properties” can be described from a third-person perspective) that is far, far from uncontroversial among professional philosophers, and I, personally, take it to be almost entirely unsupported (and probably unsupportable).
For example, two people looking at the same patch of red may have phenomenally indistinguishable visual experiences—experiences that instantiate the same property of phenomenal redness.
Intersubjective incomparability of color experiences is one of the classic examples of (alleged) intersubjective incomparability in the literature (cf. the huge piles of writing on the inverted spectrum problem, to which even I have contributed).
… imagine a chicken having a painful experience that instantiates the property of unpleasantness to the same degree that a human child does...
I really don’t think this is a coherent thing to imagine. Once again — unpleasantness to whom? “Unpleasant” is not a one-place predicate.
Sorry, but this is not an accurate characterization of Nagel’s argument.
If your objection is that Nagel only says that the structure of our minds and sensory organs does not allow us to imagine the what-it’s-like-ness of being a bat, and does not mention transplantation and the like, then I grant it; but my extension of it is, imo, consistent with his thesis. The point, in any case, is that it doesn’t make sense to speak of one mind having some experience which is generated by another mind (where “mind” is used broadly, in Nagel-esque examples, to include sensory modalities, i.e. sense organs and the brain hardware necessary to process their input; but in our example need not necessarily include input from the external world).
For example, two people looking at the same patch of red may have phenomenally indistinguishable visual experiences—experiences that instantiate the same property of phenomenal redness.
I don’t think there’s a God-given mapping from the set of Alice’s possible subjective experiences to the set of Bob’s possible subjective experiences. (This is why I think the inverted spectrum thing is meaningless.) We can define a mapping that maps each of Alice’s qualia to the one Bob experiences in response to the same kind of sensory input, but 1) there’s no guarantee it’s one-to-one (colours as seen by young, non-colourblind people would be a best case scenario, but think about flavours), and 2) it would make your claim tautological and devoid of empirical content.
Nagel had no problems with taking objective attributes of experience—e.g. indicia of suffering—and comparing them for the purposes of political and moral debate. The equivalence or even comparability of subjective experience (whether between different humans or different species) is not necessary for an equivalence of moral depravity.
Justifying violence against an oppressed group, on the basis of some unobserved and ambiguous quality, is the definition of bigotry.
Have you interacted with a disabled human before? What is it about them that you think merits less consideration? My best friend growing up was differently abled, at the cognitive capacity of a young child. But he is also probably the most praiseworthy individual I have ever met. Generous to a fault, forgiving even of those who had mistreated him (and there were many of those), and completely lacking in artifice. A world filled with animals such as he would be a good world indeed. So why should he receive any fewer rights than you or I? What is this amorphous quality that he is missing?
Factually, it is not true that human inequality is “socially destructive.” Human civilization has thrived for 10,000 years despite horrific caste systems. And even just a generation prior, disabled humans were systematically mistreated as our moral inferiors. Even lions of the left like Arthur Miller had no qualms about locking up their disabled children and throwing away the key.
Inequality is a terrible thing, if you are on the wrong side of the hierarchy. But there is nothing intrinsically destabilizing about bigotry. Far from it, prejudice against “outsiders” is our natural state.
I think you are technically wrong. A world filled with people at the cognitive capacity of a young child would include a lot of suffering. (Unless there were also someone else to solve their problems.) Hunger, diseases, predators… and no ability to defend against them.
DxE, I have to ask, and I don’t mean to be hostile: are you using emotionally-charged, question-begging language deliberately (to act as intuition pumps, perhaps)? Would you be able to rephrase your comments in more neutral, objective language?
The language I use is deliberate. It accurately conveys my point of view, including normative judgments. I do not relish the idea of antagonizing anyone. However, the content of certain viewpoints is inherently antagonizing. If I were to factually state that someone were a rapist, for example, I could not phrase that in a neutral, objective way.
For what it’s worth, I actually love jkaufman. He’s one of the smartest and most solid people I know. But his views on this subject are bigoted.
I see. However, I disagree that your comments accurately convey your point of view, or any point of view; there’s a lot of unpacking I’d have to ask you to do on e.g. the great-grandparent before I could understand exactly what you were saying; and I’m afraid I’m not sufficiently interested to try.
If I were to factually state that someone were a rapist, for example, I could not phrase that in a neutral, objective way.
Couldn’t you? I could. Observe:
Bob has, on several occasions, initiated and carried on sexual intercourse with an unwilling partner, knowing that the person in question was not willing, and understanding his actions to be opposed to the wishes of said person, as well as to the social norms of his society.
There you go. That is, if anything, too neutral; I could make it less verbose and more colloquial without much loss of neutrality; but it showcases my point, I think. If you believe you can’t phrase something in language that doesn’t sound like you’re trying to incite a crowd, you are probably not trying hard enough.
If you like (and only if you like), I could go through your response to jkaufman and point out where and how your choice of language makes it difficult to respond to your comments in any kind of logical or civilized manner. For now, I will say only:
Expressing your normative judgments is not very useful, nor very interesting to most people. What we’re looking for is for you to support those judgments with something. The mere fact that you think something is bad, really very bad, just no good… is not interesting. It’s not anything to talk about.
There’s a difference between making it seem morally neutral and not implying anything about its morality or lack thereof. What SaidAchmiz was trying to do is the latter.
You’re right it might have been good to answer these in the core essay.
Present Triviality. Becoming a vegetarian is at least a minor inconvenience...
I disagree that being a vegetarian is an inconvenience. I haven’t found my social activities restricted in any non-trivial way and being healthy has been just as easy/hard as when eating meat. It does not drain my attention from other EA activities.
~
Future Triviality. [...] we should invest everything we have into making it as likely as possible that humans and non-humans will thrive in the distant future
I agree with this in principle, but again I don’t think vegetarianism stops us from pursuing that. Certainly removing factory farming is a small win compared to successful star colonization, but I don’t think there’s much we can do now to ensure successful colonization, while there is stuff we can do now to ensure factory farming elimination.
~
Experiential Suffering Needn’t Correlate With Damage-Avoiding or Damage-Signaling Behavior.
It need not, which is what makes consciousness thorny. I don’t think there is a tidy resolution to this problem. We’ll have to take our best guess, and that involves thinking nonhuman animals suffer. We’d probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Occam’s razor approach.
~
Something About Sapience Is What Makes Suffering Bad.
This doesn’t feature in my ethical framework, at least. I don’t know how this intuitively works for other people. I also don’t think there’s much I can say about it.
~
Aren’t You Just Anthropomorphizing Non-Humans? [...] But this isn’t evidence that the thing empathized with is actually conscious.
It’s not. But there are other considerations and lines of evidence, so my worry that we’re just anthropomorphizing is present, but rather low.
This doesn’t feature in my ethical framework, at least.
Wait...what? Why not?
I don’t know how this intuitively works for other people. I also don’t think there’s much I can say about it.
My morality is applicable to agents. The extent to which an object can be modeled as an agent plays a big role (but not the only role) in determining its moral weight. As such, there is a rough hierarchy:
Practically speaking from an animal rights perspective, this means that I would consider it a moral victory if meat eaters shifted a greater portion of their meat diet downwards towards “lower” animals like fish and arthropods. The difference in weight between much more and much less intelligent animals is rather extreme—it would kill several crickets, shrimp, herring, or salmon to replace a single pig, but I would still count that as a positive because I think that a pig’s moral weight is magnitudes greater than a salmon’s. Convincing a person like me not to harm an object involves behavioral measures (with intelligence being one of several factors) which demonstrate that the object is a certain kind of agent, one within the class of agents with positive moral weight.
I’m guessing that we’re thinking of different things when we read “sapience is what makes suffering bad (or possible)”. Do you think that my version of the thought doesn’t feature in your ethical framework? If not, what does determine which objects are morally weighty?
I’m guessing that we’re thinking of different things when we read “sapience is what makes suffering bad (or possible)”. Do you think that my version of the thought doesn’t feature in your ethical framework? If not, what does determine which objects are morally weighty?
For me, suffering is what makes suffering bad. Or, rather, I care about any entity that is capable of having feelings and experiences. And, for each of these entities, I much prefer them not to suffer. I care about not having them suffer for their sakes, of course, not for the sake of reducing suffering in the abstract. I don’t view entities as utility receptacles.
But I don’t think there’s anything special about sapience, per se. Rather, I only think sapience or agentiness is relevant in so far as more sapient and more agenty entities are more capable of suffering / happiness. Which seems plausible, but isn’t certain.
~
Practically speaking from an animal rights perspective, this means that I would consider it a moral victory if meat eaters shifted a greater portion of their meat diet downwards towards “lower” animals like fish and arthropods
This seems plausible to me from a perspective of “these animals likely are less capable of suffering”, but I think you’re missing two things in your analysis:
…(1) the degree of suffering required to create the food, which varies between species, and
…(2) the amount of food provided by each animal.
When you add these two things together, you get a suffering-per-kg approach that has some counterintuitive conclusions, like the bulk of suffering being in chicken or fish, though I think this table is desperately in need of some updating with more and better research (something that’s been on my to-do list for a while).
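Roughly, the calculation behind that kind of table looks like the sketch below. The per-species parameters here are crude placeholders made up for illustration; the real table uses actual estimates of lifespan, living conditions, and edible yield.

```python
def suffering_per_kg(days_lived, suffering_per_day, kg_food_per_animal):
    """Suffering-days incurred per kg of food the animal yields."""
    return days_lived * suffering_per_day / kg_food_per_animal

# name: (days lived, suffering intensity per day, edible kg per animal)
# -- hypothetical placeholder values, not research estimates --
animals = {
    "broiler chicken": (42, 1.0, 1.5),
    "pig": (180, 0.8, 60.0),
    "farmed salmon": (540, 0.5, 2.5),
}

for name, (days, intensity, kg) in animals.items():
    print(name, round(suffering_per_kg(days, intensity, kg), 1))
# broiler chicken 28.0, pig 2.4, farmed salmon 108.0 (with these placeholders)
# Even crude numbers show why small-bodied animals like chickens and fish
# tend to dominate per kg, which is the counterintuitive result referenced.
```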
Let’s temporarily taboo words relating to inaccessible subjective experience, because the definitions of words like “suffering” haven’t been made rigorous enough to talk about this—we could define it in concrete neurological terms or specific computations, or we could define it in abstract terms of agents and preferences, and we’d end up talking past each other due to different definitions.
I want to make sure to define morality such that it’s not dependent on the particulars of the algorithm that an agent runs, but by the agent’s actions. If we were to meet weird alien beings in the future who operated in completely alien ways, but who act in ways that can be defined as preferences and can engage in trade, reciprocal altruism, etc...then our morality should extend to them.
Similarly, I think our morality shouldn’t extend to paperclippers—even if they make a “sad face” and run algorithms similar to human distress when a paperclip is destroyed, it doesn’t mean the same thing morally.
So I think morality must necessarily be based on input-output functions, not on what happens in between. (at this point someone usually brings up paralyzed people—briefly, you can quantify the extent of additions/modifications necessary to create a functioning input-output agent from something and use that to extrapolate agency in such cases.)
the amount of food provided by each animal.
Wait, didn’t I take that into account with...
The difference in weight between much more and much less intelligent animals is rather extreme—it would kill several crickets, shrimp, herring, or salmon to replace a single pig, but I would still count that as a positive because I think that a pig’s moral weight is magnitudes greater than a salmon’s.
...or are you referring to a different concept?
I really do think the relationship between moral weight and intelligence is exponential—as in, I consider a human life to be weighted like ~10 chimps, ~100 dogs...(very rough numbers, just to illustrate the exponential nature)...and I’m not sure there are enough insects in the world to morally outweigh one human life (instrumental concerns about the environment and the intrinsic value of diverse ecosystems aside, of course). I’d wager the human hedons and health benefits from eating something very simple, like a shrimp or a large but unintelligent fish, might actually outweigh the cost to the fish and be a net positive (as it is with plants). My certainty in that matter is low, of course.
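To illustrate what I mean by exponential (again, the tier assignments and the factor of 10 are just rough guesses, not considered estimates):

```python
# Sketch of the exponential weighting described above: moral weight falls by
# roughly an order of magnitude per "intelligence tier". The tiers and the
# base are illustrative assumptions, not settled values.

def moral_weight(tier, base=10):
    """Tier 0 = human, 1 = chimp, 2 = dog, ... (illustrative ordering)."""
    return base ** (-tier)

for species, tier in [("human", 0), ("chimp", 1), ("dog", 2),
                      ("salmon", 4), ("cricket", 6)]:
    print(species, moral_weight(tier))
# human 1, chimp 0.1, dog 0.01, salmon 0.0001, cricket 1e-06
# On this scheme, replacing one pig with several fish can still be a net
# reduction in weighted harm, as claimed.
```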
Let’s temporarily taboo words relating to inaccessible subjective experience, because the definitions of words like “suffering” haven’t been made rigorous enough to talk about this—we could define it in concrete neurological terms or specific computations, or we could define it in abstract terms of agents and preferences, and we’d end up talking past each other due to different definitions.
I agree that people generally and I specifically need to understand “suffering” better. But I don’t think substitutes like “runs an algorithm analogous to human distress” or “has thwarted preferences” offer anything better understood or well-defined.
I suppose what I think of as suffering probably involves most of the following: nociception, a central nervous system (with connected nociceptors), endogenous opioids, a behavioral pain response, and a behavioral pain response affected by painkillers.
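One crude way to operationalize that list, just as an illustration (the per-species flags below are hypothetical, not cited findings):

```python
# A weight-of-evidence checklist over the criteria listed above.
# The observations assigned to each species are placeholders.

SUFFERING_CRITERIA = [
    "nociceptors",
    "central nervous system connected to nociceptors",
    "endogenous opioids",
    "behavioral pain response",
    "pain response modulated by analgesics",
]

def evidence_of_suffering(observed):
    """Fraction of the listed criteria an organism is observed to meet."""
    return sum(c in observed for c in SUFFERING_CRITERIA) / len(SUFFERING_CRITERIA)

# Hypothetical observations for illustration only:
chicken = set(SUFFERING_CRITERIA)                       # meets all five
insect = {"nociceptors", "behavioral pain response"}    # meets two
print(evidence_of_suffering(chicken), evidence_of_suffering(insect))  # 1.0 0.4
```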
~
If we were to meet weird alien beings in the future who operated in completely alien ways, but who act in ways that can be defined as preferences and can engage in trade, reciprocal altruism, etc...then our morality should extend to them. Similarly, I think our morality shouldn’t extend to paperclippers—even if they make a “sad face” and run algorithms similar to human distress when a paperclip is destroyed, it doesn’t mean the same thing morally.
I think this is the clearest case where our moral theories differ. If the paperclipper suffers, I don’t see any reason not to care about that experience. Or, rather, I don’t fully understand why you lack care for the paperclipper.
Similarly, while I’m all for extending morality to weird aliens, I don’t think trade nor reciprocal altruism per se are the precise qualities that make things count morally (for me). I assume you mean these qualities as a proxy for “high intelligence”, though, rather than precise qualities?
~
Wait, didn’t I take that into account with...
Yes, you did. My bad for missing it. Sorry.
~
I’d wager the human hedons and health benefits from eating something very simple, like a shrimp or a large but unintelligent fish, might actually outweigh the cost to the fish and be a net positive (as it is with plants). My certainty in that matter is low, of course.
How does your uncertainty weigh in practically in this case? Would you, for example, refrain from eating fish while trying to learn more?
But I don’t think substitutes like “runs an algorithm analogous to human distress” or “has thwarted preferences” offer anything better understood or well-defined.
Point of disagreement: I do think that both of those are more well-defined than “suffering”.
I suppose what I think of as suffering probably involves most of the following: nociception, a central nervous system (with connected nociceptors), endogenous opioids, a behavioral pain response, and a behavioral pain response affected by painkillers.
Additionally, I think this statement means you define suffering as “runs an algorithm analogous to human distress”. All of these things are specific to Earth-evolved life forms. None of this applies to the class of agents in general.
(Also, nitpick—going by lay usage, you’ve outlined pain, not suffering. In my preferred usage, for humans at least pain is explicitly not morally relevant except insofar as it causes suffering.)
If the paperclipper suffers, I don’t see any reason not to care about that experience. Or, rather, I don’t fully understand why you lack care for the paperclipper.
Rain-check on this...have some work to finish. Will reply properly later.
Would you, for example, refrain from eating fish while trying to learn more?
I don’t think so, but I might be wrong...Is risk aversion in the face of uncertainty actually rational in this scenario? Seems to me that there are certain scenarios where risk aversion makes sense (personal finance, for example) and scenarios where it doesn’t (effective altruism, for example) and this decision seems to fall in the latter camp. AFAIK, risk / loss aversion only applies where there are diminishing returns on the value of something.
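To illustrate the diminishing-returns point (a toy example, not a claim about actual welfare numbers): with a linear value function the safe option and the gamble come out the same and risk aversion does no work, while with a concave one the safe option wins.

```python
import math

def expected(values_and_probs, utility=lambda v: v):
    """Expected utility of a lottery given a utility function."""
    return sum(p * utility(v) for v, p in values_and_probs)

safe = [(100, 1.0)]              # certain 100
risky = [(0, 0.5), (200, 0.5)]   # 50/50 gamble with the same expected value

linear = lambda v: v             # no diminishing returns
concave = lambda v: math.sqrt(v) # diminishing returns

print(expected(safe, linear), expected(risky, linear))    # 100.0 100.0 -> indifferent
print(expected(safe, concave), expected(risky, concave))  # 10.0 ~7.07 -> prefer safe
```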
I haven’t seen any behavioral evidence of fish doing problem solving, being empathetic towards each other, exhibiting cognitive capacities beyond very basic associative learning & memory, or that sort of thing.
practically
Practically, I eat things at the level of fish and lower guilt-free. I limit animals higher than fish to very occasional consumption only—in a similar vein to how I sometimes do things that are bad for the environment, or (when I start earning) plan to sometimes spend money on things that aren’t charity, with the recognition that it’s mildly immoral selfishness and I should keep it to a minimum. Basically, eating animals seems to be on par with all the other forms of everyday selfishness we all engage in...certainly something to be minimized, but not an abomination.
Where I do consume higher animals, I plan to shift that consumption towards unpopular cuts of meat (organs, bones, etc.) because that means less negative impact through reduced wastage (and it’s also cheaper, which may enable upgrades with respect to buying from ethical farms + a better nutritional profile). The bulk of the profit from slaughtering seems to come from the popular muscle meat cuts—if meat eaters were more holistic about eating the entire animal and not just parts of it, I think there would be less total slaughter.
The trade-offs here are not primarily a taste thing for me—I just get really lethargic after eating grains, so I try to limit them. My strain of Indian culture is vegetarian, so I am accustomed to eating less meat and more grain through childhood...but after I reduced my intake of grains I felt more energetic and the period of fogginess that I usually get after meals went away. I also have a family history of diabetes and metabolic disorders (which accelerate age-related declines in cognitive function, which I’m terrified of), and what nutrition research I’ve done indicates that shifting towards a more paleolithic diet (fruits, vegetables, nuts and meat) is the best way to avoid this. Cutting out both meat and grain makes eating really hard and sounds like a bad idea.
I think this is the clearest case where our moral theories differ. If the paperclipper suffers, I don’t see any reason not to care about that experience. Or, rather, I don’t fully understand why you lack care for the paperclipper.
If the paperclipper can even “suffer”… I suspect a more useful word to describe the state of the paperclipper is “unclippy”. Or maybe not...let’s not think about these labels for now. The question is, regardless of the label, what is the underlying morally relevant feature?
I would hazard a guess that many of the supercomputers running our Google searches, calculating best-fit molecular models, etc., have enough processing power to simulate a fish that behaves exactly like other fish. If one wished, one could model these as agents with preference functions. But it doesn’t mean anything to “torture” a Google-search algorithm, whereas it does mean something to torture a fish, or to torture a simulation of a fish.
You could model something as simple as a light switch as an agent with a preference function but it would be a waste of time. In the case of an algorithm which finds solutions in a search space it is actually useful to model it as an agent who prefers to maximize some elements of a solution, as this allows you to predict its behavior without knowing details of how it works. But, just like the light switch, just because you are modelling it as an agent doesn’t mean you have to respect its preferences.
A “rational agent” explores the search space of possible actions it can take, and chooses the actions which maximize its preferences—the “correct solution” is when all preferences are maximized. An agent is fully rational if it made the best possible choice given the data at hand. There are no fully rational agents, but it’s useful to model things which act approximately in this way as agents.
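A minimal sketch of what it means to model something as an agent in this sense (the thermostat example and its numbers are just illustrative placeholders):

```python
# Treat a system as an agent by giving it a preference (objective) function
# and assuming it picks the preference-maximizing action. This works for a
# thermostat as well as a search engine; nothing about the model itself
# implies moral weight.

def as_agent(preference, actions):
    """Model a system as choosing the action that maximizes its 'preference'."""
    return max(actions, key=preference)

# A thermostat "prefers" temperatures near its 20-degree setpoint:
thermostat_pref = lambda action: -abs(action["resulting_temp"] - 20)
actions = [{"name": "heat on", "resulting_temp": 21},
           {"name": "heat off", "resulting_temp": 15}]
print(as_agent(thermostat_pref, actions)["name"])  # heat on
# Useful for prediction, but (as argued above) modeling something this way
# says nothing about whether its preferences deserve respect.
```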
Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best-fit model, best search). They have “preferences”, but not morally relevant ones.
A human (or, hopefully one day a friendly AI) seeks to fulfill an extremely complex set of preferences...as does a fish. They have preferences which carry moral weight.
It’s not specific receptors or any particular algorithm that captures what is morally relevant to me about other agents’ preferences. If you took a human and replaced its brain with a search algorithm which found the motor output solutions which maximized the original human’s preferences, I’d consider this search algorithm to fit the definition of a person (though not necessarily the same person). I’d respect the search algorithm’s preferences the same way I respected the preferences of the human it replaced. This new sort of person might instrumentally prefer not having its arms chopped off, or terminally prefer that you not read its diary, but it might not show any signs of pain when you did these things unless showing signs of pain was instrumentally valuable. Violation of this being’s preferences may or may not be called “suffering” depending on how you define “suffering”...but either way, I think this being’s preferences are just as morally relevant as a human’s.
So the question I would turn back to you is...under what conditions could a paperclipper suffer? Do all paperclippers suffer? What does this mean for other sorts of solution-maximizing algorithms, like search engines and molecular modelers?
My case is essentially that the morally relevant component, with regard to whether or not we should respect an agent’s preferences, lies in the content of its preference function. The specific nature of the algorithm it uses to carry this preference function out—like whether it involves pain receptors or something—is not morally relevant.
Just as a data-point about intuition frequency, I found your intuitions about “a search algorithm which found the motor output solutions which maximized the original human’s preferences” to be very surprising.
Do you mean that the idea itself is weird and surprising to consider?
Or do you mean that my intuition that this search algorithm fits the definition of a “person” and is imbued with moral weight is surprising and does not match your moral intuition?
Thanks for the well-thought-out comment. It helps me think through the issue of suffering a lot more.
~
If you took a human and replaced its brain with a search algorithm which found the motor output solutions which maximized the original human’s preferences, I’d consider this search algorithm to fit the definition of a person (though not necessarily the same person). [...] Violation of this being’s preferences may or may not be called “suffering” depending on how you define “suffering”...but either way, I think this being’s preferences are just as morally relevant as a human’s. [...]
The question is, regardless of the label, what is the underlying morally relevant feature?
I think this is a good thought experiment, and it does push me more toward preference-satisfaction theories of well-being, which I have long been sympathetic to. I still haven’t worked out my own view of what counts as suffering. I’d like to read and think more on the issue—I have bookmarked some of Brian Tomasik’s essays to read (he’s become more preference-focused recently) as well as an interview with Peter Singer where he explains why he’s abandoned preference utilitarianism for something else. So I’m not sure I can answer your question yet.
There are interesting problems with preference-based views that we would have to deal with as well, such as formalization (what is a desire, and what makes one desire stronger or weaker?), population ethics (do we care about creating new beings with preferences?), and others.
~
Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best-fit model, best search results). They have “preferences”, but not morally relevant ones. A human (or, hopefully, one day a friendly AI) seeks to fulfill an extremely complex set of preferences...as does a fish. They have preferences which carry moral weight.
So it seems like, to you, an entity’s welfare matters when it has preferences, weighted based on the complexity of those preferences, with a certain zero threshold somewhere (so thermostat preferences don’t count).
I don’t think complexity is the key driver for me, but I can’t tell you what is.
~
I haven’t seen any behavioral evidence of fish doing problem solving, being empathetic towards each other, exhibiting cognitive capacities beyond very basic associative learning & memory, or that sort of thing.
Likewise, I don’t think this is much of a concern for me, and it seems inconsistent with the rest of what you’ve been saying.
Why are problem solving and empathy important? Surely I could imagine a non-empathetic program without the ability to solve most problems, that still has the kind of robust preferences you’ve been talking about.
And what level of empathy and problem solving are you looking for? Notably, fish engage in cleaning symbiosis (which seems to be in the lower-tier of the empathy skill tree) and Wikipedia seems to indicate (though perhaps unreliably) that fish have pretty good learning capabilities.
~
I don’t think so, but I might be wrong...Is risk aversion in the face of uncertainty actually rational in this scenario? Seems to me that there are certain scenarios where risk aversion makes sense (personal finance, for example) and scenarios where it doesn’t (effective altruism, for example) and this decision seems to fall in the latter camp.
an entity’s welfare matters when it has preferences, weighted based on the complexity of those preferences
No, it’s not complexity but the content of the preferences that makes a difference. Sorry for mentioning the complexity—I didn’t mean to imply that it was the morally relevant feature.
I’m not yet sure what sort of preferences give an agent morally weighty status...the only thing I’m pretty sure about is that the morally relevant component is contained somewhere within the preferences, with intelligence as a possible mediating or enabling factor.
Here’s one pattern I think I’ve identified:
I belong within Reference Class X.
All beings in Reference Class X care about other beings in Reference Class X, when you extrapolate their volition.
When I hear about altruistic mice, it is evidence that the mouse’s extrapolated volition would cause it to care about Class X beings’ preferences to the extent that it can comprehend them. The cross-species altruism of dogs and dolphins and elephants is an especially strong indicator of Class X membership.
On the other hand, the within-colony altruism of bees (basically identical to Reference Class X except it only applies to members of the colony and I do not belong in it), or the swarms and symbiosis of fishes or bacterial gut flora, wouldn’t count...being in Reference Class X is clearly not the factor behind the altruism in those cases.
...which sounds awfully like reciprocal altruism in practice, doesn’t it? Except that, rather than looking at the actual act of reciprocation of altruism, I’d be extrapolating the agent’s preferences for altruism. Perhaps Class X would be better named “Friendly”, in the “Friendly AI” sense—all beings within the class are to some extent Friendly towards each other.
This is at the rough edge of my thinking though—the ideas as just stated are experimental and I don’t have well defined notions about which preferences matter yet.
Edit: Another (very poorly thought out) trend which seems to emerge is that agents which have a certain sort of awareness are entitled to a sort of bodily autonomy … because it seems immoral to sit around torturing insects if one has no instrumental reason to do so. (But is it immoral in the sense that there are a certain number of insects which morally outweigh a human? Or is it immoral in a virtue ethic-y, “this behavior signals sadism” sort of way?)
My main point is that I’m mildly guessing that it’s probably safe to narrow the problem down to some combination of preference functions and level of awareness. In any case, I’m almost certain that there exist preference functions that are sufficient (but maybe not necessary?) to confer moral weight onto an agent...and though there may be other factors unrelated to preference or intelligence that play a role, the preference function is the only thing with a concrete definition that I’ve identified so far.
...which sounds awfully like reciprocal altruism in practice, doesn’t it? Except that, rather than looking at the actual act of reciprocation of altruism, I’d be extrapolating the agent’s preferences for altruism. Perhaps Class X would be better named “Friendly”, in the “Friendly AI” sense—all beings within the class are to some extent Friendly towards each other.
Just so I understand you better, how would you compare and contrast this kind of pro-X “kin” altruism with utilitarianism?
Utilitarianism has never made much sense to me except as a handy way to talk about things abstractly when precision isn’t important
...but I suppose X would be a class of agents who consider each other’s preferences when they make utilitarian calculations? I pretty much came up with the pro-X idea less than a month ago, and haven’t thought it through very carefully.
Oh, here’s a good example which illustrates where preference utilitarianism fails:
10^100 intelligent people terminally prefer that 1 person is tortured. Preference utilitarianism says “do the torture”. My moral instinct says “no, it’s still wrong, no matter how many people prefer it”.
Perhaps under the pro-X system, we can ignore the preferences of the 10^100 people because the preference they have expressed lies strictly outside category X?
Whereas, if you have a Friendly Paperclipper (cares about X-agents and paperclips with some weight on each), the Friendly moral values put it within X...which means that we should now be willing to cater to its morally neutral paper-clip preferences as well.
(If this reads sloppy, it’s because my thoughts on the matter currently are sloppy)
So...I guess there’s sort of a taxonomy of moral-good, neutral-selfish, and evil preferences...and part of being good means caring about other people’s selfish preferences? And part of being evil means valuing the violation of others’ preferences? And good agents can simply ignore evil preferences.
And (under the pro-X system), good agents can also ignore the preferences of agents that aren’t in any way good...which seems like it might not be correct, which is why I say that there might be other factors in addition to pro-X that make an agent worth caring about for my moral instincts, but if they exist I don’t know what they are.
Are you perhaps confusing ‘morally wrong’ with ‘a sucky tradeoff that I would prefer not to be bound by’?
Just because torturing one person sucks, just because we find it abhorrent, does not mean that it isn’t the best outcome in various situations. If your definition of ‘moral’ is “best outcome when all things are considered, even though aspects of it suck a lot and are far from ideal”, then yes, torturing someone can in fact be moral. If your definition of ‘moral’ is “those things which I find reprehensible”, then quite probably you can never find torturing someone to be moral. However, there are scenarios where it may still be necessary, or the best option.
Are you perhaps confusing ‘morally wrong’ with ‘a sucky tradeoff that I would prefer not to be bound by’?
Nope...because...
quite probably you can never find torturing someone to be moral. However, there are scenarios where it may still be necessary, or the best option.
...because I believe that torturing someone could still instrumentally be the right thing to do on consequentialist grounds.
In this scenario, 10^100 people terminally value torturing one person, but I do not care about their preferences, because it is an evil preference.
However, in an alternate scenario, if I had to choose between 10^100 people getting mildly hurt or 1 person getting tortured, I’d choose the one person getting tortured.
In these two scenarios, the preference weights are identical, but in the first scenario the preference of the 10^100 people is evil and therefore irrelevant in my calculations, whereas in the second scenario the needs of 10^100 outweigh the needs of the one.
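Here’s a crude sketch of the kind of calculation I have in mind; the aggregation rule, the weights, and the “evil” filter are placeholders for my fuzzy intuitions, not a worked-out theory:

    # Crude sketch of the two scenarios above. The weights and the 'evil' filter
    # are made-up placeholders, not a worked-out moral theory.

    def aggregate(preferences):
        # Each preference: (number_of_agents, strength, is_evil).
        # Evil preferences get zero weight; the rest add up linearly.
        return sum(n * strength for (n, strength, is_evil) in preferences if not is_evil)

    # Scenario 1: 10^100 people terminally prefer that one person be tortured.
    torture_the_one = aggregate([(10**100, 1, True)])   # evil, so it counts for nothing
    spare_the_one   = aggregate([(1, 1000, False)])     # the victim's preference
    # torture_the_one == 0 < spare_the_one, so: don't torture.

    # Scenario 2: 10^100 people mildly hurt vs. one person tortured.
    spare_the_many = aggregate([(10**100, 1, False)])   # not evil, so it counts
    spare_the_one  = aggregate([(1, 1000, False)])
    # spare_the_many dwarfs spare_the_one, so: the needs of the many win.

In both scenarios the raw numbers are identical; the only difference is whether the preference of the 10^100 passes the “not evil” filter.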
This is less a discussion about torture, and more a discussion about whose/which preferences matter. Sadistic preferences (involving real harm, not the consensual kink), for example, don’t matter morally—there’s no moral imperative to fulfill those preferences, no “good” done when those preferences are fulfilled and no “evil” resulting from thwarting those preferences.
I think you should temporarily taboo ‘moral’, ‘morality’, and ‘evil’, and simply look at the utility calculations. 10^100 people terminally value something that you ascribe zero or negative value to; therefore, their preferences either do not matter to you or actively make your universe worse from the standpoint of your utility function.
Which preferences matter? Yours matter to you, and theirs matter to them. There’s no ‘good’ or ‘evil’ in any absolute sense, merely different utility functions that happen to conflict. There’s no utility function which is ‘correct’, except by some arbitrary metric, of which there are many.
Consider another hypothetical utility function: The needs of the 10^100 don’t outweigh the needs of the one, so we let the entire 10^100 suffer when we could eliminate it by inconveniencing one single entity. Neither you nor the 10^100 are happy with this one, but the person about to be tortured may think it’s just fine and dandy...
...I don’t denotatively disagree with anything you’ve said, but I also think you’re sort of missing the point and forgetting the context of the conversation as it was in the preceding comments.
We all have preferences, but we do not always know what our own preferences are. A subset of our preferences (generally those which do not directly reference ourselves) are termed “moral preferences”. The preceding discussion between me and Peter Hurford is an attempt to figure out what our preferences are.
In the above conversation, words like “matter”, “should” and “moral” are understood to mean “the shared preferences of Ishaan, Dentin, and Peter_Hurford which they agree to define as moral”. Since we are all human (and similar in many other ways beyond that), we probably have very similar moral preferences...so any disagreement that arises between us is usually due to one or both of us inaccurately understanding our own preferences.
There’s no ‘good’ or ‘evil’ in any absolute sense
This is technically true, but it’s also often a semantic stopsign which derails discussions of morality. The fact is that the three of us humans have a very similar notion of “good”, and can speak meaningfully about what it is...the implicitly understood background truths of moral nihilism notwithstanding.
It doesn’t do to exclaim “but wait! good and evil are relative!” during every moral discussion...because here, between us three humans, our moral preferences are pretty much in agreement and we’d all be well served by figuring out exactly what those preferences are. It’s not like we’re negotiating morality with aliens.
Which preferences matter? Yours matter to you
Precisely...my preferences are all that matter to me, and our preferences are all that matter to us. So if 10^100 sadistic aliens want to torture...so what? We don’t care if they like torture, because we dislike torture and our preferences are all that matter. Who cares about overall utility? “Morality”, for all practical purposes, means shared human morality...or, at least, the shared morality of the humans who are having the discussion.
“Utility” is kind of like “paperclips”...yes, I understand that in the best case scenario it might be possible to create some sort of construct which measures how much “utility” various agent-like objects get from various real world outcomes, but maximizing utility for all agents within this framework is not necessarily my goal...just like maximizing paperclips is not my goal.
For the purposes of this conversation at least. I’ve largely got them taboo’d in general because I find them confusing and full of political connotations; I suspect at least some of that is the problem here as well.
10^100 intelligent people terminally prefer that 1 person is tortured. Preference utilitarianism says “do the torture”. My moral instinct says “no, it’s still wrong, no matter how many people prefer it”.
Yet your moral instinct is perfectly fine with having a justice system that puts innocent people in jail with a greater than 1 in 10^100 error rate.
Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best-fit model, best search results). They have “preferences”, but not morally relevant ones.
Usually people speak of preferences when there is a possibility of choice—the agent can meaningfully choose between doing A and doing B.
This is not the case with respect to molecular models, search engines, and light switches.
At least for search engines, I would say there exists a meaningful level of description at which it can be said that the search engine chooses which results to display in response to a query, approximately maximizing some kind of scoring function.
there exists a meaningful level of description at which it can be said that the search engine chooses which results to display in response to a query
I don’t think it is meaningful in the current context. The search engine is not an autonomous agent and doesn’t choose anything any more than, say, the following bit of pseudocode: if (rnd() > 0.5) { print “Ha!” } else { print “Ooops!” }
“If you search for ‘potatoes’ the engine could choose to return results for ‘tomatoes’ instead...but will choose to return results for potatoes because it (roughly speaking) wants to maximize the usefulness of the search results.”
“If I give you a dollar you could choose to tear it to shreds, but you instead will choose to put it in your wallet because (roughly speaking) you want to xyz...”
When you flip the light switch “on” it could choose not to allow current through the system, but it will let current flow because it wants current to flow when it is in the “on” position.
Except for degree of complexity, what’s the difference? “Choice” can be applied to anything modeled as an Agent.
When you flip the light switch “on” it could choose not to allow current through the system, but it will let current flow because it wants current to flow when it is in the “on” position.
Sorry, I read this as nonsense. What does it mean for a light switch to “want”?
To determine the “preferences” of objects which you are modeling as agents, see what occurs, and construct a preference function that explains those occurrences.
Example: This amoeba appears to be engaging in a diverse array of activities which I do not understand at all, but they all end up resulting in the maintenance of its physical body. I will therefore model it as “preferring not to die”, and use that model to make predictions about how the amoeba will respond to various situations.
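As a cartoon sketch of that procedure (the amoeba “model” and its options are obviously made up for illustration):

    # Cartoon sketch: use an inferred preference function to predict behavior
    # without understanding the mechanism. The amoeba 'model' is made up.

    def predict(options, preference):
        # Predict that the system ends up doing whatever it 'prefers' most.
        return max(options, key=preference)

    # Observed: whatever the amoeba does, it ends up maintaining its physical body.
    # So model it as 'preferring not to die', and use that to predict new situations.
    def amoeba_preference(outcome):
        return 1 if outcome["alive"] else 0

    predict(
        [{"action": "swim toward food", "alive": True},
         {"action": "swim toward toxin", "alive": False}],
        amoeba_preference,
    )  # -> predicts the food-seeking behavior

The preference function is just a compressed summary of the observed pattern; it says nothing about how the amoeba actually works.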
I think the light switch example is far-fetched, but the search engine isn’t. The point is whether there exists a meaningful level of description at which framing the system’s behavior in terms of making choices to satisfy certain preferences is informative.
The distinction you are making between the input-output function of a human as a “choice” vs. the input-output of a machine as “not-a-choice” sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question...but you’re a frequent poster here, so perhaps I’ve misunderstood your meaning. Are you using a specialized definition of the word “choice”?
I have no wish for this to develop into a debate about free will. Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.
As a practical matter, speaking about choices of light switches seems silly. Given this, I don’t see why speaking about choices of search engines is not silly. It might be useful conversational shorthand in some contexts, but I don’t think it is useful in the context of talking about morality.
Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.
Ah, ok—sorry. The materialist, dissolved view of free-will-related questions has been a strongly held view of mine since a very young age, so my prior for a person who is aware of these arguments yet subscribes to what I’ll call the “naive view”, for lack of a better word, is very low.
It’s not really the particulars of the Sequences here which are in question—the people who say free will doesn’t exist, the people who say it does but redefine free will in funny ways, the panpsychists, the compatibilists and the incompatibilists, all share a non-dualist view which does not allow them to label the search engine’s processes and the human’s processes as fundamentally, qualitatively different. This is a deep philosophical divide that has been debated for, as far as I am aware, at least two thousand years.
As a practical matter, speaking about choices of light switches seems silly. Given this, I don’t see why speaking about choices of search engines is not silly.
By analogy, speaking of choices of humans seems silly, since humans are made of the same basic laws.
The fundamental disagreement here runs rather deeply—it’s not going to be possible to talk about this without diving into free will.
If I understood the causal mechanisms underlying the actions of humans as well as I do those underlying lightswitches, talking about the former as “choices” would seem as silly to me as talking that way about the latter does.
But I don’t, so it doesn’t.
I assume you don’t understand the causal mechanisms underlying the actions of humans either. So why does talking about them as “choices” seem silly to you?
I agree with you. Whether we model something as an agent or an object is a feature of our map, not the territory. It’s not useful to model light switches as agents because they are too simple, and looking at them through the lens of preferences is not simple or informative. Meanwhile, it is useful to model humans partially as preference maximizing agents to make approximations.
However, in the context of the larger discussion, I interpret Lumifer as treating the distinction between “choice” and “event” as a feature of the territory itself, and positing a fundamental qualitative difference between a “choice” and other sorts of events. My reply should be seen as an assertion that such qualitative differences are features of the map, not of the territory—if it’s impossible to model a light switch as having choices, then it’s also impossible to model a human as having choices. (My actual belief is that it’s possible to model both as having choices or not having them.)
Is your actual belief that there are equivalent grounds for modeling both either way?
If so, I disagree… from my own perspective, modeling people as preference-maximizing agents is significantly more justified (due to differences in the territory) than modeling a light switch that way.
If not, to what do you attribute the differential?
Is your actual belief that there are equivalent grounds for modeling both either way?
...it is possible to model things either way, but it is more useful for some objects than others.
It’s not useful to model light switches as agents because they are too simple, and looking at them through the lens of preferences is not simple or informative. Meanwhile, it is useful to model humans partially as preference maximizing agents to make approximations.
Modeling an object as an agent is useful when the object exhibits a pattern of behavior which is roughly consistent with preference maximizing. A search engine is well modeled as an agent. A human is very well modeled as an agent.
A light switch is very poorly modeled as an agent. Thinking of it in terms of preference pattern doesn’t make it any easier to predict its behavior. But you can model it as an agent, if you’d like.
I am willing to adopt “useful” in place of “justified” if it makes this conversation easier. In which case my question could be rephrased “Is it equally useful to model both either way?”
To which your answer seems to be no… it’s more useful to model a human as an agent than it is a light-switch. (I’m inferring, because despite introducing the “useful” language, what you actually say instead introduces the language of something being “well-modeled.” But I’m assuming that by “well-modeled” you mean “useful.”)
And your answer to the followup question is because the pattern of behavior of a light switch is different from that of a search engine or a human, such that adopting an intentional stance towards the former doesn’t make it easier to predict.
Yup. Modeling something as a preference-maximizing agent is generally useful for things which systematically behave in ways that maximize certain outcomes in a diverse array of situations. It allows you to make accurate predictions even when you don’t fully understand the mechanics generating the behavior you are predicting.
(I distinguished useful and justified because I wasn’t sure if “justified” had moral connotations in your usage)
Edit: On reading the wiki, I tend to agree with the views that the wiki attributes to Dennett. Thanks for the reference and the word “intentional stance”.
OK. So, having clarified that, I return to your initial comment:
The distinction you are making between the input-output function of a human as a “choice” vs. the input-output of a machine as “not-a-choice” sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question
...and am as puzzled by it as I was in the first place.
You agree that the input-output function of a human differs from the input-output of a machine like a light switch in ways that make it more useful to model the former but not the latter as maximizing preferences. (To adopt the intentional stance towards the former and the design stance towards the latter, in Dennett’s terminology.)
So, given that, what is your objection to Lumifer’s distinction? “Choice” seems like a perfectly reasonable word to use when taking an intentional stance, and to not use when taking a design stance.
When I asked earlier, you explained that your objection had to do with attributing “territory-level” differences to humans and machines, when it’s really a “map-level” objection… that it’s possible to talk about a light-switch’s choices, or not talk about a human’s choices, so it’s not really a difference in the system at all, just a difference in the speaker.
But given that you agree that there’s a salient “territory-level” difference between the two systems (specifically, the differences which make the intentional stance more useful than the design stance wrt humans, but not wrt light-switches), I don’t quite get the objection. Sure, it’s possible to take either stance towards either system, but it’s more useful to take the intentional stance towards humans, and that’s a “fact about the territory.”
Because in the preceding comment, I was demonstrating that we should not morally care about light switches, search engines, and paperclippers...whereas we should morally care about fishes, dogs, and humans… because of differences in the preference profiles of these beings when they are modeled as agents.
Peter Hurford disagreed with me on the non-moral status of the paper-clipper. I was demonstrating the non-moral status of a being which cared only for paper clips by analogy to a search engine (a being which only cares about bringing up the best search result).
Whereas what Lumifer was saying is that the very premise that a search engine could have choices was fundamentally flawed (which, if true, would cause the whole analogy to break down).
The thing is, it’s not fundamentally flawed to think of a search engine as having choices. Sure, search engines are a little less usefully modeled as agent-like when compared to humans, but it’s just a matter of degree.
the input-output function of a human as a “choice” vs. the input-output of a machine as “not-a-choice”
I was objecting to his hard, qualitative binary, not your and Dennett’s soft, quantitative spectrum.
This seems plausible to me from the perspective of “these animals likely are less capable of suffering”, but I think you’re missing two things in your analysis: (1) the degree of suffering required to create the food, which varies between species, and (2) the amount of food provided by each animal.
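To make (1) and (2) concrete, the comparison has roughly this shape. All numbers below are made up purely for illustration; only the structure of the calculation matters:

    # Illustrative only: every number here is invented to show the shape of the
    # calculation, not to estimate real suffering or real calorie yields.

    def suffering_per_calorie(days_lived, suffering_per_day, calories_provided):
        return (days_lived * suffering_per_day) / calories_provided

    # Hypothetical small animal: short life, but very little food per individual.
    small_animal = suffering_per_calorie(days_lived=42, suffering_per_day=1.0,
                                         calories_provided=3000)

    # Hypothetical large animal: longer life, but far more food per individual.
    large_animal = suffering_per_calorie(days_lived=500, suffering_per_day=1.0,
                                         calories_provided=400000)

    # Here small_animal works out to roughly ten times large_animal, which is why
    # (1) and (2) can change the ranking between species even if per-day suffering
    # were equal.

So “less capable of suffering” per animal doesn’t settle the per-meal comparison on its own.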
Additionally, when there is a weight of evidence suggesting that nutrient-equivalent food sources can be produced in a more energy-efficient manner and with no direct suffering to animals (indirect suffering being, for example, the unavoidable death of insects in crop harvesting), I believe it is a rational choice to move towards those methods.
Your points (1) and (2) seem like fully general counterarguments against any activity at all, other than the single most effective activity at any given time. I do agree with you that future suffering could potentially greatly outweigh present suffering, and I think it’s very important to try to prevent future suffering of non-human animals. However, it seems that one of the best ways to do that is to encourage others to care more for the welfare of non-human animals, i.e. become veg*ans.
Perhaps more importantly, it makes sense from a psychological perspective to become a veg*an if you care about non-human animals. It seems that if I ate meat, cognitive dissonance would make it much harder for me to make an effort to prevent non-human suffering on a broader scale.
(4): Although I see no way to falsify this belief, I also don’t see any reason to believe that it’s true. Furthermore, it runs counter to my intuitions. Are profoundly mentally disabled humans incapable of “true” suffering?
(5): Humans and non-human animals evolved through the same processes, so it strikes me as highly implausible that humans would be capable of suffering while all non-humans would lack this capacity.
I don’t engage in the vast majority of possible activities. Neither do you, so on net, the class of arguments you accept must militate against almost all activities, right?
Why did you type that comment? Did you consider the arguments for typing that comment as fully general counterarguments against all the other possible comments you could have made? If not, why not post them too?
I’m not sure I understand what you’re trying to say. It sounds like you’re saying that we make decisions without considering all possible arguments for and against them, in which case I’m not sure what you’re saying with regard to my original comment.
To construct the comment that you just replied to, I considered various possible questions that I roughly rated by how effectively they would help me to understand what you’re saying, and limited my search due to time constraints. The arguments for posting that comment work as counterarguments against posting any other comment I considered, e.g. it was the best comment I considered. It’s not the best possible comment, but it would be a waste of time to search the entirety of comment-space to find the optimal comment.
Your points (1) and (2) seem like fully general counterarguments against any activity at all, other than the single most effective activity at any given time.
That’s more or less what I intended them to be. Isn’t doing only the most effective activities available to you… a good idea?
However, I’d phrase the argument in terms of degrees: Activities are good to the extent they conduce to your making better decisions for the future, bad to the extent they conduce to your making worse decisions for the future. So doing the dishes might be OK even if it’s not the Single Best Thing You Could Possibly Be Doing Right Now, provided it indirectly helps you do better things than you otherwise would. Some suboptimal things are more suboptimal than others.
However, it seems that one of the best ways to do that is to encourage others to care more for the welfare of non-human animals
Maybe? If you could give such an argument, though, it would show that my argument isn’t a fully general counterargument—vegetarianism would be an exception, precisely because it would be the optimal decision.
it makes sense from a psychological perspective to become a veg*an if you care about non-human animals.
Right. I think the disagreement is about the ethical character of vegetarianism, not about whether it’s a psychologically or aesthetically appealing life-decision (to some people). It’s possible to care about the wrong things, and it’s possible to assign moral weight to things that don’t deserve it. Ghosts, blastocysts, broccoli stalks, abstract objects....
Although I see no way to falsify this belief, I also don’t see any reason to believe that it’s true.
To assess (4) I think we’d need to look at the broader ethical and neurological theories that entail it, and assess the evidence for and against them. This is a big project. Personally, my uncertainty about the moral character of non-sapients is very large, though I think I lean in your direction. (Actually, my uncertainty and confusion about most things sapience- and sentience- related are very large.)
That’s more or less what I intended them to be. Isn’t doing only the most effective activities available to you… a good idea?
Within practical limits. It’s not effective altruism if you drive yourself crazy trying to hold yourself to unattainable standards and burn yourself out.
Practical limits are built into ‘effective’. The most effective activity for you to engage in is the most effective activity for you to engage in, not for a perfectly rational arbitrarily computationally powerful god to engage in. Going easy on yourself, to the optimal degree, is (for creatures like us) part of behaving optimally at all. If your choice (foreseeably) burns you out, and the burnout isn’t worth the gain, your choice was just wrong.
That’s more or less what I intended them to be. Isn’t doing only the most effective activities available to you… a good idea?
However, I’d phrase the argument in terms of degrees: Activities are good to the extent they conduce to your making better decisions for the future, bad to the extent they conduce to your making worse decisions for the future. So doing the dishes might be OK even if it’s not the Single Best Thing You Could Possibly Be Doing Right Now, provided it indirectly helps you do better things than you otherwise would. Some suboptimal things are more suboptimal than others.
Wouldn’t you agree that veganism is less suboptimal than say entertainment? I’m assuming you’re okay with people playing video games, going to the movies etc. even if those activities don’t accomplish any long term altruistic goals. So I don’t know what your issue with veganism is.
Wouldn’t you agree that veganism is less suboptimal than say entertainment?
Depends. For a lot of people, some measure of entertainment helps recharge their batteries and do better work, much more so than veganism probably would. I’ll agree that excessive recreational time is a much bigger waste (for otherwise productive individuals) than veganism. I’m not singling veganism out here; it just happens to be the topic of discussion for this thread. If veganism recharges altruists’ batteries in a way similar to small amounts of recreation, and nothing better could do the job in either case, then veganism is justifiable for the same reason small amounts of recreation is.
For a lot of people, some measure of entertainment helps recharge their batteries and do better work
I suspect that most people engage in much more entertainment than is necessary for recharging their batteries to do more work. I hope you don’t think that entertainment and recreation are justifiable only because they allow us to work.
and nothing better could do the job in either case
This sounds like a fully general counterargument against doing almost anything at all.
I suspect that most people engage in much more entertainment than is necessary for recharging their batteries to do more work.
Yes. I would interpret that as meaning that people spend too much time having small amounts of fun, rather than securing much larger amounts of fun for their descendants.
I hope you don’t think that entertainment and recreation are justifiable only because they allow us to work.
No, fun is intrinsically good. But it’s not so hugely intrinsically good that this good can outweigh large opportunity costs. And our ability to impact the future is large enough that small distractions, especially affecting people with a lot of power to change the world, can have big costs. I’m with Peter Singer on this one; buying a fancy suit is justifiable if it helps you save starving Kenyans, but if it comes at the expense of starving Kenyans then you’re responsible for taking that counterfactual money from them. And time, of course, is money too.
(I’m not sure this is a useful way for altruists to think about their moral obligations. It might be too stressful. But at this point I’m just discussing the obligations themselves, not the ideal heuristics for fulfilling them.)
This sounds like a fully general counterargument against doing almost anything at all.
It is, as long as you keep in mind that for every degree of utility there’s an independent argument favoring that degree over the one right below it. So it’s a fully general argument schema: ‘For any two incompatible options X and Y, if utility(X) > utility(Y), don’t choose Y if you could instead choose X.’ This makes it clear that the best option is preferable to all suboptimal options, even though somewhat suboptimal things are a lot better than direly suboptimal ones.
In that case, why are you spending time arguing against vegetarianism, instead of spending time arguing against behaviors that waste even more time and resources?
That’s more or less what I intended them to be. Isn’t doing only the most effective activities available to you… a good idea?
I felt like it was a bit unfair for you to use fully general counterarguments against veganism in particular. However, after your most recent reply, I can better see where you’re coming from. I think a better message to take from this essay (although I’m not sure Peter would agree) is that people in general should eat less meat, not necessarily you in particular. If you can get one other person to become a vegan in lieu of becoming one yourself, that’s just as good.
I think the disagreement is about the ethical character of vegetarianism, not about whether it’s a psychologically or aesthetically appealing life-decision (to some people).
If non-vegans are less effective at reducing suffering than vegans due to a quirk of human psychology (i.e. cognitive dissonance preventing them from caring sufficiently about non-humans), then this becomes an ethical issue and not just a psychological one.
To assess (4) I think we’d need to look at the broader ethical and neurological theories that entail it, and assess the evidence for and against them. This is a big project.
I agree with you here. I feel sufficiently confident that animal suffering matters, but the empirical evidence here is rather weak.
That’s some excellent steelmanning. I would also add that creating animals for food with lives barely worth living is better than not creating them at all, from a utilitarian (if repugnant) point of view. And it’s not clear whether a farm chicken’s life is below that threshold.
I think it’s fairly clear that a farm chicken’s life is well below that threshold. If I had the choice between losing consciousness for an hour or spending an hour as a chicken on a factory farm, I would definitely choose the former.
Ninja Edit: I think a lot of people have poor intuitions when comparing life to non-life because our brains are wired to strongly shy away from non-life. That’s why the example I gave above used temporary loss of consciousness rather than death. Even if you don’t buy the above example, I think it’s possible to see that factory-farmed life is worse than death. This article discussed how doctors—the people most familiar with medical treatment—frequently choose to die sooner rather than attempt to prolong their lives when they know they will suffer greatly in their last days. It seems that life on a factory farm would entail much more suffering than death by a common illness.
I don’t see why a chicken would choose any differently. We have no reason to believe that chicken-suffering is categorically different from human-suffering.
If we were to put a bunch of chickens into a room, and on one side of the room was a wolf, and the other side had factory farming cages that protected the chickens from the wolf, I would expect the chickens to run into the cages.
It’s true that chickens can comprehend a wolf much better than they can comprehend factory farming, but I’m not quite sure how that affects this thought experiment.
Even if this is correct, in terms of value spreading it seems to be a very problematic message to convey. Most people are deontologists and would never even consider accepting this argument for human infants, so if we implicitly or explicitly accept it for animals, then this is just going to reinforce the prejudice that some forms of suffering are less important simply because they are not experienced by humans/our species. And such a defect in our value system may potentially have much more drastic consequences than the opportunity costs of not getting some extra life-years that are slightly worth living.
Then there is also an objection from moral uncertainty: if the animals in farms, and especially factory farms (where most animals raised for food are held), are above the “worth living” threshold, then only barely! It’s not like much is at stake (the situation would be different if we’d wirehead them to experience constant orgasm). Conversely, if you’re wrong about classical utilitarianism being your terminal value, then all the suffering inflicted on them would be highly significant.
I find the argument quite unconvincing; Hanson seems to be making the mistake of conflating “life worth living” with “not committing suicide” that is well addressed in MTGandP’s reply (and grandchildren).
This is a good point, and was raised below. Note that the argument doesn’t seem to be factually true, independent of moral considerations. (You don’t actually create more lives by eating meat.)
Regarding (4) (and to a certain extent 3 and 5): I assume you agree that a species feels phenomenal pain just in case it proves evolutionarily beneficial. So why would it improve fitness to feel pain only if you have “abstract thought”?
The major reason I have heard for phenomenal pain is learning, and all vertebrates show long-term behavior modification as the result of painful stimuli, as anyone who has taken a pet to the vet can verify. (Notably, many invertebrates do not show long-term modification, suggesting that vertebrate vs. invertebrate may be a non-trivial distinction.)
Richard Dawkins has even suggested that phenomenal pain is inversely related to things like “abstract thought”, although I’m not sure I would go that far.
Actually, I’m an eliminativist about phenomenal states. I wouldn’t be completely surprised to learn that the illusion of phenomenal states is restricted to humans, but I don’t think that this illusion is necessary for one to be a moral patient. Suppose we encountered an alien species whose computational substrate and architecture was so exotic that we couldn’t rightly call anything it experienced ‘pain’. Nonetheless it might experience something suitably pain-like, in its coarse-grained functional roles, that we would be monsters to start torturing members of this species willy-nilly.
My views about non-human animals are similar. I suspect their psychological states are so exotic that we would never recognize them as pain, joy, sorrow, surprise, etc. (I’d guess this is more true for the positive states than the negative ones?) if we merely glimpsed their inner lives directly. But the similarity is nonetheless sufficient for our taking their alien mental lives seriously, at least in some cases.
So, I suspect that phenomenal pain as we know it is strongly tied to the evolution of abstract thought, complex self-models, and complex models of other minds. But I’m open to non-humans having experiences that aren’t technically pain but that are pain-like enough to count for moral purposes.
RobbBB, in what sense can phenomenal agony be an “illusion”? If your pain becomes so bad that abstract thought is impossible, does your agony—or the “illusion of agony”—somehow stop? The same genes, same neurotransmitters, same anatomical pathways and same behavioural responses to noxious stimuli are found in humans and the nonhuman animals in our factory-farms. A reasonable (but unproven) inference is that factory-farmed nonhumans endure misery—or the “illusion of misery” as the eliminativist puts it—as do abused human infants and toddlers.
But I’m open to non-humans having experiences that aren’t technically pain but that are pain-like enough to count for moral purposes.
I guess maybe I just didn’t understand how you were using the term “pain”—I agree that other species will feel things differently, but being “pain-like enough to count for moral purposes” seems to be the relevant criterion here.
I think these arguments can be resisted, but they can’t just be dismissed out of hand.
You also don’t give what I think is the best argument in favor of vegetarianism, which is that vegetarianism does a better job of accounting for uncertainty in our understanding of normative ethics (does suffering matter?) and our understanding of non-human psychology (do non-humans suffer?).
How about becoming mostly vegetarian? Avoid eating meat… unless it would be really inconvenient to do so.
Depending on your specific situation, perhaps you could reduce your meat consumption by 50%, which from the utilitarian viewpoint is 50% as good as becoming a full vegetarian. And the costs are trivial.
This is what I am doing recently, and it works well for me. For example, if I have a lunch menu, by default I read the vegetarian option first, and I choose otherwise only if it is something I dislike (or if it contains sugar), which is maybe 20% of cases. The only difficult thing was to do it for the first week, then it works automatically; it is actually easier than reading the full list and deciding between similar options.
I think that would pretty much do away with the ‘it’s a minor inconvenience’ objections. However, I suspect it would also diminish most of the social and psychological benefits of vegetarianism—as willpower training, proof to yourself of your own virtue, proof to others of your virtue, etc. Still, this might be a good option for EAists to consider.
It’s worth keeping in mind that different people following this rule will end up committing to vegetarianism to very different extents, because both the level of inconvenience incurred, and the level of inconvenience that seems justifiable, will vary from person to person.
I can train my willpower on many other situations, so that’s not an issue. So it’s about the virtue, or more precisely, signalling. Well, depending on one’s mindset, one can find a “feeling of virtue” even in this. Whether the partial vegetarianism is easier to spread than full vegetarianism, I don’t know—and that is probably the most important part. But some people spreading full vegetarianism, and other people spreading partial vegetarianism where the former fail, feels like a good solution.
Good points!
1) This is indeed an important consideration, although I think for most people the inconveniences would only present themselves during the transition phase. Once you get used to it sufficiently and if you live somewhere with lots of tasty veg*an food options, it might not be a problem anymore. Also, in the social context, being a vegetarian can be a good conversation starter which one can use to steer the conversation towards whatever ethical issues one considers most important. (“I’m not just concerned about personal purity, I also want to actively prevent suffering. For instance...”)
I suspect paying others to go veg*an for you might indeed be more effective, but especially for people who serve as social role models, personal choices may be very important as well, up to the point of being dominant.
2) Yeah, but how is the AI going to care about non-human suffering if few humans (and, it seems to me, few people working on FAI) take it seriously?
3)-5) These are reasons for some probabilistic discounting, and then the question becomes whether it’s significant enough. They don’t strike me as too strong but this is worthy of discussion. Personally I never found 4. convincing at all but I’m curious as to whether people have arguments for this type of position that I’m not yet aware of.
1) I agree that being a good role model is an important consideration, especially if you’re a good spokesperson or are just generally very social. To many liberals and EA folks, vegetarianism signals ethical consistency, felt compassion, and a commitment to following through on your ideals.
I’m less convinced that vegetarianism only has opportunity costs during transition. I’m sure it becomes easier, but it might still be a significant drain, depending on your prior eating and social habits. Of course, this doesn’t matter as much if you aren’t involved in EA, or are involved in relatively low-priority EA.
(I’d add that vegetarianism might also make you a better Effective Altruist in general, via virtue-ethics-style psychological mechanisms. I think this is one of the very best arguments for vegetarianism, though it may depend on the psychology and ethical code of each individual EAist.)
2) Coherent extrapolated volition. We aren’t virtuous enough to make healthy, scalable, sustainable economic decisions, but we wish we were.
3)-5) I agree that 4) doesn’t persuade me much, but it’s very interesting, and I’d like to hear it defended in more detail with a specific psychological model of what makes humans moral patients. 3) I think is a much more serious and convincing argument; indeed, it convinces me that at least some animals with complex nervous systems and damage-avoiding behavior do not suffer. Though my confidence is low enough that I’d probably still consider it immoral to, say, needlessly torture large numbers of insects.
2) Yes, I really hope CEV is going to come out in a way that also attributes moral relevance to nonhumans. But the fact that there might not be a unique way to coherently extrapolate values and that there might be arbitrariness in choosing the starting points makes me worried. Also, it is not guaranteed that a singleton will happen through an AI implementing CEV, so it would be nice to have a humanity with decent values as a back-up.
If you’re worried that CEV won’t work, do you have an alternative hope or expectation for FAI that would depend much more on humans’ actual dietary practices?
Honestly, I find it most likely that an FAI would kill all non-human animals, because sustaining multiple species with very different needs and preferences is inefficient and/or because of meta-ethical or psychological uncertainty about the value of non-humans.
If we’re more worried that non-humans might be capable of unique forms of suffering than we are worried that non-humans might be capable of unique forms of joy and beauty, then preventing their existence makes the most sense (once humans have no need for them). That includes destroying purely wild species, and includes ones that only harm each other and are not impacted by humanity.
It doesn’t need to depend on people’s dietary habits directly. A lot of people think animals count at least somewhat, but they might be too prone to rationalizing objections and too lazy to draw any significant practical conclusions from that. However, if those people were presented with a political initiative that replaces animal products with plant-based options that are just as good/healthy/whatever, then a lot of them would hopefully vote for it. In that sense, raising awareness of the issue, even if behavioral change is slow, may already be an important improvement to the meme-pool. Whatever utility functions society as a whole or those in power eventually decide to implement, it seems that this depends at least to some extent on the values of currently existing people (and especially people with high potential for becoming influential at some time in the future). This is why I consider anti-speciesist value spreading a contender for top priority.
I actually don’t object to animals being killed, I’m just concerned about their suffering. But I suspect lots of people would object, so if it isn’t too expensive, why not just take care of those animals that already exist and let them live some happy years before they die eventually? I’m especially talking about wild animals because for some animals, life in the wild might be as bad as in factory farms. And I think species-membership is ethically irrelevant, so there is no need for conservation in my view.
I don’t want to fill the universe with animals, what would be the use of that? I’m mainly worried that people might decide to send out von Neumann probes to populate the whole universe with wildlife, or do ancestor simulations or other things that don’t take into account animal suffering. Also, there might be a link between speciesism and “substratism”, and of course I also care about all forms of conscious uploads and I wouldn’t want them to suffer either.
The thought that highly temporally variable memes might define the values for our AGI worries me a whole lot. But I can’t write the possibility off, so I agree this provides at least some reason to try to change the memetic landscape.
Ditto. It might be that killing in general is OK if it doesn’t cause anyone suffering. Or, if we’re preference utilitarians, it might be that killing non-humans is OK because their preferences are generally very short-term.
One interesting (and not crazy) alternative to lab-grown meat: If we figure out (with high confidence) the neural basis of suffering, we may be able to just switch it off in factory-farmed animals.
I’m about 95% confident that’s almost never true. If factory-farmed animals didn’t seem so perpetually scared (since fear of predation is presumably the main source of novel suffering in wild animals), or if their environment more closely resembled their ancestral environment, I’d find this line of argument more persuasive.
Yeah, I see no objections to eating meat from zombie-animals (or animals that are happy but cannot suffer). Though I can imagine that people would freak out about it.
Most animals in the wild use r-selection as a reproductive strategy, so they have huge numbers of offspring, of which only about one child per parent survives and reproduces successfully (if the population remains constant). This implies that the vast majority of wild animals die shortly after birth in ways that are presumably very painful. These animals have no time for enjoyment, even if life in the wild is otherwise nice (and that’s somewhat doubtful as well). We have to discount the suffering somewhat due to the possibility that newborn animals might not be conscious at the start, but it still seems highly likely that suffering dominates for wild animals, given these considerations about the prevalence of r-selection.
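To make the arithmetic behind this concrete, here is a minimal sketch with a purely hypothetical offspring count (the exact figure varies enormously by species and is only there for illustration):

```python
# Rough arithmetic behind the r-selection point: in a stable population,
# only about one offspring per parent survives to reproduce, no matter
# how many are born.
offspring_per_female = 1000   # hypothetical, illustrative value
survivors_per_female = 1      # roughly one per parent if the population is constant

fraction_dying_young = 1 - survivors_per_female / offspring_per_female
print(f"{fraction_dying_young:.1%} of offspring die before reproducing")
# prints: 99.9% of offspring die before reproducing
```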
Yes, but we agree death itself isn’t a bad thing, and I don’t think most deaths are very painful and prolonged. A prolonged kill burns calories, so predators tend to be reasonably efficient. (Parasites less so, though not all parasitism is painful.) Force-feeding your prey isn’t unheard of, but it’s unusual.
If we’re worried about lost opportunities for short-lived animals, are we also worried about lost opportunities for counterfactual animals that easily could have existed? Also, I agree it’s bad for an organism to suffer for 100% of a very short life, but it’s not necessarily any better for it to suffer for 80% of a life that’s twice as long.
Oh, I have no doubt that suffering dominates for just about every sentient species on Earth. That’s part of why I suspect an FAI would drive nearly all species to extinction. What I doubt is that this suffering exceeds the suffering in typical factory farms. These organisms aren’t evolved to navigate environments like factory farms, so it’s less likely that they’ll have innate coping mechanisms for the horrors of pen life than for the horrors of jungle life. If factory farm animals are sentient, then their existence is probably hell, i.e., a superstimulus exceeding the pain and fear and frustration and sadness (if these human terms can map on to nonhuman psychology) they could ever realistically encounter in the wild.
Yes, it would be hard to give a good reason for treating these differently, unless you’re a preference utilitarian and think there is no point in creating new preference-bundles just in order to satisfy them later. I was arguing from within a classical utilitarian perspective, even though I don’t share this view (I’m leaning towards negative utilitarianism), in order to make the point that suffering dominates in nature. I see, though, that you might be right about factory farms being much worse on average. Some of the footage certainly is, even though the worst instance of suffering I’ve ever watched was an elephant being eaten by lions.
If it wanted to maximize positive states of consciousness, it would probably kill all sentient beings and attempt to convert all the matter in the universe into beings that efficiently experience large amounts of happiness. I find it plausible that this would be a good thing. See here for more discussion.
I don’t find that unlikely. (I think I’m a little less confident than Eliezer that something CEV-like would produce values actual humans would recognize, from their own limited perspectives, as preferable. Maybe my extrapolations are extrapolateder, and he places harder limits on how much we’re allowed to modify humans to make them more knowledgeable and rational for the purpose of determining what’s good.)
But I’m less confident that a correctly constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman than I am that CEV would kill all or most non-humans. Humans care a lot more about themselves than about other species, and are less confident about non-human subjectivity.
Of course, I suppose the reverse is a possibility. Maybe some existing non-human terrestrial species has far greater capacities for well-being, or is harder to inflict suffering on, than humans are, and an FAI would kill humans and instead work on optimizing that other species. I find that scenario much less plausible than yours, though.
If a CEV did this then I believe it would be acting unethically—at the very least, I find it highly implausible that, among the hundreds of thousands(?) of sentient species, Homo sapiens is capable of producing the most happiness per unit of resources. This is a big reason why I feel uneasy about the idea of creating a CEV from human values. If we do create a CEV, it should take all existing interests into account, not just the interests of humans.
It also seems highly implausible that any extant species is optimized for producing pleasure. After all, evolution produces organisms that are good at carrying on genes, not feeling happy. A superintelligent AI could probably create much more effective happiness-experiencers than any currently-living beings. This seems to be similar to what you’re getting at in your last paragraph.
I don’t understand how CEV would be capable of deducing that non-human animals have moral value purely from current human values.
CEV asks what humans would value if their knowledge and rationality were vastly greater. I don’t find it implausible that if we knew more about the neural underpinnings of our own suffering and pleasure, knew more about the neurology of non-humans, and were more rational and internally consistent in relating this knowledge to our preferences, then our preferences would assign at least some moral weight to the well-being of non-sapients, independent of whether that well-being impacts any sapient.
As a simpler base case: I think the CEV of 19th-century slave-owners in the American South would have valued black and white people effectively equally. Do we at least agree about that much?
I don’t know much about CEV (I started to read Eliezer’s paper but I didn’t get very far), but I’m not sure it’s possible to extrapolate values like that. What if 19th-century slave owners held white-people-are-better as a terminal value?
On the other hand, it does seem plausible that a slave owner would oppose slavery if he weren’t himself a slave owner, so his CEV may indeed support racial equality. I simply don’t know enough about CEV or how to implement it to make a judgment one way or the other.
Terminal values can change with education. Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality. For instance, that slave-owners don’t, on any deep level, value consistency between their moral intuitions, or that they assign zero weight to moral intuitions involving empathy.
If new experiences and rationality training couldn’t ever persuade a slave-owner to become an egalitarian, then I’m extremely confused by how quickly society managed to eradicate the memes that restructured those slave-owners’ brains. Maybe I’m just more sanguine than most people about the possibility that new information can actually change people’s minds (including their values). Science doesn’t progress purely via the eradication of previous generations.
I’m not sure I’d agree with that framing. If an ethical feature changes with education, that’s good evidence that it’s not a terminal value, to whatever extent that it makes sense to talk about terminal values in humans. Which may very well be “not very much”; our value structure is a lot messier than that of the theoretical entities for which the terminal/instrumental dichotomy works well, and if we had a good way of cleaning it up we wouldn’t need proposals like CEV.
People can change between egalitarian and hierarchical ethics without neurological insults or biochemical tinkering, so human “terminal” values clearly don’t necessitate one or the other. More importantly, though, CEV is not magic; it can resolve contradictions between the ethics you feed into it, and it might be able to find refinements of those ethics that our biases blind us to or that we’re just not smart enough to figure out, but it’s only as good as its inputs. In particular, it’s not guaranteed to find universal human values when evaluated over a subset of humanity.
If you took a collection of 19th-century slave owners and extrapolated their ethical preferences according to CEV-like rules, I wouldn’t expect that to spit out an ethic that allowed slavery—the historical arguments I’ve read for the practice didn’t seem very good—but I wouldn’t be hugely surprised if it did, either. Either way it wouldn’t imply that the resulting ethic applies to all humans or that it derives from immutable laws of rationality; it’d just tell us whether it’s possible to reconcile slavery with middle- and upper-class 19th-century ethics without downstream contradictions.
“Saying that the coherent extrapolated volition of 19th-century slave owners would have been racist is equivalent to saying that either racism is justified by the facts, or the fundamental norms of rationality latent in 19th-century slave-owner cognition are radically unlike our contemporary fundamental norms of rationality.”
Could you elaborate on this please? If you’re saying what I think you’re saying then I would strongly like to argue against your point.
You might also like Brian Tomasik’s critique of CEV
Do you think the kind of pain I feel while (say) eating spicy foods is bad whether or not I dislike it?
I think the word “pain” is misleading. What I care about precisely is suffering, defined as a conscious state a being wants to get out of. If you don’t dislike it and don’t have an urge to make it stop, it’s not suffering. This is also why I think the “pain” of people with pain asymbolia is not morally bad.
Here is a thought experiment. Suppose that explorers arrive in a previously unknown area of the Amazon, where a strange tribe exists. The tribe suffers from a rare genetic anomaly, whereby all of its individuals are physically and cognitively stuck at the age of 3.
They laugh and they cry. They love and they hate. But they have no capacity for complex planning, or normative sophistication. So they live their lives as young children do—on a moment to moment basis—and they have no hope for ever developing beyond that.
If the explorers took these gentle creatures and murdered them—for science, for food, or for fun—would we say, “Oh, but those children are not so intelligent, so the violence is OK”? Or would we be even more horrified by the violence, precisely because the children had no capacity to fend for themselves?
I would submit that the argument against animal exploitation is even stronger than the argument against violence in this thought experiment, because we could be quite confident that whatever awareness these children had, it was “less than” what a normal human has. We are comparing the same species after all, and presumably whatever the Amazonian children are missing, due to genetic anomaly, is not made up for in higher or richer awareness in other dimensions.
We cannot say that about other species. A dog may not be able to reason. But perhaps she delights in smells in a way that a less sensitive nose could never understand. Perhaps she enjoys food with a sophistication that a lesser palate cannot begin to grasp. Perhaps she feels loneliness with an intensity that a human being could never appreciate.
Richard Dawkins makes the very important point that cleverness, which we certainly have, gives us no reason to think that animal consciousness is any less rich or intense than human consciousness (http://directactioneverywhere.com/theliberationist/2013/7/18/g2givxwjippfa92qt9pgorvvheired). Indeed, since cleverness is, in a sense, an alternative mechanism for evolutionary survival to feelings (a perfect computational machine would need no feelings, as feelings are just a heuristic), there is a plausible case that clever animals should be given LESS consideration.
But all of this is really irrelevant. Because the basis of political equality, as Peter Singer has argued, has nothing to do with the facts of our experience. Someone who is born without the ability to feel pain does not somehow lose her rights because of that difference. Because equality is not a factual description, it is a normative demand—namely, that every being who crosses the threshold of sentience, every being that could be said to HAVE a will—ought be given the same respect and freedom that we ask for ourselves, as “willing” creatures.
This is a variant of the argument from marginal cases: if there is some quality that makes you count morally, and we can find some example humans (ex: 3 year olds) that have less of that quality than some animals, what do we do?
I’m very sure that an 8-year-old human counts morally and that a chicken does not, and while I’m not very clear on where along that spectrum the quality I care about starts getting up to levels where it matters, I think it’s probably something no or almost no animals have and some humans don’t have. Making this distinction among humans, however, would be incredibly socially destructive, especially given how unsure I am about where the line should go, and so I think we end up with a much better society if we treat all humans as morally equal. This means I end up saying things like “value all humans equally; don’t value animals” when that’s not my real distinction, just the closest Schelling point.
It seems like your answer to the argument from marginal cases is that maybe the (human) marginal cases don’t matter and “Making this distinction among humans, however, would be incredibly socially destructive.”
That may work for you, but I think it doesn’t work for the vast majority of people who don’t count animals as morally relevant. You are “very sure that an 8-year-old human counts morally” (intrinsically, by which I mean “not just because doing otherwise would be socially destructive”). I’m not sure if you think 3-year-old humans count (intrinsically), but I’m sure that almost everyone does. I know that they count these humans intrinsically (and not just to avoid social destruction), because in fact most people do make these distinctions among humans: for example, median opinion in the US seems to be that humans start counting sometime in the second trimester.
Given this, it’s entirely reasonable to try to figure out what quality makes things count morally, and if you (a) care intrinsically about 3 year old humans (or 1 year old or minus 2 months old or whatever), and (b) find that chickens (or whatever) have more of this quality than 3 year old humans, you should care about chickens.
Consider an experience which, if had by an eight-year-old human, would be morally very bad, such as an experience of intense suffering. Now suppose that a chicken could have an experience that was phenomenally indistinguishable from that of the child. Would you be “very sure” that it would be very bad for this experience to be had by the human child, but not at all bad to be had by the chicken?
I smell a variation of Pascal’s Mugging here. In Pascal’s Mugging, you are told that you should consider a possibility with a small probability because the large consequence makes up for the fact that the probability is small. Here you are suggesting that someone may not be “very sure” (i.e. that he may have a small degree of uncertainty), but that even a small degree of uncertainty justifies becoming a vegetarian because something about the consequence of being wrong (presumably, multiplying by the high badness, though you don’t explicitly say so) makes up for the fact that the degree of uncertainty is small.
“Phenomenally indistinguishable”… to whom?
In other words, what is the mind that’s having both of these experiences and then attempting to distinguish between them?
Thomas Nagel famously pointed out that we can’t know “what it’s like” to be — in his example — a bat; even if we found our mind suddenly transplanted into the body of a bat, all we’d know is what it’s like for us to be a bat, not what it’s like for the bat to be a bat. If our mind were transformed into the mind of a bat (and placed in a bat’s body), we could not analyze our experiences in order to compare them with anything, nor, in that form, would we have comprehension of what it had been like to be a human.
Phenomenal properties are always, inherently, relative to a point of view — the point of view of the mind experiencing them. So it is entirely unclear to me what it means for two experiences, instantiated in organisms of very different species, to be “phenomenally indistinguishable”.
When a subject is having a phenomenal experience, certain phenomenal properties are instantiated. In saying that two experiences are phenomenally indistinguishable, I simply meant that they instantiate the same phenomenal properties. As should be obvious, there need not be any mind having both experiences in order for them to be indistinguishable from one another. For example, two people looking at the same patch of red may have phenomenally indistinguishable visual experiences—experiences that instantiate the same property of phenomenal redness. I’m simply asking Jeff to imagine a chicken having a painful experience that instantiates the property of unpleasantness to the same degree that a human child does, when we believe that the child’s painful experience is a morally bad thing.
Sorry, but this is not an accurate characterization of Nagel’s argument.
How does this not apply to me imagining that I’m a toaster making toast? I can imagine a toaster having an experience all I want. That doesn’t imply that an actual toaster can have that experience or anything which can be meaningfully compared to a human experience at all.
Are you denying that chickens can have any of the experiences which, if had by a human, we would regard as morally bad? That seems implausible to me. Most people think that it would be very bad, for instance, if a child suffered intensely, and most people agree that chickens can suffer intensely.
That’s a view of phenomenal experience (namely, that phenomenal properties are intersubjectively comparable, and that “phenomenal properties” can be described from a third-person perspective) that is far, far from uncontroversial among professional philosophers, and I, personally, take it to be almost entirely unsupported (and probably unsupportable).
Intersubjective incomparability of color experiences is one of the classic examples of (alleged) intersubjective incomparability in the literature (cf. the huge piles of writing on the inverted spectrum problem, to which even I have contributed).
I really don’t think this is a coherent thing to imagine. Once again — unpleasantness to whom? “Unpleasant” is not a one-place predicate.
If your objection is that Nagel only says that the structure of our minds and sensory organs does not allow us to imagine the what-it’s-like-ness of being a bat, and does not mention transplantation and the like, then I grant it; but my extension of it is, imo, consistent with his thesis. The point, in any case, is that it doesn’t make sense to speak of one mind having some experience which is generated by another mind (where “mind” is used broadly, in Nagel-esque examples, to include sensory modalities, i.e. sense organs and the brain hardware necessary to process their input; but in our example need not necessarily include input from the external world).
I don’t think there’s a God-given mapping from the set of Alice’s possible subjective experiences to the set of Bob’s possible subjective experiences. (This is why I think the inverted spectrum thing is meaningless.) We can define a mapping that maps each of Alice’s qualia to the one Bob experiences in response to the same kind of sensory input, but 1) there’s no guarantee it’s one-to-one (colours as seen by young, non-colourblind people would be a best case scenario, but think about flavours), and 2) it would make your claim tautological and devoid of empirical content.
Nagel had no problems with taking objective attributes of experience—e.g. indicia of suffering—and comparing them for the purposes of political and moral debate. The equivalence or even comparability of subjective experience (whether between different humans or different species) is not necessary for an equivalence of moral depravity.
jkaufman,
Justifying violence against an oppressed group, on the basis of some unobserved and ambiguous quality, is the definition of bigotry.
Have you interacted with a disabled human before? What is it about them that you think merits less consideration? My best friend growing up was differently abled, at the cognitive capacity of a young child. But he is also probably the most praiseworthy individual I have ever met. Generous to a fault, forgiving even of those who had mistreated him (and there were many of those), and completely lacking in artifice. A world filled with animals such as he would be a good world indeed. So why should he receive any fewer rights than you or I? What is this amorphous quality that he is missing?
Factually, it is not true that human inequality is “socially destructive.” Human civilization has thrived for 10,000 years despite horrific caste systems. And even just a generation prior, disabled humans were systematically mistreated as our moral inferiors. Even lions of the left like Arthur Miller had no qualms about locking up their disabled children and throwing away the key.
Inequality is a terrible thing, if you are on the wrong side of the hierarchy. But there is nothing intrinsically destabilizing about bigotry. Far from it, prejudice against “outsiders” is our natural state.
I think you are technically wrong. A world filled with people at the cognitive capacity of a young child would include a lot of suffering. (Unless there were also someone else around to solve their problems.) Hunger, diseases, predators… and no ability to defend against them.
DxE, I have to ask, and I don’t mean to be hostile: are you using emotionally-charged, question-begging language deliberately (to act as intuition pumps, perhaps)? Would you be able to rephrase your comments in more neutral, objective language?
The language I use is deliberate. It accurately conveys my point of view, including normative judgments. I do not relish the idea of antagonizing anyone. However, the content of certain viewpoints is inherently antagonizing. If I were to factually state that someone were a rapist, for example, I could not phrase that in a neutral, objective way.
For what it’s worth, I actually love jkaufman. He’s one of the smartest and most solid people I know. But his views on this subject are bigoted.
I see. However, I disagree that your comments accurately convey your point of view, or any point of view; there’s a lot of unpacking I’d have to ask you to do on e.g. the great-grandparent before I could understand exactly what you were saying; and I’m afraid I’m not sufficiently interested to try.
Couldn’t you? I could. Observe:
Bob has, on several occasions, initiated and carried on sexual intercourse with an unwilling partner, knowing that the person in question was not willing, and understanding his actions to be opposed to the wishes of said person, as well as to the social norms of his society.
There you go. That is, if anything, too neutral; I could make it less verbose and more colloquial without much loss of neutrality; but it showcases my point, I think. If you believe you can’t phrase something in language that doesn’t sound like you’re trying to incite a crowd, you are probably not trying hard enough.
If you like (and only if you like), I could go through your response to jkaufman and point out where and how your choice of language makes it difficult to respond to your comments in any kind of logical or civilized manner. For now, I will say only:
Expressing your normative judgments is not very useful, nor very interesting to most people. What we’re looking for is for you to support those judgments with something. The mere fact that you think something is bad, really very bad, just no good… is not interesting. It’s not anything to talk about.
So what you are demonstrating is that it is possible (and apparently, in your eyes, desirable) to whitewash rape and make it seem morally neutral.
No thanks.
There’s a difference between making it seem morally neutral and not implying anything about its morality or lack thereof. What SaidAchmiz was trying to do is the latter.
You’re right it might have been good to answer these in the core essay.
I disagree that being a vegetarian is an inconvenience. I haven’t found my social activities restricted in any non-trivial way and being healthy has been just as easy/hard as when eating meat. It does not drain my attention from other EA activities.
~
I agree with this in principle, but again don’t think vegetarianism is a distraction from that. Certainly removing factory farming is a small win compared to successful star colonization, but I don’t think there’s much we can do now to ensure successful colonization, while there is stuff we can do now to ensure the elimination of factory farming.
~
It need not, which is what makes consciousness thorny. I don’t think there is a tidy resolution to this problem. We’ll have to take our best guess, and that involves thinking nonhuman animals suffer. We’d probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Occam’s razor approach.
~
This doesn’t feature in my ethical framework, at least. I don’t know how this intuitively works for other people. I also don’t think there’s much I can say about it.
~
It’s not. But there are other considerations and lines of evidence, so my worry that we’re just anthropomorphizing is present, but rather low.
Wait...what? Why not?
My morality is applicable to agents. The extent to which an object can be modeled as an agent plays a big role (but not the only role) in determining its moral weight. As such, there is a rough hierarchy:
nonliving things and single celled organisms < plants, oysters, etc < arthropods, worms, etc < fish, lizards < dumber animals (chickens, cows) < smarter animals (pigs, dogs, crows) < smartest animals (apes, elephants, cetaceans...)
Practically speaking, from an animal rights perspective, this means that I would consider it a moral victory if meat eaters shifted a greater portion of their meat diet downwards towards “lower” animals like fish and arthropods. The difference in weight between much more and much less intelligent animals is rather extreme—it would take killing several crickets, shrimp, herring, or salmon to replace a single pig, but I would still count that as a positive because I think that a pig’s moral weight is magnitudes greater than a salmon’s. Convincing a person like me not to harm an object involves behavioral measures (with intelligence being one of several factors) which demonstrate that the object is a certain kind of agent, one within the class of agents with positive moral weight.
I’m guessing that we’re thinking of different things when we read “sapience is what makes suffering bad (or possible)”. Do you think that my version of the thought doesn’t feature in your ethical framework? If not, what does determine which objects are morally weighty?
For me, suffering is what makes suffering bad. Or, rather, I care about any entity that is capable of having feelings and experiences. And, for each of these entities, I much prefer them not to suffer. I care about not having them suffer for their sakes, of course, not for the sake of reducing suffering in the abstract. I don’t view entities as utility receptacles.
But I don’t think there’s anything special about sapience, per se. Rather, I only think sapience or agentiness is relevant insofar as more sapient and more agenty entities are more capable of suffering / happiness. Which seems plausible, but isn’t certain.
~
This seems plausible to me from a perspective of “these animals likely are less capable of suffering”, but I think you’re missing two things in your analysis: (1) the degree of suffering required to create the food, which varies between species, and (2) the amount of food provided by each animal.
When you add these two things together, you get a suffering-per-kg approach that has some counterintuitive conclusions, like the bulk of suffering being in chicken or fish, though I think this table is desperately in need of some updating with more and better research (something that’s been on my to-do list for a while).
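For what it’s worth, here is a minimal sketch of what such a suffering-per-kg calculation might look like. All of the lifespans, weights, and yields below are hypothetical placeholders chosen only for illustration, not the table’s actual figures:

```python
# Toy suffering-per-kg comparison. Smaller animals yield far less food per
# individual, so even with a lower per-animal weighting they can dominate
# the total; these placeholder numbers only illustrate that structure.
animals = {
    #           days lived, suffering weight (0-1), edible kg per animal
    "chicken": {"days": 42,  "weight": 0.8, "kg": 1.7},
    "pig":     {"days": 180, "weight": 0.9, "kg": 80.0},
    "cow":     {"days": 500, "weight": 0.6, "kg": 250.0},
}

for name, a in animals.items():
    suffering_days_per_kg = a["days"] * a["weight"] / a["kg"]
    print(f"{name}: {suffering_days_per_kg:.1f} weighted suffering-days per kg")
```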
Let’s temporarily taboo words relating to inaccessible subjective experience, because the definitions of words like “suffering” haven’t been made rigorous enough to talk about this—we could define it in concrete neurological terms or specific computations, or we could define it in abstract terms of agents and preferences, and we’d end up talking past each other due to different definitions.
I want to make sure to define morality such that it’s not dependent on the particulars of the algorithm that an agent runs, but by the agent’s actions. If we were to meet weird alien beings in the future who operated in completely alien ways, but who act in ways that can be defined as preferences and can engage in trade, reciprocal altruism, etc...then our morality should extend to them.
Similarly, I think our morality shouldn’t extend to paperclippers—even if they make a “sad face” and run algorithms similar to human distress when a paperclip is destroyed, it doesn’t mean the same thing morally.
So I think morality must necessarily be based on input-output functions, not on what happens in between. (at this point someone usually brings up paralyzed people—briefly, you can quantify the extent of additions/modifications necessary to create a functioning input-output agent from something and use that to extrapolate agency in such cases.)
Wait, didn’t I take that into account with...
...or are you referring to a different concept?
I really do think the relationship between moral weight and intelligence is exponential—as in, I consider a human life to be weighted like ~10 chimps, ~100 dogs...(very rough numbers, just to illustrate the exponential nature)...and I’m not sure there are enough insects in the world to morally outweigh one human life (instrumental concerns about the environment and the intrinsic value of diverse ecosystems aside, of course). I’d wager the human hedons and health benefits from eating something very simple, like a shrimp or a large but unintelligent fish, might actually outweigh the cost to the fish and be a net positive (as it is with plants). My certainty in that matter is low, of course.
I agree that people generally and I specifically need to understand “suffering” better. But I don’t think substitutes like “runs an algorithm analogous to human distress” or “has thwarted preferences” offer anything better understood or well-defined.
I suppose what I think of as suffering probably involves most of the following: nociception, a central nervous system (with connected nociceptors), endogenous opioids, a behavioral pain response, and a behavioral pain response affected by painkillers.
~
I think this is the clearest case where our moral theories differ. If the paperclipper suffers, I don’t see any reason not to care about that experience. Or, rather, I don’t fully understand why you lack care for the paperclipper.
Similarly, while I’m all for extending morality to weird aliens, I don’t think trade nor reciprocal altruism per se are the precise qualities that make things count morally (for me). I assume you mean these qualities as a proxy for “high intelligence”, though, rather than precise qualities?
~
Yes, you did. My bad for missing it. Sorry.
~
How does your uncertainty weigh in practically in this case? Would you, for example, refrain from eating fish while trying to learn more?
Point of disagreement: I do think that both of those are more well-defined than “suffering”.
Additionally, I think this statement means you define suffering as “runs an algorithm analogous to human distress”. All of these things are specific to Earth-evolved life forms. None of this applies to the class of agents in general.
(Also, nitpick—going by lay usage, you’ve outlined pain, not suffering. In my preferred usage, for humans at least pain is explicitly not morally relevant except insofar as it causes suffering.)
Rain-check on this...have some work to finish. Will reply properly later.
I don’t think so, but I might be wrong...Is risk aversion in the face of uncertainty actually rational in this scenario? Seems to me that there are certain scenarios where risk aversion makes sense (personal finance, for example) and scenarios where it doesn’t (effective altruism, for example) and this decision seems to fall in the latter camp. AFAIK, risk / loss aversion only applies where there are diminishing returns on the value of something.
I haven’t seen any behavioral evidence of fish doing problem solving, being empathetic towards each other, exhibiting cognitive capacities beyond very basic associative learning & memory, or that sort of thing.
Practically, I eat fish and anything lower on that hierarchy guilt-free. I limit consumption of animals higher than fish to very occasional consumption only—in a similar vein to how I sometimes do things that are bad for the environment, or (when I start earning) plan to sometimes spend money on things that aren’t charity, with the recognition that it’s mildly immoral selfishness and I should keep it to a minimum. Basically, eating animals seems to be on par with all the other forms of everyday selfishness we all engage in...certainly something to be minimized, but not an abomination.
Where I do consume higher animals, I have plans in the future to shift that consumption towards unpopular cuts of meat (organs, bones, etc.), because that means less negative impact through reduced wastage (and they’re also cheaper, which may enable upgrades with respect to buying from ethical farms + a better nutritional profile). The bulk of the profit from slaughtering seems to come from the popular muscle cuts—if meat eaters were more holistic about eating the entire animal and not just parts of it, I think there would be less total slaughter.
The trade-offs here are not primarily a taste thing for me—I just get really lethargic after eating grains, so I try to limit them. My strain of Indian culture is vegetarian, so I was accustomed to eating less meat and more grain throughout childhood...but after I reduced my intake of grains I felt more energetic and the period of fogginess that I usually get after meals went away. I also have a family history of diabetes and metabolic disorders (which accelerate age-related declines in cognitive function, which I’m terrified of), and what nutrition research I’ve done indicates that shifting towards a more paleolithic diet (fruits, vegetables, nuts and meat) is the best way to avoid this. Cutting out both meat and grain makes eating really hard and sounds like a bad idea.
Just for the sake of completeness, I’ll wait for you to follow-up on this before continuing our discussion here.
If the paper-clipper even can “suffer” … I suspect a more useful word to describe the state of the paperclipper is “unclippy”. Or maybe not...let’s not think about these labels for now. The question is, regardless of the label, what is the underlying morally relevant feature?
I would hazard to guess that many of the supercomputers running our google searches, calculating best-fit molecular models, etc… have enough processing power to simulate a fish that behaves exactly like other fishes. If one wished, one could model these as agents with preference functions. But it doesn’t mean anything to “torture” a google-search algorithm, whereas it does mean something to torture a fish, or to torture a simulation of a fish.
You could model something as simple as a light switch as an agent with a preference function but it would be a waste of time. In the case of an algorithm which finds solutions in a search space it is actually useful to model it as an agent who prefers to maximize some elements of a solution, as this allows you to predict its behavior without knowing details of how it works. But, just like the light switch, just because you are modelling it as an agent doesn’t mean you have to respect its preferences.
A “rational agent” explores the search space of possible actions it can take, and chooses the actions which maximize its preferences—the “correct solution” is when all preferences are maximized. An agent is fully rational if it makes the best possible choice given the data at hand. There are no fully rational agents, but it’s useful to model things which act approximately in this way as agents.
Paperclippers, molecular modelers, search engines, seek to maximize a simple set of preferences (number of paperclips, best fit model, best search). They have “preferences”, but not morally relevant ones.
A human (or, hopefully one day, a friendly AI) seeks to fulfill an extremely complex set of preferences...as does a fish. They have preferences which carry moral weight.
It’s not specific receptors or any particular algorithm that captures what is morally relevant to me about other agent’s preferences. If you took a human and replaced its brain with a search algorithm which found the motor output solutions which maximized the original human’s preferences, I’d consider this search algorithm to fit the definition of a person (though not necessarily the same person). I’d respect the search algorithm’s preferences the same way I respected the preferences of the human it replaced. This new sort of person might instrumentally prefer not having its arms chopped off, or terminally prefer that you not read its diary, but it might not show any signs of pain when you did these things unless showing signs of pain was instrumentally valuable. Violation of this being’s preferences may or may not be called “suffering” depending on how you define “suffering”...but either way, I think this being’s preferences are just as morally relevant as a humans.
So the question I would turn back to you is...under what conditions could a paper clipper suffer? Do all paper clippers suffer? What does this mean for other sorts of solution-maximizing algorithms, like search engines and molecular modelers?
My case is essentially that it is something about the composition of an agent’s preference function which contains the morally relevant component with regards to whether or not we should respect its preferences. The specific nature of the algorithm it uses to carry this preference function out—like whether it involves pain receptors or something—is not morally relevant.
Just as a data-point about intuition frequency, I found your intuitions about “a search algorithm which found the motor output solutions which maximized the original human’s preference” to be very surprising.
Do you mean that the idea itself is weird and surprising to consider?
Or do you mean that my intuition that this search algorithm fits the definition of a “person” and is imbued with moral weight is surprising and does not match your moral intuition?
Thanks for the well-thought out comment. It helps me think through the issue of suffering a lot more.
~
I think this is a good thought experiment and it does push me more toward preference satisfaction theories of well-being, which I have long been sympathetic to. I still don’t know much myself about what I view as suffering. I’d like to read and think more on the issue—I have bookmarked some of Brian Tomasik’s essays to read (he’s become more preference-focused recently) as well as an interview with Peter Singer where he explains why he’s abandoned preference utilitarianism for something else. So I’m not sure I can answer your question yet.
There are interesting problems with desires, such as formalizing it (what is a desire and what makes a desire stronger or weaker, etc.), population ethics (do we care about creating new beings with preferences, etc.) and others that we would have to deal with as well.
~
So it seems like, to you, an entity’s welfare matters when it has preferences, weighted based on the complexity of those preferences, with a certain zero threshold somewhere (so thermostat preferences don’t count).
I don’t think complexity is the key driver for me, but I can’t tell you what is.
~
Likewise, I don’t think this is much of a concern for me, and it seems inconsistent with the rest of what you’ve been saying.
Why are problem solving and empathy important? Surely I could imagine a non-empathetic program without the ability to solve most problems, that still has the kind of robust preferences you’ve been talking about.
And what level of empathy and problem solving are you looking for? Notably, fish engage in cleaning symbiosis (which seems to be in the lower-tier of the empathy skill tree) and Wikipedia seems to indicate (though perhaps unreliably) that fish have pretty good learning capabilities.
~
That makes sense to me.
No, it’s not complexity but the content of the preferences that makes the difference. Sorry for mentioning the complexity—I didn’t mean to imply that it was the morally relevant feature.
I’m not yet sure what sort of preferences give an agent morally weighty status...the only thing I’m pretty sure about is that the morally relevant component is contained somewhere within the preferences, with intelligence as a possible mediating or enabling factor.
Here’s one pattern I think I’ve identified:
I belong within Reference Class X.
All beings in Reference Class X care about other beings in Reference Class X, when you extrapolate their volition.
When I hear about altruistic mice, it is evidence that the mouse’s extrapolated volition would cause it to care about Class X-being’s preferences to the extent that it can comprehend them. The cross-species altruism of dogs and dolphins and elephants is an especially strong indicator of Class X membership.
On the other hand, the within-colony altruism of bees (basically identical to Reference Class X except it only applies to members of the colony and I do not belong in it), or the swarms and symbiosis of fishes or bacterial gut flora, wouldn’t count...being in Reference Class X is clearly not the factor behind the altruism in those cases.
...which sounds awfully like reciprocal altruism in practice, doesn’t it? Except that, rather than looking at the actual act of reciprocation of altruism, I’d be extrapolating the agent’s preferences for altruism. Perhaps Class X would be better named “Friendly”, in the “Friendly AI” sense—all beings within the class are to some extent Friendly towards each other.
This is at the rough edge of my thinking though—the ideas as just stated are experimental and I don’t have well defined notions about which preferences matter yet.
Edit: Another (very poorly thought out) trend which seems to emerge is that agents which have a certain sort of awareness are entitled to a sort of bodily autonomy … because it seems immoral to sit around torturing insects if one has no instrumental reason to do so. (But is it immoral in the sense that there are a certain number of insects which morally outweigh a human? Or is it immoral in a virtue ethic-y, “this behavior signals sadism” sort of way?)
My main point is that I’m mildly guessing that it’s probably safe to narrow the problem down to some combination of preference functions and level of awareness. In any case, I’m almost certain that there exist preference functions that are sufficient (but maybe not necessary?) to confer moral weight onto an agent...and though there may be other factors unrelated to preference or intelligence that play a role, preference function is the only thing with a concrete definition that I’ve identified so far.
Just so I understand you better, how would you compare and contrast this kind of pro-X “kin” altruism with utilitarianism?
Utilitarianism has never made much sense to me except as a handy way to talk about things abstractly when precision isn’t important
...but I suppose X would be a class of agents who consider each other’s preferences when they make utilitarian calculations? I pretty much came up with the pro-X idea less than a month ago, and haven’t thought it through very carefully.
Oh, here’s a good example of where preference utilitarianism fails which illustrates it:
10^100 intelligent people terminally prefer that 1 person is tortured. Preference utilitarianism says “do the torture”. My moral instinct says “no, it’s still wrong, no matter how many people prefer it”.
Perhaps under the pro-X system, the reason we can ignore the preferences of 10^100 people is that the preference which they have expressed lies strictly outside category X and therefore that preference can be ignored?
Whereas, if you have a Friendly Paperclipper (cares about X-agents and paperclips with some weight on each), the Friendly moral values put it within X...which means that we should now be willing to cater to its morally neutral paper-clip preferences as well.
(If this reads sloppy, it’s because my thoughts on the matter currently are sloppy)
So...I guess there’s sort of a taxonomy of moral-good, neutral-selfish, and evil preferences...and part of being good means caring about other people’s selfish preferences? And part of being evil means valuing the violation of other’s preferences? And, good agents can simply ignore evil preferences.
And (under the pro-X system), good agents can also ignore the preferences of agents that aren’t in any way good...which seems like it might not be correct, which is why I say that there might be other factors in addition to pro-X that make an agent worth caring about for my moral instincts, but if they exist I don’t know what they are.
Are you perhaps confusing ‘morally wrong’ with ‘a sucky tradeoff that I would prefer not to be bound by’?
Just because torturing one person sucks, just because we find it abhorrent, does not mean that it isn’t the best outcome in various situations. If your definition of ‘moral’ is “best outcome when all things are considered, even though aspects of it suck a lot and are far from ideal”, then yes, torturing someone can in fact be moral. If your definition of ‘moral’ is “those things which I find reprehensible”, then quite probably you can never find torturing someone to be moral. However, there are scenarios where it may still be necessary, or the best option.
Nope...because I believe that torturing someone could still instrumentally be the right thing to do on consequentialist grounds.
In this scenario, 10^100 people terminally value torturing one person, but I do not care about their preferences, because it is an evil preference.
However, in an alternate scenario, if I had to choose between 10^100 people getting mildly hurt or 1 person getting tortured, I’d choose the one person getting tortured.
In these two scenarios, the preference weights are identical, but in the first scenario the preference of the 10^100 people is evil and therefore irrelevant in my calculations, whereas in the second scenario the needs of 10^100 outweigh the needs of the one.
This is less a discussion about torture, and more a discussion about whose/which preferences matter. Sadistic preferences (involving real harm, not the consensual kink), for example, don’t matter morally—there’s no moral imperative to fulfill those preferences, no “good” done when those preferences are fulfilled and no “evil” resulting from thwarting those preferences.
I think you should temporarily taboo ‘moral’, ‘morality’, and ‘evil’, and simply look at the utility calculations. 10^100 people terminally value something that you ascribe zero or negative value to; therefore, their preferences either do not matter to you or actively make your universe worse from the standpoint of your utility function.
Which preferences matter? Yours matter to you, and theirs matter to them. There’s no ‘good’ or ‘evil’ in any absolute sense, merely different utility functions that happen to conflict. There’s no utility function which is ‘correct’, except by some arbitrary metric, of which there are many.
Consider another hypothetical utility function: The needs of the 10^100 don’t outweigh the needs of the one, so we let the entire 10^100 suffer when we could eliminate it by inconveniencing one single entity. Neither you nor the 10^100 are happy with this one, but the person about to be tortured may think it’s just fine and dandy...
...I don’t denotatively disagree with anything you’ve said, but I also think you’re sort of missing the point and forgetting the context of the conversation as it was in the preceding comments.
We all have preferences, but we do not always know what our own preferences are. A subset of our preferences (generally those which do not directly reference ourselves) is termed “moral preferences”. The preceding discussion between me and Peter Hurford is an attempt to figure out what our preferences are.
In the above conversation, words like “matter”, “should” and “moral” are understood to mean “the shared preferences of Ishaan, Dentin, and Peter_Hurford which they agree to define as moral”. Since we are all human (and similar in many other ways beyond that), we probably have very similar moral preferences...so any disagreement that arises between us is usually due to one or both of us inaccurately understanding our own preferences.
This is technically true, but it’s also often a semantic stopsign which derails discussions of morality. The fact is that the three of us humans have a very similar notion of “good”, and can speak meaningfully about what it is...the implicitly understood background truths of moral nihilism notwithstanding.
It doesn’t do to exclaim “but wait! good and evil are relative!” during every moral discussion...because here, between us three humans, our moral preferences are pretty much in agreement and we’d all be well served by figuring out exactly what those preferences are. It’s not like we’re negotiating morality with aliens.
Precisely...my preferences are all that matter to me, and our preferences are all that matter to us. So if 10^100 sadistic aliens want to torture...so what? We don’t care if they like torture, because we dislike torture and our preferences are all that matter. Who cares about overall utility? “Morality”, for all practical purposes, means shared human morality...or, at least, the shared morality of the humans who are having the discussion.
“Utility” is kind of like “paperclips”...yes, I understand that in the best case scenario it might be possible to create some sort of construct which measures how much “utility” various agent-like objects get from various real world outcomes, but maximizing utility for all agents within this framework is not necessarily my goal...just like maximizing paperclips is not my goal.
So, I’m curious… can you unpack what you mean by “temporarily” in this comment?
For the purposes of this conversation at least. I’ve largely got them taboo’d in general because I find them confusing and full of political connotations; I suspect at least some of that is the problem here as well.
Yet your moral instinct is perfectly fine with having a justice system that puts innocent people in jail with a greater than 1 in 10^100 error rate.
Sure, on instrumental grounds for consequentialist reasons. Not a terminal preference.
Usually people speak of preferences when there is a possibility of choice—the agent can meaningfully choose between doing A and doing B.
This is not the case with respect to molecular models, search engines, and light switches.
At least for search engines, I would say there exists a meaningful level of description at which it can be said that the search engine chooses which results to display in response to a query, approximately maximizing some kind of scoring function.
I don’t think it is meaningful in the current context. The search engine is not an autonomous agent and doesn’t choose anything any more than, say, the following bit of pseudocode: if (rnd() > 0.5) { print “Ha!” } else { print “Ooops!” }
“If you search for “potatoes” the engine could choose to return results for “tomatoes” instead...but will choose to return results for potatoes because it (roughly speaking) wants to maximize the usefulness of the search results.”
“If I give you a dollar you could choose to tear it to shreds, but you instead will choose to put it in your wallet because (roughly speaking) you want to xyz...”
When you flip the light switch “on” it could choose not to allow current through the system, but it will let current flow because it wants current to flow through the system when it is in the “on” position.
Except for degree of complexity, what’s the difference? “Choice” can be applied to anything modeled as an Agent.
Sorry, I read this as nonsense. What does it mean for a light switch to “want”?
To determine the “preferences” of objects which you are modeling as agents, see what occurs, and construct a preference function that explains those occurrences.
Example: This amoeba appears to be engaging in a diverse array of activities which I do not understand at all, but they all end up resulting in the maintenance of its physical body. I will therefore model it as “preferring not to die”, and use that model to make predictions about how the amoeba will respond to various situations.
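Here is a minimal sketch of that modeling move in code, with made-up scoring functions standing in for the inferred preferences; the same template “agentifies” a light switch or an amoeba, and the only real question is how much predictive work the model does for you.

```python
# Model an observed system as a preference-maximizing agent: posit a
# preference (scoring) function over its available actions, then predict
# its behavior as whichever action maximizes that score.

def predict(actions, preference):
    """Predict behavior as the action the modeled 'agent' most prefers."""
    return max(actions, key=preference)

# Hypothetical "light switch agent": prefers current to flow when switched on.
switch_actions = ["conduct current", "block current"]
switch_prefers = lambda action: 1 if action == "conduct current" else 0
print(predict(switch_actions, switch_prefers))     # -> conduct current

# Hypothetical "amoeba agent": modeled as preferring not to die.
amoeba_actions = ["move toward food", "move toward toxin", "do nothing"]
amoeba_scores = {"move toward food": 2, "do nothing": 1, "move toward toxin": 0}
print(predict(amoeba_actions, amoeba_scores.get))  # -> move toward food
```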
I think the light switch example is far-fetched, but the search engine isn’t. The point is whether there exists a meaningful level of description where framing the system’s behavior in terms of making choices to satisfy certain preferences is informative.
Don’t forget that the original context was morality.
You don’t think it is far-fetched to speak of morality of search engines?
Yes, it is.
The distinction you are making between the input-output function of a human as a “choice” vs. the input-output of a machine as “not-a-choice” sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question...but you’re a frequent poster here, so perhaps I’ve misunderstood your meaning. Are you using a specialized definition of the word “choice”?
I have no wish for this to develop into a debate about free will. Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.
As a practical matter, speaking about choices of light switches seems silly. Given this, I don’t see why speaking about choices of search engines is not silly. It might be useful conversational shorthand in some contexts, but I don’t think it is useful in the context of talking about morality.
Ah, ok—sorry. The materialist, dissolved view of free-will-related questions has been a strongly held view of mine since a very young age, so my prior for a person who is aware of these views yet subscribes to what I’ll call the “naive view”, for lack of a better word, is very low.
It’s not really the particulars of the Sequences here which are in question—the people who say free will doesn’t exist, the people who say it does but redefine free will in funny ways, the panpsychists, the compatibilists and incompatibilists, all share in common a non-dualist view which does not allow them to label the search engine’s processes and the human’s processes as fundamentally, qualitatively different processes. This is a deep philosophical divide that has been debated for, as far as I am aware, at least two thousand years.
By analogy, speaking of choices of humans seems silly, since humans are made of the same basic laws.
The fundamental disagreement here runs rather deeply—it’s not going to be possible to talk about this without diving into free will.
Philosophical disagreements aside, that doesn’t seem to be a good way to construct priors for other people’s views.
If I understood the causal mechanisms underlying the actions of humans as well as I do those underlying lightswitches, talking about the former as “choices” would seem as silly to me as talking that way about the latter does.
But I don’t, so it doesn’t.
I assume you don’t understand the causal mechanisms underlying the actions of humans either. So why does talking about them as “choices” seem silly to you?
I agree with you. Whether we model something as an agent or an object is a feature of our map, not the territory. It’s not useful to model light switches as agents because they are too simple, and looking at them through the lens of preferences is not simple or informative. Meanwhile, it is useful to model humans partially as preference maximizing agents to make approximations.
However, in the context of the larger discussion, I interpret Lumifer as treating the distinction between “choice” and “event” as a feature of the territory itself, and positing a fundamental qualitative difference between a “choice” and other sorts of events. My reply should be seen as an assertion that such qualitative differences are features of the map, not the territory: if it’s impossible to model a light switch as having choices, then it’s also impossible to model a human as having choices. (My actual belief is that it’s possible to model both as having choices or not having them.)
Is your actual belief that there are equivalent grounds for modeling both either way?
If so, I disagree… from my own perspective, modeling people as preference-maximizing agents is significantly more justified (due to differences in the territory) than modeling a light switch that way.
If not, to what do you attribute the differential?
...it is possible to model things either way, but it is more useful for some objects than others.
Modeling an object as an agent is useful when the object exhibits a pattern of behavior which is roughly consistent with preference maximizing. A search engine is well modeled as an agent. A human is very well modeled as an agent.
A light switch is very poorly modeled as an agent. Thinking of it in terms of preferences doesn’t make it any easier to predict its behavior. But you can model it as an agent, if you’d like.
By “justified” do you mean “useful”?
I am willing to adopt “useful” in place of “justified” if it makes this conversation easier. In which case my question could be rephrased “Is it equally useful to model both either way?”
To which your answer seems to be no… it’s more useful to model a human as an agent than it is a light-switch. (I’m inferring, because despite introducing the “useful” language, what you actually say instead introduces the language of something being “well-modeled.” But I’m assuming that by “well-modeled” you mean “useful.”)
And your answer to the follow-up question is that the pattern of behavior of a light switch differs from that of a search engine or a human, such that adopting an intentional stance towards the former doesn’t make it easier to predict.
Have I understood you correctly?
Yup. Modeling something as a preference-maximizing agent is generally useful for things which systematically behave in ways that maximize certain outcomes across a diverse array of situations. It allows you to make accurate predictions even when you don’t fully understand the mechanics generating the events you are predicting.
(I distinguished useful and justified because I wasn’t sure if “justified” had moral connotations in your usage)
Edit: On reading the wiki, I tend to agree with the views that the wiki attributes to Dennett. Thanks for the reference and the word “intentional stance”.
OK. So, having clarified that, I return to your initial comment:
...and am as puzzled by it as I was in the first place.
You agree that the input-output function of a human differs from the input-output of a machine like a light switch in ways that make it more useful to model the former but not the latter as maximizing preferences. (To adopt the intentional stance towards the former and the design stance towards the latter, in Dennett’s terminology.)
So, given that, what is your objection to Lumifer’s distinction? “Choice” seems like a perfectly reasonable word to use when taking an intentional stance, and to not use when taking a design stance.
When I asked earlier, you explained that your objection had to do with attributing “territory-level” differences to humans and machines, when it’s really a “map-level” objection… that it’s possible to talk about a light-switch’s choices, or not talk about a human’s choices, so it’s not really a difference in the system at all, just a difference in the speaker.
But given that you agree that there’s a salient “territory-level” difference between the two systems (specifically, the differences which make the intentional stance more useful than the design stance wrt humans, but not wrt light-switches), I don’t quite get the objection. Sure, it’s possible to take either stance towards either system, but it’s more useful to take the intentional stance towards humans, and that’s a “fact about the territory.”
No?
Because in the preceding comment, I was demonstrating that we should not morally care about light switches, search engines, and paperclippers...whereas we should morally care about fishes, dogs, and humans… because of differences in the preference profiles of these beings when they are modeled as agents.
Peter Hurford disagreed with me on the non-moral status of the paper-clipper. I was demonstrating the non-moral status of a being which cared only for paper clips by analogy to a search engine (a being which only cares about bringing up the best search result).
Whereas what Lumifer was saying is that the very premise that a search engine could have choices was fundamentally flawed (which, if true, would cause the whole analogy to break down).
The thing is, it’s not fundamentally flawed to think of a search engine as having choices. Sure, search engines are a little less usefully modeled as agents than humans are, but it’s just a matter of degree.
I was objecting to his hard, qualitative binary, not your and Dennett’s soft, quantitative spectrum.
Thanks for clarifying.
Additionally, when there is a body of evidence suggesting that nutrient-equivalent food sources can be produced more energy-efficiently and with no direct suffering to animals (indirect suffering being, for example, the unavoidable death of insects in crop harvesting), I believe it is a rational choice to move towards those methods.
Existential risk reduction charities?
I’m very unsure about the expected success of existential risk reduction charities.
Your points (1) and (2) seem like fully general counterarguments against any activity at all, other than the single most effective activity at any given time. I do agree with you that future suffering could potentially greatly outweigh present suffering, and I think it’s very important to try to prevent future suffering of non-human animals. However, it seems that one of the best ways to do that is to encourage others to care more for the welfare of non-human animals, i.e. become veg*ans.
Perhaps more importantly, it makes sense from a psychological perspective to become a veg*an if you care about non-human animals. It seems that if I ate meat, cognitive dissonance would make it much harder for me to make an effort to prevent non-human suffering on a broader scale.
(4): Although I see no way to falsify this belief, I also don’t see any reason to believe that it’s true. Furthermore, it runs counter to my intuitions. Are profoundly mentally disabled humans incapable of “true” suffering?
(5): Humans and non-human animals evolved in the same way, so it strikes me as highly implausible that humans would be capable of suffering while all non-humans would lack this capacity.
I don’t engage in the vast majority of possible activities. Neither do you, so on net, the class of arguments you accept must militate against almost all activities, right?
Are you saying that most arguments that you should do X are fully general counterarguments against doing anything other than X?
Why did you type that comment? Did you consider the arguments for typing that comment as fully general counterarguments against all the other possible comments you could have made? If not, why not post them too?
I’m not sure I understand what you’re trying to say. It sounds like you’re saying that we make decisions without considering all possible arguments for and against them, in which case I’m not sure what you’re saying with regard to my original comment.
To construct the comment that you just replied to, I considered various possible questions that I roughly rated by how effectively they would help me to understand what you’re saying, and limited my search due to time constraints. The arguments for posting that comment work as counterarguments against posting any other comment I considered, e.g. it was the best comment I considered. It’s not the best possible comment, but it would be a waste of time to search the entirety of comment-space to find the optimal comment.
No, I don’t decide what to do with my time by coming up with arguments ruling out every other activity that I could be doing.
That’s more or less what I intended them to be. Isn’t doing only the most effective activities available to you… a good idea?
However, I’d phrase the argument in terms of degrees: Activities are good to the extent they conduce to your making better decisions for the future, bad to the extent they conduce to your making worse decisions for the future. So doing the dishes might be OK even if it’s not the Single Best Thing You Could Possibly Be Doing Right Now, provided it indirectly helps you do better things than you otherwise would. Some suboptimal things are more suboptimal than others.
Maybe? If you could give such an argument, though, it would show that my argument isn’t a fully general counterargument—vegetarianism would be an exception, precisely because it would be the optimal decision.
Right. I think the disagreement is about the ethical character of vegetarianism, not about whether it’s a psychologically or aesthetically appealing life-decision (to some people). It’s possible to care about the wrong things, and it’s possible to assign moral weight to things that don’t deserve it. Ghosts, blastocysts, broccoli stalks, abstract objects....
To assess (4) I think we’d need to look at the broader ethical and neurological theories that entail it, and assess the evidence for and against them. This is a big project. Personally, my uncertainty about the moral character of non-sapients is very large, though I think I lean in your direction. (Actually, my uncertainty and confusion about most things sapience- and sentience- related are very large.)
Within practical limits. It’s not effective altruism if you drive yourself crazy trying to hold yourself to unattainable standards and burn yourself out.
Practical limits are built into ‘effective’. The most effective activity for you to engage in is the most effective activity for you to engage in, not for a perfectly rational arbitrarily computationally powerful god to engage in. Going easy on yourself, to the optimal degree, is (for creatures like us) part of behaving optimally at all. If your choice (foreseeably) burns you out, and the burnout isn’t worth the gain, your choice was just wrong.
Wouldn’t you agree that veganism is less suboptimal than, say, entertainment? I’m assuming you’re okay with people playing video games, going to the movies, etc., even if those activities don’t accomplish any long-term altruistic goals. So I don’t know what your issue with veganism is.
Depends. For a lot of people, some measure of entertainment helps recharge their batteries and do better work, much more so than veganism probably would. I’ll agree that excessive recreational time is a much bigger waste (for otherwise productive individuals) than veganism. I’m not singling veganism out here; it just happens to be the topic of discussion for this thread. If veganism recharges altruists’ batteries in a way similar to small amounts of recreation, and nothing better could do the job in either case, then veganism is justifiable for the same reason small amounts of recreation is.
I suspect that most people engage in much more entertainment than is necessary for recharging their batteries to do more work. I hope you don’t think that entertainment and recreation are justifiable only because they allow us to work.
This sounds like a fully general counterargument against doing almost anything at all.
Yes. I would interpret that as meaning that people spend too much time having small amounts of fun, rather than securing much larger amounts of fun for their descendants.
No, fun is intrinsically good. But it’s not so hugely intrinsically good that this good can outweigh large opportunity costs. And our ability to impact the future is large enough that small distractions, especially affecting people with a lot of power to change the world, can have big costs. I’m with Peter Singer on this one; buying a fancy suit is justifiable if it helps you save starving Kenyans, but if it comes at the expense of starving Kenyans then you’re responsible for taking that counterfactual money from them. And time, of course, is money too.
(I’m not sure this is a useful way for altruists to think about their moral obligations. It might be too stressful. But at this point I’m just discussing the obligations themselves, not the ideal heuristics for fulfilling them.)
It is, as long as you keep in mind that for every degree of utility there’s an independent argument favoring that degree over the one right below it. So it’s a fully general argument schema: ‘For any two incompatible options X and Y, if utility(X) > utility(Y), don’t choose Y if you could instead choose X.’ This makes it clear that the best option is preferable to all suboptimal options, even though somewhat suboptimal things are a lot better than direly suboptimal ones.
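A minimal formal rendering of that schema, in my own notation (U stands for utility; “Incompatible”, “Available”, and “Choose” are shorthand predicates I’m introducing, not the original wording):

    \forall X \,\forall Y\; \big[\, \mathrm{Incompatible}(X,Y) \wedge \mathrm{Available}(X) \wedge U(X) > U(Y) \;\rightarrow\; \neg\,\mathrm{Choose}(Y) \,\big]

Instantiating it for every adjacent pair of options is what yields the gradient: each option is condemned relative to the one just above it, and only the top option survives every instance.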
In that case, why are you spending time arguing against vegetarianism, instead of spending time arguing against behaviors that waste even more time and resources?
I felt like it was a bit unfair for you to use fully general counterarguments against veganism in particular. However, after your most recent reply, I can better see where you’re coming from. I think a better message to take from this essay (although I’m not sure Peter would agree) is that people in general should eat less meat, not necessarily you in particular. If you can get one other person to become a vegan in lieu of becoming one yourself, that’s just as good.
If non-vegans are less effective at reducing suffering than vegans due to a quirk of human psychology (i.e. cognitive dissonance preventing them from caring sufficiently about non-humans), then this becomes an ethical issue and not just a psychological one.
I agree with you here. I feel sufficiently confident that animal suffering matters, but the empirical evidence here is rather weak.
That’s some excellent steelmanning. I would also add that creating animals for food with lives barely worth living is better than not creating them at all, from a utilitarian (if repugnant) point of view. And it’s not clear whether a farm chicken’s life is below that threshold.
I think it’s fairly clear that a farm chicken’s life is well below that threshold. If I had the choice between losing consciousness for an hour or spending an hour as a chicken on a factory farm, I would definitely choose the former.
Ninja Edit: I think a lot of people have poor intuitions when comparing life to non-life because our brains are wired to strongly shy away from non-life. That’s why the example I gave above used temporary loss of consciousness rather than death. Even if you don’t buy the above example, I think it’s possible to see that factory-farmed life is worse than death. This article discussed how doctors—the people most familiar with medical treatment—frequently choose to die sooner rather than attempt to prolong their lives when they know they will suffer greatly in their last days. It seems that life on a factory farm would entail much more suffering than death by a common illness.
I probably would too, but I am not a chicken. I think you are over-anthropomorphizing them.
I don’t see why a chicken would choose any differently. We have no reason to believe that chicken-suffering is categorically different from human-suffering.
If we were to put a bunch of chickens into a room, and on one side of the room was a wolf, and the other side had factory farming cages that protected the chickens from the wolf, I would expect the chickens to run into the cages.
It’s true that chickens can comprehend a wolf much better than they can comprehend factory farming, but I’m not quite sure how that affects this thought experiment.
And I expect that a human would do the same thing.
I made a hash of that comment; I’m sorry.
This is testable; give the chickens a lever to peck that knocks them out for an hour.
Even if this is correct, in terms of value spreading it seems to be a very problematic message to convey. Most people are deontologists and would never even consider accepting this argument for human infants, so if we implicitly or explicitly accept it for animals, then this is just going to reinforce the prejudice that some forms of suffering are less important simply because they are not experienced by humans/our species. And such a defect in our value system may potentially have much more drastic consequences than the opportunity costs of not getting some extra life-years that are slightly worth living.
Then there is also an objection from moral uncertainty: if the animals in farms, and especially factory farms (where most animals raised for food are held), are above the “worth living” threshold, they are only barely above it! It’s not as though much is at stake (the situation would be different if we wireheaded them to experience constant orgasm). Conversely, if you’re wrong about classical utilitarianism being your terminal value, then all the suffering inflicted on them would be highly significant.
Robin Hanson has advocated this point of view.
I find the argument quite unconvincing; Hanson seems to be making the mistake of conflating “life worth living” with “not committing suicide” that is well addressed in MTGandP’s reply (and grandchildren).
This is a good point, and was raised below. Note that the argument doesn’t seem to be factually true, independent of moral considerations. (You don’t actually create more lives by eating meat.)
Regarding (4) (and to a certain extent 3 and 5): I assume you agree that a species feels phenomenal pain just in case it proves evolutionarily beneficial. So why would it improve fitness to feel pain only if you have “abstract thought”?
The major reason I have heard for phenomenal pain is learning, and all vertebrates show long-term behavior modification as the result of painful stimuli, as anyone who has taken a pet to the vet can verify. (Notably, many invertebrates do not show long-term modification, suggesting that vertebrate vs. invertebrate may be a non-trivial distinction.)
Richard Dawkins has even suggested that phenomenal pain is inversely related to things like “abstract thought”, although I’m not sure I would go that far.
Actually, I’m an eliminativist about phenomenal states. I wouldn’t be completely surprised to learn that the illusion of phenomenal states is restricted to humans, but I don’t think that this illusion is necessary for one to be a moral patient. Suppose we encountered an alien species whose computational substrate and architecture were so exotic that we couldn’t rightly call anything it experienced ‘pain’. Nonetheless it might experience something sufficiently pain-like, in its coarse-grained functional roles, that we would be monsters to start torturing members of this species willy-nilly.
My views about non-human animals are similar. I suspect their psychological states are so exotic that we would never recognize them as pain, joy, sorrow, surprise, etc. (I’d guess this is more true for the positive states than the negative ones?) if we merely glimpsed their inner lives directly. But the similarity is nonetheless sufficient for our taking their alien mental lives seriously, at least in some cases.
So, I suspect that phenomenal pain as we know it is strongly tied to the evolution of abstract thought, complex self-models, and complex models of other minds. But I’m open to non-humans having experiences that aren’t technically pain but that are pain-like enough to count for moral purposes.
RobbBB, in what sense can phenomenal agony be an “illusion”? If your pain becomes so bad that abstract thought is impossible, does your agony—or the “illusion of agony”—somehow stop? The same genes, same neurotransmitters, same anatomical pathways and same behavioural responses to noxious stimuli are found in humans and the nonhuman animals in our factory-farms. A reasonable (but unproven) inference is that factory-farmed nonhumans endure misery—or the “illusion of misery” as the eliminativist puts it—as do abused human infants and toddlers.
I guess maybe I just didn’t understand how you were using the term “pain”—I agree that other species will feel things differently, but being “pain-like enough to count for moral purposes” seems to be the relevant criterion here.
A strong assertion of this principle can be found here.