Though I easily grant that e.g. cows can experience pain, I am not entirely convinced that it’s sensible to refer to their mental states and ours by the same word, “suffering”. I think this terminological conflation, too, begs the question. But that is a side issue.
Why? I actually think this is an important consideration. Is “suffering” by definition something only humans can do? If so, isn’t this arbitrarily restricting the definition? If not, do you doubt something empirical about nonhuman animal minds?
~
My objection was precisely to (1). Why should we care about suffering regardless of who or what is suffering? I care about the suffering of humans, or other beings of sufficient (i.e. approximately-human) intelligence to be self-aware. You seem to think I should care about “suffering”[1] more broadly.
You’ve characterized my argument correctly. It seems to me that most people already care about the suffering of nonhuman animals without quite realizing it, which is why, on an intuitive level, they resist kicking kittens and puppies. But I acknowledge that some people aren’t like this.
I don’t think there’s a good track record for the success of moral arguments. As a moral anti-realist, I must admit that there’s nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.
What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?
As a moral anti-realist, I must admit that there’s nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.
Suppose morality is a ‘mutual sympathy pact,’ and it seems neither weird nor arbitrary to decide how sympathetic to be to others by their ability to be sympathetic towards you. Suppose instead that morality is a ‘demonstration of compassion,’ and the reverse effect holds—sympathizing with the suffering of those unable to defend themselves (and thus unable to defend you) demonstrates more compassion than the previous approach which requires direct returns. (There are, of course, indirect returns to this approach.)
I’m confused as to what those considerations are supposed to demonstrate.
Basically, I don’t think much of your counterargument because it’s unimaginative. If you ask the question of what morality is good for, you find a significant number of plausible answers, and different moralities satisfy those values to different degrees. If you can’t identify what practical values are encouraged by holding a particular moral principle, what argument do you have for that moral principle besides that you currently hold it?
I don’t think moral principles are validated with reference to practical self-interested considerations.
What do you think moral principles are validated by?
Or, to ask a more general question, what could they possibly be validated by?
Broadly, I think moral principles exist as logical standards by which actions can be measured. It’s a fact whether a particular action is endorsed by utilitarianism or deontology, etc. Therefore moral facts exist in the same realm as any other sort of fact.
More specifically, I think the actual set of moral principles someone lives by is a personal choice that is subject to a lot of factors. Some of it might be self-interest, but even if it is, it’s usually indirect, not overt.
But standards are not facts. They are metrics, in the same way that a unit of length, say the meter, is not a fact but a metric.
True. But whether something meets a standard is a fact. While a meter is a standard, it’s an objective fact that my height is approximately 1.85 meters.
~
How do you validate the choice of meters (and not, say, yards) to measure?
Social consensus. Also, a meter is much easier to use than a yard.
~
The usual answer is “fitness for a purpose”, but how does this work for morality?
Standards could be evaluated on further desiderata, like internal consistency and robustness in the face of thought experiments.
Social consensus and ease of use could also be factors.
I agree. You can state as a fact whether some action meets some standard of morality. That does nothing to validate a standard of morality, however.
Oh, boy. Social consensus, ease of use, really?
I’m not sure a standard of morality could ever be validated in the way you might like.
What do you think validates a standard of morality?
Nothing, pretty much. I think standards of morality cannot be validated.
That’s not a very helpful retort.
I don’t know if you think your position is defensible or it was just a throwaway line. It’s rather trivial to construct a bunch of moralities which will pass your validation criteria and look pretty awful at the same time.
It seems to me things like social consensus and ease of use are factors in determining whether a morality is popular, but I don’t see how they can validate moral values.
Nothing, pretty much. I think standards of morality cannot be validated.
In a handful of discussions now, you’ve commented “X doesn’t do Y,” and then later followed up with “nothing can do Y,” which strikes me as logically rude compared to saying “X doesn’t do Y, which I see as a special case of nothing doing Y.” For example, in this comment, asking the question “what does it mean for a moral principle to be validated?” seems like the best way to clarify peter_hurford’s position.
I do think that standards of morality can be ‘validated,’ but what I mean by that is that standards of morality have practical effects if implemented, and one approach to metaethics is to choose a moral system by the desirability of its practical effects. I understood peter_hurford’s response here to be “I don’t think practical effects are the reason to follow any morality.”
This comment makes great sense inside of a morality, because moralities often operate by setting value systems. If one decides to adopt a value system which requires vegetarianism in order to signal that they are compassionate, that suggests their actual value system is the one which rewards signalling compassion. To use jargon, moralities want to be terminal goals, but in this metaethical system they are instrumental goals.
I don’t think this comment makes sense outside of a morality (i.e. I have a low opinion of the implied metaethics). If one is deciding whether to adopt morality A or morality B, knowing that A thinks B is immoral and B thinks A is immoral doesn’t help much (this is the content of the claim that a moral sphere restricted to humans is weird and arbitrary.) Knowing that morality A will lead to a certain kind of life and morality B will lead to a different kind of life seems more useful (although there’s still the question of how to choose between multiple kinds of lives!).
This leads to the position that even if you have the Absolutely Correct Morality handed to you by God, so long as that morality is furthered by more adherents it would be useful to think outside of that morality because standard persuasion advice is to emphasize the benefits the other party would receive from following your suggestion, rather than emphasizing the benefits you would receive if the other party follows your suggestions (“I get a referral bonus from the Almighty for every soul I save” is very different from “you’ll much prefer being in Heaven over being in Hell”). Instead of showing how your conclusion follows from your premises, it’s more effective to show how your conclusion is implied by their premises.
(I should point out that you can sort of see this happening by the use of “weird and arbitrary” as they don’t make sense as a logical claim but do make sense as a social claim. “All the cool kids are vegetarian these days” is an actual and strong reason to become vegetarian.)
Well, I didn’t mean to be rude but I’ll watch myself a bit more carefully for such tendencies. Talking to people over the ’net leads one to pick up some unfortunate habits :-)
For example, in this comment...
That one actually was a bona fide question. I didn’t think morality could be validated, but on the other hand I didn’t spend too much time thinking about the issue. So—maybe I was missing something, and this was a question with the meaning of “well, how could one go about it?” Maybe there was a way which didn’t occur to me.
one approach to metaethics is to choose a moral system by the desirability of its practical effects.
I am not a big fan of such an approach because I think that in this respect ethics is like philosophy—any attempts at meta very quickly become just another ethics or just another philosophy. And choosing on the basis of consequences is the same thing as expecting a system of ethics to be consistent (since you evaluate the desirability of consequences on the basis of some moral values). In other words I don’t think ethics can be usefully tiered—it’s a flat system.
Oh, and I think that moralities do not set value systems. Moralities are value systems. And they are terminal goals (or criteria, or metrics, or standards), they cannot be instrumental (again, because it’s a flat system).
“All the cool kids are vegetarian these days” is an actual and strong reason to become vegetarian.
I very strongly disagree with this. From the descriptive side individual morality of course is influenced by social pressure. From the normative side, however, I don’t believe it should be.
I am not a big fan of such an approach because I think that in this respect ethics is like philosophy—any attempts at meta very quickly become just another ethics or just another philosophy.
Agreed that a given metaethical approach will cash out as a particular ethics in a particular situation. The reason I think it’s useful to go to metaethics is because you can then see the linkage between the situation and the prescription, which is useful for both insight and correcting flaws in an ethical system. I also think that while infinite regress problems are theoretically possible, for most humans there is a meaningful cliff suggesting it’s not worth it to go from meta-meta-ethics to meta-meta-meta-ethics, because to me ethics looks like a set of behaviors and responses, metaethics looks like psychology and economics, and meta-meta-ethics looks like biology.
I very strongly disagree with this. From the descriptive side individual morality of course is influenced by social pressure. From the normative side, however, I don’t believe it should be.
It seems to me that there are a lot of obvious ways for morality derived without any sort of social help to go wrong, but we may be operating under different conceptions of ‘pressure.’
because you can then see the linkage between the situation and the prescription
Can you give me an example where metaethics is explicitly useful for that? I don’t see why in flat/collapsed ethics this should be a problem.
to me ethics looks like a set of behaviors and responses, metaethics looks like psychology and economics, and meta-meta-ethics looks like biology.
Ah. Interesting. To me ethics is the practical application (that is, actions) of morality, which is a system of values. Morality is normative. Psychology and economics for me are descriptive (with an important side-note that they describe not only what is, but also boundaries for what is possible/likely). Biology provides powerful external forces and boundaries which certainly shape and affect morality, but they are external—you have to accept them as a given.
there are a lot of obvious ways for morality derived without any sort of social help to go wrong
Of course, but so what? I suspect this issue will turn on the attitude towards the primacy of social vs the primacy of individual.
Can you give me an example where metaethics is explicitly useful for that? I don’t see why in flat/collapsed ethics this should be a problem.
Sure, but first I should try to be a little clearer: by ‘situation’ here I mean the incentives on the agent, not any particular dilemma. That is, I would cluster ethics as rules that eat scenarios and output actions, and meta-ethics as rules that eat agent-world pairs and output ethics. As a side note, I think requiring this sort of functional upgrade when you move up a meta level makes the transition much more meaningful and makes infinite regress much less likely to happen practically.
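As a minimal sketch of that functional framing (the type names and the toy rule below are purely illustrative, not anything proposed in this thread), it might look like:

```python
# A toy rendering of "ethics eats scenarios and outputs actions;
# meta-ethics eats agent-world pairs and outputs ethics."
from typing import Callable, Tuple

Scenario = str                             # e.g. "deciding what to eat"
Action = str                               # e.g. "skip the meat"
AgentWorld = Tuple[str, str]               # (agent's incentives, facts about the world)

Ethic = Callable[[Scenario], Action]       # rules: scenario -> action
MetaEthic = Callable[[AgentWorld], Ethic]  # rules: agent-world pair -> ethic

def toy_meta_ethic(agent_world: AgentWorld) -> Ethic:
    """Pick an ethic based on the agent's situation (an invented rule)."""
    incentives, world = agent_world
    if world == "food is cheap and varied":
        return lambda scenario: "follow the stricter rule"
    return lambda scenario: "follow the more lenient rule"

ethic = toy_meta_ethic(("ordinary self-interest", "food is cheap and varied"))
print(ethic("deciding what to eat"))       # follow the stricter rule
```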
I should also comment that I’ve been using ethics and morality interchangeably in this series of comments, even though I think it’s useful for the terms to be different along the lines you describe (of differentiating between value systems and action systems), mostly because I want to describe the system of picking value systems as meta-ethics instead of meta-morality.
It also seems worthwhile to remember that for most people, stated justifications follow decisions rather than decisions following stated justifications. This matches up with making decisions in near mode and justifying those decisions in far mode, which in the language I’m using here would look like far mode as ethics and near mode as meta-ethics.
An example would be vegetarianism. Vegetarianism in modern urban America, with a well-developed understanding of nutrition, is about as healthy as also eating animal products (possibly healthier, possibly less healthy, probably dependent on individual biology). Vegetarianism in undeveloped or rural areas is generally associated with malnutrition (often at subclinical levels, but that still has an effect on health and longevity). A metaethical system which recommends vegetarianism in America where it’s cheap and recommends against it in undeveloped areas where it’s expensive seems easy to construct; an ethical system which measures the weal gained and the woe inflicted to animals by eating meat and gets the balancing parameter just right to make the same recommendations seems difficult to construct.
(The operative phrase of that last sentence being the ‘balancing parameter’: if the stated justifications are driving the decisions, they need to be doing so quantitatively, and the parameters need to be inputs to the model, not outputs. It’s easy to say “this is the rule I want, find a parameter to implement that rule,” but difficult to say “this is the right parameter to use, and that results in this rule.”)
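To make the “parameter as input, not output” point concrete, here is a toy model with invented numbers; the animal-welfare weight only reproduces the region-sensitive rule when it is tuned to that rule, whereas the “metaethical” rule keys on the situation directly:

```python
# Toy numbers, chosen only to show the structure of the argument.
def net_value(region: str, vegetarian: bool, w: float) -> float:
    """Agent's health effect plus w times the effect on animal welfare."""
    health = {("america", True): 0.0, ("america", False): 0.0,
              ("rural", True): -2.0, ("rural", False): 0.0}[(region, vegetarian)]
    animal_effect = 0.0 if vegetarian else -1.0
    return health + w * animal_effect

def ethic_recommends_veg(region: str, w: float) -> bool:
    return net_value(region, True, w) > net_value(region, False, w)

def meta_rule_recommends_veg(region: str) -> bool:
    # Keys directly on the agent's situation: vegetarian where it's cheap.
    return region == "america"

for w in (1.0, 3.0):
    matches = [ethic_recommends_veg(r, w) == meta_rule_recommends_veg(r)
               for r in ("america", "rural")]
    print(w, matches)
# w = 1.0 matches the meta-rule in both regions; w = 3.0 recommends
# vegetarianism everywhere -- the weight has to be reverse-engineered
# from the rule one already wanted.
```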
I suspect this issue will turn on the attitude towards the primacy of social vs the primacy of individual.
Even if you give individuals primacy, specialization of labor is still a powerful force for efficiency. (For social influence to be an obvious net negative, I think you would need individual neurological diversity on a level far higher than we currently have, even though we do see some negative impacts at our current level of neurological diversity.)
I would cluster ethics as rules that eat scenarios and output actions, and meta-ethics as rules that eat agent-world pairs and output ethics.
Ah. That makes a lot of sense.
for most people, stated justifications follow decisions rather than decisions following stated justifications.
True, but the key word here is “stated”.
making decisions in near mode and justifying those decisions in far mode, which in the language I’m using here would look like far mode as ethics and near mode as meta-ethics.
That doesn’t look right to me. For most people (those who justify post-factum) the majority of their ethics is submerged, below their consciousness level. That’s why “stated” is a very important qualifier. People necessarily make decisions based on their “real” ethics but bringing the real reasons to the surface might not be psychologically acceptable to them, so post-factum justifications come into play.
I don’t think people making decisions in near mode apply rules that “eat agent-world pairs and output ethics”. I think that for many people factors like “convenience”, “lookin’ good”, and “let’s discount the far future to zero” play a considerably larger role in their real ethics than they are willing to admit or even realize.
A metaethical system which recommends vegetarianism in America where it’s cheap and recommends against it in undeveloped areas where it’s expensive seems easy to construct; an ethical system which measures the weal gained and the woe inflicted to animals by eating meat and gets the balancing parameter just right to make the same recommendations seems difficult to construct.
I don’t see how this is so. Your meta-ethical system will still need to get that balancing parameter “just right” unless you start with the end result being known. Just because you divide the path from moral axioms to actions into several stages you don’t get to avoid sections of that path, you still need to walk it all.
Oh, and I don’t believe modern America has a “well-developed understanding of nutrition”, though it’s a separate discussion altogether.
Even if you give individuals primacy, specialization of labor is still a powerful force for efficiency.
I don’t understand. What does specialization of labor have to do with morality?
And perhaps I should clarify my reaction. When I saw “All the cool kids are vegetarian these days” called “an actual and strong reason” to adopt this morality—well, my first thought was “All the cool kids root out hidden Jews / string up uppity Negroes / find and kill Tutsi / denounce the educated agents of imperialism / etc.” That must be an actual and strong reason to adopt this set of morals as well, right?
I don’t know how to figure out whether social influence is a net positive given that in practice social influence is always there and you can’t find a control group. My point is that accepting morality because many other people seem to follow it is a very dubious heuristic for me.
Agreed that it’s a stretch; “hidden ethics” and “stated ethics” is a much more natural divide for the two. I do think that “convenience” and “lookin’ good” depend on the agent-world pair, but I think the adaptation is opaque and slow (i.e. learned when you’re young, over a long period) rather than explicit and fast.
I don’t see how this is so.
I was unclear there as well; I’m assuming that the “right” result is the one that maximizes the health and social standing of the implementer. Targeting that directly is easy; targeting it indirectly by using animal welfare is hard.
Oh, and I don’t believe modern America has a “well-developed understanding of nutrition”, though it’s a separate discussion altogether.
I was unclear; I meant that vegetarianism is safer for individuals with a well-developed understanding, not that urban America as a whole has a well-developed understanding.
I don’t understand. What does specialization of labor have to do with morality?
Many moral questions are hard to figure out, especially when they rely on second or third order effects. Think of the parable of the broken window, of journalistic, clerical, or medical ethics which promise non-intervention or secrecy. There is strong value in the communication of moral claims, which I’m not sure how to distinguish from social pressure (and think social pressure may be a necessary part of communicating those claims).
There is strong value in the communication of moral claims
It seems to me the issues of trust and credibility are dominant here. People get moral claims thrown at them constantly from different directions, many of them are incompatible or sometimes even direct opposites of each other. One needs some system of sorting them out, of evaluating them and deciding whether to accept them or not. Popularity is, of course, one such system but it has its problems, especially when moral claims come from those with power. There are obvious incentives in spreading moral memes advantageous to you.
I guess I see the social communication of moral claims to be strongly manipulated by those who stand to gain from it (which basically means those with power—political, commercial, religious, etc.) and so suspect.
Nothing, pretty much. I think standards of morality cannot be validated.
I think we agree there, then.
It seems to me things like social consensus and ease of use are factors in determining whether a morality is popular, but I don’t see how they can validate moral values.
I was thinking of a different kind of “validation”.
Why? I actually think this is an important consideration. Is “suffering” by definition something only humans can do? If so, isn’t this arbitrarily restricting the definition? If not, do you doubt something empirical about nonhuman animal minds?
I try not to argue by definition, so it’s the latter: I have empirical concerns. See this post, point 4 (but also 3 and 5), for a near-perfect summary of my concerns.
That said, my overall objection to your view does not hinge on this point.
As a moral anti-realist, I must admit that there’s nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.
Well, firstly, I have to point out that I am not restricting my moral sphere to humans, per se. (Of known existing creatures, dolphins may qualify for membership; of imaginable creatures, aliens and AIs might.) In any case, the circle I draw seems quite non-arbitrary, even obvious, to me; but I suppose this only speaks to the non-universality of moral intuitions.
What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?
That would indeed seem weird and arbitrary. One objection I might raise to such a person is that it’s non-trivial, in many cases, to discern someone’s “whiteness”, not to mention one’s exact ancestry. “European” is not a sharp boundary where humans are concerned, and a great many factors confound such categorization. Most of my other objections would be aimed at drawing out the moral intuitions behind this person’s judgments about what sorts of beings are objects of morality (do they think “superficial” characteristics matter as much as functional ones? what is their response to various thought experiments such as brain transplant scenarios? etc.). It seems to me that there are both empirical facts and analytic arguments that would shift this person’s position closer to my own; a logically contradictory, empirically incoherent, or reflectively inconsistent moral position is generally bound to be less convincing.
(Of course, I might answer entirely differently. I might say: no, I would not be fine with that, because my own ancestry may or may not be classified as “European” or “white”, depending on who’s doing the classifying. So I would, quite naturally, argue against a moral circle drawn thus. Moral anti-realism notwithstanding, I might convince some people (and in fact that seems to be, in part, how the American civil rights movement, and similar social movements across the world, have succeeded: by means of people who were previously outside the moral circle arguing for their own inclusion). Cows, of course, cannot attempt to persuade us that we should include them in our moral considerations. I do not take this to be an irrelevant fact.)
Me: What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?
You: That would indeed seem weird and arbitrary. One objection I might raise to such a person is that it’s non-trivial, in many cases, to discern someone’s “whiteness”, not to mention one’s exact ancestry. “European” is not a sharp boundary where humans are concerned...
I think that fights the hypothetical a bit much. Imagine something a bit sharper, like citizenship. Why not restrict our moral sphere to US citizens? Or take Derek Parfit’s within-a-mile altruism, where you only have concern for people within a mile of you. Weird, I agree. But irrational? Hard to demonstrate.
~
I try not to argue by definition, so it’s the latter: I have empirical concerns. See this post, point 4 (but also 3 and 5), for a near-perfect summary of my concerns.
So do you think nonhuman animals may not suffer? I agree that’s a possibility, but it’s not likely. What do you think of the body of evidence provided in this post?
I don’t think there is a tidy resolution to this problem. We’ll have to take our best guess, and that involves thinking nonhuman animals suffer. We’d probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Occam’s razor approach.
~
It seems to me that there are both empirical facts and analytic arguments that would shift this person’s position closer to my own; a logically contradictory, empirically incoherent, or reflectively inconsistent moral position is generally bound to be less convincing.
The moral sphere needn’t work like a threshold, where one should extend equal concern to everyone within the sphere and no concern at all to anyone outside it. My moral beliefs are not cosmopolitan—I think it is morally right to care more for my family than for absolute strangers. In fact, I think it is a huge failing of standard utilitarianism that it doesn’t deliver this verdict (without having to rely on post-hoc contortions about long-term utility benefits). I also think it is morally acceptable to care more for people cognitively similar to me than for people cognitively distant (people with radically different interests/beliefs/cultural backgrounds).
This doesn’t mean that I don’t have any moral concern at all for the cognitively distant. I still think they’re owed the usual suite of liberal rights, and that I have obligations of assistance to them, etc. It’s just that I would save the life of one of my friends over the lives of, say, three random Japanese people, and I consider this the right thing to do.
I follow a similar heuristic when I move across species. I think we owe the great apes more moral consideration than we owe, say, dolphins. I don’t eat any mammals but I eat chicken.
The idea of a completely cosmopolitan ethic just seems bizarre to me. I can see why one would be motivated to adopt it if the only alternative were caring about some subset of people/sentient beings and not caring at all about anyone else. Then there would be something arbitrary about where one draws the line. But this is not the most plausible alternative. One could have a sphere of moral concern that doesn’t just stop suddenly but instead attenuates with distance.
The morality you suggest is what Derek Parfit calls collectively self-defeating. This means that if everyone were to follow it perfectly, there could be empirical situations where your actual goals, namely the well-being of those closest to you, are achieved less well than they would be if everyone followed a different moral view. So there could be situations in which people have more influence on the well-being of the families of strangers, and if they’d all favor their own relatives, everyone would end up worse off, despite everyone acting perfectly moral. Personally I want a world where everyone acts perfectly moral to be as close to Paradise as is empirically possible, but whether this is something you are concerned about is a different question (that depends on what question you’re seeking to answer by coming up with a coherent moral view).
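A toy illustration of the “collectively self-defeating” structure, with invented payoffs (each agent can direct a unit of effort at their own family, or at the other agent’s family, over which they happen to have more influence):

```python
def family_outcomes(policy_a: str, policy_b: str) -> dict:
    """Welfare of each family given the two agents' policies (toy numbers)."""
    outcomes = {"family_a": 0.0, "family_b": 0.0}
    for agent, policy in (("a", policy_a), ("b", policy_b)):
        own, other = ("family_a", "family_b") if agent == "a" else ("family_b", "family_a")
        if policy == "favor_own":
            outcomes[own] += 1.0      # small benefit: little influence here
        else:                         # "help_where_influence_is_greatest"
            outcomes[other] += 3.0    # large benefit: more influence here
    return outcomes

print(family_outcomes("favor_own", "favor_own"))
# {'family_a': 1.0, 'family_b': 1.0}
print(family_outcomes("help_where_influence_is_greatest",
                      "help_where_influence_is_greatest"))
# {'family_a': 3.0, 'family_b': 3.0}: each agent's own goal (their family's
# well-being) is served worse when everyone follows the family-first rule.
```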
By this reasoning everyone should give all their money and resources to charity (except to the extent that they need some of their resources to keep their job and make money).
People are motivated to do things that make money because the money benefits themselves and their loved ones. Many such things are also beneficial to everyone, either directly (inventors, for instance, or people who manufacture useful goods), or indirectly (someone who is just willing to work hard because working hard benefits themselves, thus producing more and improving the economy). In a world where everyone gave their money to random strangers and kept everyone at an equal level of wealth, nobody would be able to make any money (since 1) any money they made would be offset by a reduction in the money other people gave them, and 2) they would feel (by hypothesis) obligated to give away the proceeds anyway). This would mean that money as a motivation would no longer exist, and we would lose everything that we gain when money is a motivation. That would be bad.
Even if you modified the rule to “I should give money to people so as to arrange an equal level of wealth except where necessary to provide motivation”, in deciding exactly who gets your money you’d essentially have a planned economy done piecemeal by billions of individual decisions. Unlike a normal planned economy, it wouldn’t be imposed from the top, but it would have the same problem as a normal planned economy in that there’s really nobody competent to plan such a thing. The result would be disaster.
So overall it would be a better world if people kept the money they made even if someone else could use it more than they could.
Furthermore, the state where everyone acts this way is unstable. Even if your family would be better off if everyone acted that altruistically, your family would be worse off if half the world acted that way and you and they were part of that half.
Yes. At least as long as there are problems in the world. What’s wrong with that?
Everyone, including nonhumans, would have their interests/welfare-function fulfilled as well as possible. If I had to determine the utility function of moral agents before being placed into the world in any position at random, I would choose some form of utilitarianism from a selfish point of view because it maximizes my expected well-being. If doing the “morally right” thing doesn’t make the world a better place for the sentient beings in the world, I don’t see a reason to call it “right”. Also note that this is not an all-or-nothing issue, it seems unfruitful to single out only those actions that produce the perfect outcome, or the perfect outcome in expectation. Every improvement into the right direction counts, because every improvement leads to someone else being better off.
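A toy version of the expected-well-being step, with invented numbers for four possible positions one might be placed into:

```python
# Well-being of each position under two collective policies (made-up numbers).
impartial_world = [5, 5, 5, 5]   # help goes wherever it does the most good
partial_world = [9, 6, 2, 1]     # everyone favors those close to them

def expected_wellbeing(positions):
    """Expected well-being for an agent placed into a random position."""
    return sum(positions) / len(positions)

print(expected_wellbeing(impartial_world))  # 5.0
print(expected_wellbeing(partial_world))    # 4.5
# A chooser who does not yet know their position prefers the policy with the
# higher expectation, which is the sense of the claim above.
```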
If all the agents in the situation acted according to utilitarianism, everyone would be better off. To the extent that everyone acting according to common sense morality predictably fails to bring about the best of all possible worlds in this situation, and to the extent that one cares about this fact, this constitutes an argument against common sense morality.
Of course, if decision theory or game theory could make those agents cooperate successfully (so they don’t do predictably worse than other moralities anymore) in all logically possible situations, then the objection disappears. I see no reason to assume this, though.
This seems nonsensical; a utility function does not prescribe actions. If I care about my family most, but acting in a certain way will cause them to be worse off, then I won’t act that way. In other words, if everyone acting perfectly moral causes everyone to end up worse off, then by definition, at least some people were not acting perfectly moral.
The problem is not with your actions, but with the actions of all the others (who are following the same general kind of utility function, but because your utility function is agent-relative, they use different variables, i.e. they care primarily about their own family and friends as opposed to yours). However, I was in fact wondering whether this problem disappears if we make the agents timeless (or whatever does the job), so they would cooperate with each other to avoid the suboptimal outcome. This seems fair enough since acting “perfectly moral” seems to imply the best decision theory.
Does this solve the problem? I think not; we could tweak the thought experiment further to account for it: we could imagine that due to empirical circumstances, such cooperation is prohibited. Let’s assume that the agents lack the knowledge that the other agents are timeless. Is this an unfair addendum to the scenario? I don’t see why, because given the empirical situation (which seems perfectly logically possible) the agents find themselves in, the moral algorithm they collectively follow may still lead to results that are suboptimal for everyone concerned.
No, but you need some decision theory to go with your utility function, and I was considering the possibility that Parfit merely pointed out a flaw of CDT and not a flaw of common sense morality. However, given that we can still think of situations where common sense morality (no matter the decision theory) executed by everyone does predictably worse for everyone concerned than some other theory, Parfit’s objection still stands.
(Incidentally, I suspect that there could be situations where modifying your utility function is a way to solve a prisoner’s dilemma, but that wasn’t what I meant here.)
It seems implausible to me that there is any ethical decision procedure that human beings (rather than idealized perfectly informed and perfectly rational super-beings) could follow that wouldn’t be collectively self-defeating in this sense. Do you (or Parfit) have an example of one that isn’t?
Anyway, I don’t see this as a huge problem. First, I’m pretty sure I’m never going to live in a world (or even a close approximation to one) where everyone adheres to my moral beliefs perfectly. So I don’t see why the state of such a world should be relevant to my moral beliefs. Second, my moral beliefs are ultimately beliefs about which consequences—which states of the world—are best, not beliefs about which actions are best. If there was good evidence that acting in a certain manner (in the aggregate) wasn’t effective at producing morally better states of affairs, then I wouldn’t advocate acting in that manner.
But I am not convinced that following a cosmopolitan decision procedure (or advocating that others follow one) would empirically be an effective means to achieving my decidedly non-cosmopolitan moral ends. Perhaps if everyone in the world mimicked my moral behavior (or did what I told them) it would be, but alas, that is not the case.
Utilitarianism is not collectively self-defeating, but then there’d be no room for non-cosmopolitan moral ends.
(rather than idealized perfectly informed and perfectly rational super-beings)
This part shouldn’t make a difference. If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility. A theory is termed “indirectly individually self-defeating” if it implies it would be best to follow some other theory. Parfit concludes, and I agree with him here, that this is not a reason to reject U. U doesn’t imply that one ought to actively implement utilitarianism, it only wants you to bring about the best consequences regardless of how this happens.
If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility.
This is a pretty dubious move. Why think there will be easy to follow rules that will maximize aggregate utility? And even if such rules exist, how would we go about discovering them, given that the reason we need them in the first place is due to our inability to fully predict the consequences of our actions and their attached utilities?
Do you just mean that we should pick easy to follow rules that tend to produce more utility than other sets of easy to follow rules (as far as we can figure out), but not necessarily ones that maximize utility relative to all possible patterns of behavior? In that case, I don’t see why your utilitarianism isn’t collectively self-defeating according to the definition you gave. A world in which everyone acts according to such rules will not be a world that is as close to the utilitarian Paradise as empirically possible. After all, it seems entirely empirically possible for people to accurately recognize particular situations where actions contrary to the rules produce higher utility.
Also note that the view you outlined is often concerned with the question of helping others. When it comes to not harming others, many people would agree with the declaration of human rights that inflicting suffering is equally bad regardless of one’s geographical or emotional proximity to the victims. Personal vegetarianism is an instance of not harming.
I disagree with cosmopolitanism when it comes to “not harming” as well. I think needlessly inflicting suffering on human beings is always really bad, but it is worse if, say, you do it to your own children rather than to a random stranger’s.
I basically agree with pragmatist’s response, with the caveat only that I think many (most?) people’s moral spheres have too steep a gradient between “family, for whom I would happily murder any ten strangers” and “strangers, who can go take a flying leap for all I care”. My own gradient is not nearly that steep, but the idea of a gradient rather than a sharp border is sound. (Of course, since it’s still the case that I would kill N chickens to save my grandmother, where N can be any number, it seems that chickens fall nowhere at all on this gradient.)
So do you think nonhuman animals may not suffer? I agree that’s a possibility, but it’s not likely. What do you think of the body of evidence provided in this post?
Well, you can phrase this as “nonhuman animals don’t suffer”, or as “nonhuman animal suffering is morally uninteresting”, as you see fit; I’m not here to dispute definitions, I assure you. As for the evidence, to be honest, I don’t see that you’ve provided any. What specifically do you think offers up evidence against points 3 through 5 of RobbBB’s post?
[Thinking that nonhuman animals suffer] would also be consistent with an Occam’s razor approach.
I don’t think so; or at least this is not obviously the case.
What [empirical facts and analytic arguments] would you suggest?
Well, just the stuff about boundaries and hypotheticals and such that you referred to as “fighting the hypothetical”. Is there something specific you’re looking for, here?
The essay cited the Cambridge Declaration of Consciousness, as well as a couple of other pieces of evidence.
That’s not evidence, that’s a declaration of opinion.
In particular, reading things like “Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots” (emphasis mine) makes me highly sceptical of that opinion.
It’s not scientific evidence, but it is rational evidence. In Bayesian terms, a consensus statement of experts in the field is probably much stronger evidence than, say, a single peer-reviewed study. Expert consensus statements are less likely to be wrong than almost any other form of evidence where I don’t have the necessary expertise to independently evaluate claims.
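A toy Bayes update showing how “stronger evidence” cashes out; the likelihood ratios below are assumptions for illustration only, the idea being that a broad consensus statement is much less likely to exist if the claim were false than a single supportive study is:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) by Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5
# Single supportive study: assumed fairly likely even if H is false.
print(round(posterior(prior, 0.6, 0.3), 2))   # 0.67
# Expert consensus statement: assumed much rarer if H is false.
print(round(posterior(prior, 0.6, 0.05), 2))  # 0.92
```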
It’s not scientific evidence, but it is rational evidence.
Not if I believe that this particular panel of experts is highly biased and is using this declaration instrumentally to further their undeclared goal.
In Bayesian terms, a consensus statement of experts in the field is probably much stronger evidence than, say, a single peer-reviewed study.
That may or may not be true, but doesn’t seem to be particularly relevant here. The question is what constitutes “near human-like levels of consciousness”. If you point to an African grey as your example, I’ll laugh and walk away. Maybe, if I were particularly polite, I’d ask in which meaning you’re using the word “near” here.
The question is what constitutes “near human-like levels of consciousness”. If you point to an African grey as your example, I’ll laugh and walk away.
If I were in your place, I’d be skeptical of my own intuitions regarding the level of consciousness of African grey parrots. Reality sometimes is unintuitive, and I’d be more inclined to trust the expert consensus than my own intuition. Five hundred years ago, I probably would have laughed at someone who said we would travel to the moon one day.
This is evidence from reality. In reality, a bunch of neuroscientists organized by a highly respectable university all agree that many non-human animals are approximately as conscious as humans. This is very strong Bayesian evidence in favor of this proposition being true.
What form of evidence would you find more convincing than this?
all agree that many non-human animals are approximately as conscious as humans
That’s not a statement of fact. That’s just their preferred definition for the expression “approximately as conscious as humans”. I can define slugs to be “approximately as conscious as humans” and point out that compared with rocks, they are.
That’s just their preferred definition for the expression “approximately as conscious as humans”. I can define slugs to be “approximately as conscious as humans” and point out that compared with rocks, they are.
That interpretation of the quoted expression strikes me as implausible, especially in the context of the other statements made in the declaration; for example: “Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought.” This indicates that humans’ and birds’ consciousnesses are more similar than most people intuitively believe.
Again, I ask: What form of evidence would you find more convincing than the Cambridge Declaration of Consciousness?
What form of evidence would you find more convincing than the Cambridge Declaration of Consciousness?
Evidence of what?
It seems that you want to ask a question “Are human and non-human minds similar?” That question is essentially about the meaning of the word “similar” in this context—a definition of “similar” would be the answer.
There are no facts involved, it’s all a question of terminology, of what “approximately as conscious as humans” means.
Sure, you can plausibly define some metric (or several of them) of similarity-to-human-mind and arrange various living creatures on that scale. But that scale is continuous, and unless you have a specific purpose in mind, thresholds are arbitrary. I don’t know why defining only a few mammals and birds as having a mind similar-to-human is more valid than defining everything up to a slug as having a mind similar-to-human.
I originally posted the Cambridge Declaration of Consciousness because Peter asked you, “What do you think of the body of evidence provided in this post [that nonhuman animals suffer]?” You said he hadn’t provided any, and I offered the Cambridge Declaration as evidence. The question is, in response to your original reply to Peter, what would you consider to be meaningful evidence that non-human animals suffer in a morally relevant way?
what would you consider to be meaningful evidence that non-human animals suffer in a morally relevant way?
I freely admit that animals can and do feel pain. “Suffer” is a complicated word and it’s possible to debate whether it can properly be applied only to humans or not only. However for simplicity’s sake I’ll stipulate that animals can suffer.
Now, a “morally relevant way” is a much more iffy proposition. It depends on your morality which is not a matter of facts or evidence. In some moral systems animal suffering would be “morally relevant”, in others it would not be. No evidence would be capable of changing that.
African grays are pretty smart. I’m not sure I’d go so far as to call them near-human, but from what I’ve read there’s a case for putting them on par with cetaceans or nonhuman primates.
The real trouble is that the research into this sort of thing is fiendishly subjective and surprisingly sparse. Even a detailed ordering of relative animal intelligence involves a lot of decisions about which researchers to trust, and comparison with humans is worse.
Why? I actually think this is an important consideration. Is “suffering” by definition something only humans can do? If so, isn’t this arbitrarily restricting the definition? If not, do you doubt something empirical about nonhuman animal minds?
~
You’ve characterized my argument correctly. It seems to me that most people already care about the suffering of nonhuman animals without quite realizing it, i.e. why they on the intuitive level resist kicking kittens and puppies. But I acknowledge that some people aren’t like this.
I don’t think there’s a good track record for the success of moral arguments. As a moral anti-realist, I must admit that there’s nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.
What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?
Suppose morality is a ‘mutual sympathy pact,’ and it seems neither weird nor arbitrary to decide how sympathetic to be to others by their ability to be sympathetic towards you. Suppose instead that morality is a ‘demonstration of compassion,’ and the reverse effect holds—sympathizing with the suffering of those unable to defend themselves (and thus unable to defend you) demonstrates more compassion than the previous approach which requires direct returns. (There are, of course, indirect returns to this approach.)
I’m confused as to what those considerations are supposed to demonstrate.
Basically, I don’t think much of your counterargument because it’s unimaginative. If you ask the question of what morality is good for, you find a significant number of plausible answers, and different moralities satisfy those values to different degrees. If you can’t identify what practical values are encouraged by holding a particular moral principle, what argument do you have for that moral principle besides that you currently hold it?
I don’t think moral principles are validated with reference to practical self-interested considerations.
What do you think moral principles are validated by?
Or, to ask a more general question, what they could possibly be validated by?
Broadly, I think moral principles exist as logical standards by wish actions can be measured. It’s a fact whether a particular action is endorsed by utilitarianism or deontology, etc. Therefore moral facts exist in the same realm as any other sort of fact.
More specifically, I think the actual set of moral principles someone lives by are a personal choice that is subject to a lot of factors. Some of it might be self-interest, but even if it is, it’s usually indirect, not overt.
OK. But standards are not facts. They are metrics in the same way that a unit of length, say, meter, is not a fact but a metric.
How do you validate the choice of meters (and not, say, yards) to measure?
The usual answer is “fitness for a purpose”, but how does this work for morality?
True. But whether something meets a standard is a fact. While a meter is a standard, it’s an objective fact that my height is approximately 1.85 meters.
~
Social consensus. Also, a meter is much easier to use than a yard.
~
Standards could be evaluated on further desiderata, like internal consistency and robustness in the face of thought experiments.
Social consensus and ease of use could also be factors.
I agree. You can state as a fact whether some action meets some standard of morality. That does nothing to validate a standard of morality, however.
Oh, boy. Social consensus, ease of use, really?
I’m not sure a standard of morality could ever be validated in the way you might like.
What do you think validates a standard of morality?
~
That’s not a very helpful retort.
Nothing, pretty much. I think standards of morality cannot be validated.
I don’t know if you think your position is defensible or it was just a throwaway line. It’s rather trivial to construct a bunch of moralities which will pass your validation criteria and look pretty awful at the same time.
It seems to me things like social consensus and ease of use are factors in determining whether a morality is popular, but I don’t see how they can validate moral values.
In a handful of discussions now, you’ve commented “X doesn’t do Y,” and then later followed up with “nothing can do Y,” which strikes me as logically rude compared to saying “X doesn’t do Y, which I see as a special case of nothing doing Y.” For example, in this comment, asking the question “what does it mean for a moral principle to be validated?” seems like the best way to clarify peter_hurford’s position.
I do think that standards of morality can be ‘validated,’ but what I mean by that is that standards of morality have practical effects if implemented, and one approach to metaethics is to choose a moral system by the desirability of its practical effects. I understood peter_hurford’s response here to be “I don’t think practical effects are the reason to follow any morality.”
This comment makes great sense inside of a morality, because moralities often operate by setting value systems. If one decides to adopt a value system which requires vegetarianism in order to signal that they are compassionate, that suggests their actual value system is the one which rewards signalling compassion. To use jargon, moralities want to be terminal goals, but in this metaethical system they are instrumental goals.
I don’t think this comment makes sense outside of a morality (i.e. I have a low opinion of the implied metaethics). If one is deciding whether to adopt morality A or morality B, knowing that A thinks B is immoral and B thinks A is immoral doesn’t help much (this is the content of the claim that a moral sphere restricted to humans is weird and arbitrary.) Knowing that morality A will lead to a certain kind of life and morality B will lead to a different kind of life seems more useful (although there’s still the question of how to choose between multiple kinds of lives!).
This leads to the position that even if you have the Absolutely Correct Morality handed to you by God, so long as that morality is furthered by more adherents it would be useful to think outside of that morality because standard persuasion advice is to emphasize the benefits the other party would receive from following your suggestion, rather than emphasizing the benefits you would receive if the other party follows your suggestions (“I get a referral bonus from the Almighty for every soul I save” is very different from “you’ll much prefer being in Heaven over being in Hell”). Instead of showing how your conclusion follows from your premises, it’s more effective to show how your conclusion is implied by their premises.
(I should point out that you can sort of see this happening by the use of “weird and arbitrary” as they don’t make sense as a logical claim but do make sense as a social claim. “All the cool kids are vegetarian these days” is an actual and strong reason to become vegetarian.)
Well, I didn’t mean to be rude but I’ll watch myself a bit more carefully for such tendencies. Talking to people over the ’net leads one to pick up some unfortunate habits :-)
That one actually was a bona fide question. I didn’t think morality could be validated, but on the other hand I didn’t spend too much time thinking about the issue. So—maybe I was missing something, and this was a question with the meaning of “well, how could one go about it?” Maybe there was a way which didn’t occur to me.
I am not a big fan of such an approach because I think that in this respect ethics is like philosophy—any attempts at meta very quickly become just another ethics or just another philosophy. And choosing on the basis of consequences is the same thing as expecting a system of ethics to be consistent (since you evaluate the desirability of consequences on the basis of some moral values). In other words I don’t think ethics can be usefully tiered—it’s a flat system.
Oh, and I think that moralities do not set value systems. Moralities are value systems. And they are terminal goals (or criteria, or metrics, or standards), they cannot be instrumental (again, because it’s a flat system).
I very strongly disagree with this. From the descriptive side individual morality of course is influenced by social pressure. From the normative side, however, I don’t believe it should be.
Agreed that a given metaethical approach will cash out as a particular ethics in a particular situation. The reason I think it’s useful to go to metaethics is because you can then see the linkage between the situation and the prescription, which is useful for both insight and correcting flaws in an ethical system. I also think that while infinite regress problems are theoretically possible, for most humans there is a meaningful cliff suggesting it’s not worth it to go from meta-meta-ethics to meta-meta-meta-ethics, because to me ethics looks like a set of behaviors and responses, metaethics looks like psychology and economics, and meta-meta-ethics looks like biology.
It seems to me that there are a lot of obvious ways for morality derived without any sort of social help to go wrong, but we may be operating under different conceptions of ‘pressure.’
Can you give me an example where metaethics is explicitly useful for that? I don’t see why in flat/collapsed ethics this should be a problem.
Ah. Interesting. To me ethics is practical application (that is, actions) of morality which is a system of values. Morality is normative. Psychology and economics for me are descriptive (with an important side-note that they describe not only what is, but also boundaries for what is possible/likely). Biology provides provides powerful external forces and boundaries which certainly shape and affect morality, but they are external—you have to accept them as a given.
Of course, but so what? I suspect this issue will turn on the attitude towards the primacy of social vs the primacy of individual.
Sure, but first I should try to be a little clearer: by ‘situation’ here I mean the incentives on the agent, not any particular dilemma. That is, I would cluster ethics as rules that eat scenarios and output actions, and meta-ethics as rules that eat agent-world pairs and output ethics. As a side note, I think requiring this sort of functional upgrade when you move up a meta level makes the transition much more meaningful and makes infinite regress much less likely to happen practically.
I should also comment that I’ve been using ethics and morality interchangeably in this series of comments, even though I think it’s useful for the terms to be different along the lines you describe (of differentiating between value systems and action systems), mostly because I want to describe the system of picking value systems as meta-ethics instead of meta-morality.
It also seems worthwhile to remember that for most people, stated justifications follow decisions rather than decisions following stated justifications. This matches up with making decisions in near mode and justifying those decisions in far mode, which in the language I’m using here would look like far mode as ethics and near mode as meta-ethics.
An example would be vegetarianism. Vegetarianism in modern urban America, with a well-developed understanding of nutrition, is about as healthy as also eating animal products (possibly healthier, possibly less healthy, probably dependent on individual biology). Vegetarianism in undeveloped or rural areas is generally associated with malnutrition (often at subclinical levels, but that still has an effect on health and longevity). A metaethical system which recommends vegetarianism in America where it’s cheap and recommends against it in undeveloped areas where it’s expensive seems easy to construct; an ethical system which measures the weal gained and the woe inflicted to animals by eating meat and gets the balancing parameter just right to make the same recommendations seems difficult to construct.
(The operative phrase of that last sentence being the ‘balancing parameter’- if the stated justifications are driving the decisions, they need to be doing so quantitatively, and the parameters need to be inputs to the model, not outputs. It’s easy to say “this is the rule I want, find a parameter to implement that rule,” but difficult to say “this is the right parameter to use, and that results in this rule.”)
Even if you give individuals primacy, specialization of labor is still a powerful force for efficiency. (For social influence to be a obvious net negative, I think you would need individual neurological diversity on a level far higher than we currently have, even though we do see some negative impacts at our current level of neurological diversity.)
Ah. That makes a lot of sense.
True, but the key word here is “stated”.
That doesn’t look right to me. For most people (those who justify post-factum) the majority of their ethics is submerged, below their consciousness level. That’s why “stated” is a very important qualifier. People necessarily make decisions based on their “real” ethics but bringing the real reasons to the surface might not be psychologically acceptable to them, so post-factum justifications come into play.
I don’t think people making decisions in near mode apply rules that “eat agent-world pairs and output ethics”. I think that for many people factors like “convenience”, “lookin’ good”, and “let’s discount far future to zero” play considerably larger role in their real ethics than they are willing to admit or even realize.
I don’t see how this is so. Your meta-ethical system will still need to get that balancing parameter “just right” unless you start with the end result being known. Just because you divide the path from moral axioms to actions into several stages you don’t get to avoid sections of that path, you still need to walk it all.
Oh, and I don’t believe modern America has a “well-developed understanding of nutrition”, though it’s a separate discussion altogether.
I don’t understand. What does specialization of labor has to do with morality?
And perhaps I should clarify my reaction. When I saw “All the cool kids are vegetarian these days” called “an actual and strong reason” to adopt this morality—well, my first thought was “All the cool kids root out hidden Jews / string up uppity Negroes / find and kill Tutsi / denounce the educated agents of imperialism / etc.” That must be an actual and strong reason to adopt this set of morals as well, right?
I don’t know how to figure out whether social influence is a net positive given that in practice social influence is always there and you can’t find a control group. My point is that accepting morality because many other people seem to follow it is a very dubious heuristic for me.
Agreed that it’s a stretch; “hidden ethics” and “stated ethics” is a much more natural divide for the two. I do think that “convenience” and “lookin’ good” depends on the agent-world pair, but I think the adaption is opaque and slow (i.e. learn it when you’re young over a long period) rather than explicit and fast.
I was unclear there as well; I’m assuming that the “right” result is the one that maximizes the health and social standing of the implementer. Targeting that directly is easy; targeting it indirectly by using animal welfare is hard.
I was unclear; I meant that vegetarianism is safer for individuals with a well-developed understanding, not that urban America as a whole has a well-developed understanding.
Many moral questions are hard to figure out, especially when they rely on second- or third-order effects. Think of the parable of the broken window, or of journalistic, clerical, or medical ethics, which promise non-intervention or secrecy. There is strong value in the communication of moral claims, which I'm not sure how to distinguish from social pressure (and I think social pressure may be a necessary part of communicating those claims).
It seems to me the issues of trust and credibility are dominant here. People get moral claims thrown at them constantly from different directions, many of them are incompatible or sometimes even direct opposites of each other. One needs some system of sorting them out, of evaluating them and deciding whether to accept them or not. Popularity is, of course, one such system but it has its problems, especially when moral claims come from those with power. There are obvious incentives in spreading moral memes advantageous to you.
I guess I see the social communication of moral claims to be strongly manipulated by those who stand to gain from it (which basically means those with power—political, commercial, religious, etc.) and so suspect.
I think we agree there, then.
I was thinking of a different kind of “validation”.
I try not to argue by definition, so it’s the latter: I have empirical concerns. See this post, point 4 (but also 3 and 5), for a near-perfect summary of my concerns.
That said, my overall objection to your view does not hinge on this point.
Well, firstly, I have to point out that I am not restricting my moral sphere to humans, per se. (Of known existing creatures, dolphins may qualify for membership; of imaginable creatures, aliens and AIs might.) In any case, the circle I draw seems quite non-arbitrary, even obvious, to me; but I suppose this only speaks to the non-universality of moral intuitions.
That would indeed seem weird and arbitrary. One objection I might raise to such a person is that it’s non-trivial, in many cases, to discern someone’s “whiteness”, not to mention one’s exact ancestry. “European” is not a sharp boundary where humans are concerned, and a great many factors confound such categorization. Most of my other objections would be aimed at drawing out the moral intuitions behind this person’s judgments about what sorts of beings are objects of morality (do they think “superficial” characteristics matter as much as functional ones? what is their response to various thought experiments such as brain transplant scenarios? etc.). It seems to me that there are both empirical facts and analytic arguments that would shift this person’s position closer to my own; a logically contradictory, empirically incoherent, or reflectively inconsistent moral position is generally bound to be less convincing.
(Of course, I might answer entirely differently. I might say: no, I would not be fine with that, because my own ancestry may or may not be classified as “European” or “white”, depending on who’s doing the classifying. So I would, quite naturally, argue against a moral circle drawn thus. Moral anti-realism notwithstanding, I might convince some people (and in fact that seems to be, in part, how the American civil rights movement, and similar social movements across the world, have succeeded: by means of people who were previously outside the moral circle arguing for their own inclusion). Cows, of course, cannot attempt to persuade us that we should include them in our moral considerations. I do not take this to be an irrelevant fact.)
I think that fights the hypothetical a bit much. Imagine something a bit sharper, like citizenship. Why not restrict our moral sphere to US citizens? Or take Derek Parfit’s within-a-mile altruism, where you only have concern for people within a mile of you. Weird, I agree. But irrational? Hard to demonstrate.
~
So do you think nonhuman animals may not suffer? I agree that’s a possibility, but it’s not likely. What do you think of the body of evidence provided in this post?
I don’t think there is a tidy resolution to this problem. We’ll have to take our best guess, and that involves thinking nonhuman animals suffer. We’d probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Ocham’s razor approach.
~
What would you suggest?
The moral sphere needn’t work like a threshold, where one should extend equal concern to everyone within the sphere and no concern at all to anyone outside it. My moral beliefs are not cosmopolitan—I think it is morally right to care more for my family than for absolute strangers. In fact, I think it is a huge failing of standard utilitarianism that it doesn’t deliver this verdict (without having to rely on post-hoc contortions about long-term utility benefits). I also think it is morally acceptable to care more for people cognitively similar to me than for people cognitively distant (people with radically different interests/beliefs/cultural backgrounds).
This doesn’t mean that I don’t have any moral concern at all for the cognitively distant. I still think they’re owed the usual suite of liberal rights, and that I have obligations of assistance to them, etc. It’s just that I would save the life of one of my friends over the lives of, say, three random Japanese people, and I consider this the right thing to do.
I follow a similar heuristic when I move across species. I think we owe the great apes more moral consideration than we owe, say, dolphins. I don’t eat any mammals but I eat chicken.
The idea of a completely cosmopolitan ethic just seems bizarre to me. I can see why one would be motivated to adopt it if the only alternative were caring about some subset of people/sentient beings and not caring at all about anyone else. Then there would be something arbitrary about where one draws the line. But this is not the most plausible alternative. One could have a sphere of moral concern that doesn’t just stop suddenly but instead attenuates with distance.
The morality you suggest is what Derek Parfit calls collectively self-defeating. This means that if everyone were to follow it perfectly, there could be empirical situations where your actual goals, namely the well-being of those closest to you, are achieved less well than they would be if everyone followed a different moral view. So there could be situations in which people have more influence on the well-being of strangers' families than on their own, and if they all favored their own relatives, everyone would end up worse off, despite everyone acting perfectly moral. Personally I want a world where everyone acts perfectly moral to be as close to Paradise as is empirically possible, but whether this is something you are concerned about is a different question (that depends on what question you're seeking to answer by coming up with a coherent moral view).
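A minimal numerical sketch of what "collectively self-defeating" means here, with payoffs I've invented purely for illustration:

```python
# Two agents, made-up payoffs: in this contrived situation each agent can do
# more for the *other* agent's family than for their own.

HELP_OWN = 1      # benefit my family gets if I spend my effort on them
HELP_OTHER = 3    # benefit a family gets if the other agent helps them

def my_familys_welfare(my_choice, other_choice):
    """Welfare of my family, given what each agent chooses to do."""
    welfare = 0
    if my_choice == "favor own family":
        welfare += HELP_OWN
    if other_choice == "help whoever benefits most":
        welfare += HELP_OTHER
    return welfare

# Everyone follows the family-first morality:
print(my_familys_welfare("favor own family", "favor own family"))          # 1
# Everyone follows the impartial rule instead:
print(my_familys_welfare("help whoever benefits most",
                         "help whoever benefits most"))                    # 3
```

Both families end up better off under the rule that doesn't specially target them, which is exactly the sense in which the family-first morality can defeat its own goal.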
By this reasoning everyone should give all their money and resources to charity (except to the extent that they need some of their resources to keep their job and make money).
That’s not much of a reductio ad absurdum. It would be much better if people did that, or at least moved a lot in that direction.
People are motivated to do things that make money because the money benefits themselves and their loved ones. Many such things are also beneficial to everyone, either directly (inventors, for instance, or people who manufacture useful goods), or indirectly (someone who is just willing to work hard because working hard benefits themselves, thus producing more and improving the economy). In a world where everyone gave their money to random strangers and kept everyone at an equal level of wealth, nobody would be able to make any money (since 1) any money they made would be offset by a reduction in the money other people gave them, and 2) they would feel (by hypothesis) obligated to give away the proceeds anyway). This would mean that money as a motivation would no longer exist, and we would lose everything that we gain when money is a motivation. That would be bad.
Even if you modified the rule to "I should give money to people so as to arrange an equal level of wealth, except where necessary to provide motivation", then in deciding exactly who gets your money you'd essentially have a planned economy done piecemeal by billions of individual decisions. Unlike a normal planned economy, it wouldn't be imposed from the top, but it would have the same problem as a normal planned economy in that there's really nobody competent to plan such a thing. The result would be disaster. So overall it would be a better world if people kept the money they made, even if someone else could use it more than they could.
Furthermore, the state where everyone acts this way is unstable. Even if your family would be better off if everyone acted that altruistically, your family would be worse off if half the world acted that way and you and they were part of that half.
Yes. At least as long as there are problems in the world. What’s wrong with that?
Everyone, including nonhumans, would have their interests/welfare-function fulfilled as well as possible. If I had to determine the utility function of moral agents before being placed into the world in any position at random, I would choose some form of utilitarianism from a selfish point of view, because it maximizes my expected well-being. If doing the "morally right" thing doesn't make the world a better place for the sentient beings in it, I don't see a reason to call it "right". Also note that this is not an all-or-nothing issue; it seems unfruitful to single out only those actions that produce the perfect outcome, or the perfect outcome in expectation. Every improvement in the right direction counts, because every improvement leads to someone being better off.
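To spell out the expected-value step (in my own notation, nothing official): if I am assigned uniformly at random to one of the $n$ positions in the world, with well-being levels $u_1, \dots, u_n$, then my expected well-being is

$$\mathbb{E}[u_{\text{me}}] = \frac{1}{n}\sum_{i=1}^{n} u_i,$$

which is maximized exactly when the total $\sum_i u_i$ is maximized, i.e. by the sum-maximizing form of utilitarianism.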
That’s a game theory/decision theory problem, not a problem with the utility function.
If all the agents in the situation acted according to utilitarianism, everyone would be better off. To the extent that everyone acting according to common sense morality predictably fails to bring about the best of all possible worlds in this situation, and to the extent that one cares about this fact, this constitutes an argument against common sense morality.
Of course, if decision theory or game theory could make those agents cooperate successfully (so they don’t do predictably worse than other moralities anymore) in all logically possible situations, then the objection disappears. I see no reason to assume this, though.
This seems nonsensical; a utility function does not prescribe actions. If I care about my family most, but acting in a certain way will cause them to be worse off, then I won’t act that way. In other words, if everyone acting perfectly moral causes everyone to end up worse off, then by definition, at least some people were not acting perfectly moral.
The problem is not with your actions, but with the actions of all the others (who are following the same general kind of utility function, but because your utility function is agent-relative, they use different variables, i.e. they care primarily about their own family and friends as opposed to yours). However, I was in fact wondering whether this problem disappears if we make the agents timeless (or whatever does the job), so that they would cooperate with each other to avoid the suboptimal outcome. This seems fair enough, since acting "perfectly moral" seems to imply the best decision theory.
Does this solve the problem? I think not; we could tweak the thought experiment further to account for it: we could imagine that due to empirical circumstances, such cooperation is prohibited. Let’s assume that the agents lack the knowledge that the other agents are timeless. Is this an unfair addendum to the scenario? I don’t see why, because given the empirical situation (which seems perfectly logically possible) the agents find themselves in, the moral algorithm they collectively follow may still lead to results that are suboptimal for everyone concerned.
You don’t follow a utility function. Utility functions don’t prescribe actions.
… are you suggesting that we solve prisoner’s dilemmas and similar problems by modifying our utility function?
OK, bad choice of words.
No, but you need some decision theory to go with your utility function, and I was considering the possibility that Parfit merely pointed out a flaw of CDT and not a flaw of common sense morality. However, given that we can still think of situations where common sense morality (no matter the decision theory) executed by everyone does predictably worse for everyone concerned than some other theory, Parfit’s objection still stands.
(Incidentally, I suspect that there could be situations where modifying your utility function is a way to solve a prisoner’s dilemma, but that wasn’t what I meant here.)
It seems implausible to me that there is any ethical decision procedure that human beings (rather than idealized perfectly informed and perfectly rational super-beings) could follow that wouldn’t be collectively self-defeating in this sense. Do you (or Parfit) have an example of one that isn’t?
Anyway, I don’t see this as a huge problem. First, I’m pretty sure I’m never going to live in a world (or even a close approximation to one) where everyone adheres to my moral beliefs perfectly. So I don’t see why the state of such a world should be relevant to my moral beliefs. Second, my moral beliefs are ultimately beliefs about which consequences—which states of the world—are best, not beliefs about which actions are best. If there was good evidence that acting in a certain manner (in the aggregate) wasn’t effective at producing morally better states of affairs, then I wouldn’t advocate acting in that manner.
But I am not convinced that following a cosmopolitan decision procedure (or advocating that others follow one) would empirically be an effective means to achieving my decidedly non-cosmopolitan moral ends. Perhaps if everyone in the world mimicked my moral behavior (or did what I told them) it would be, but alas, that is not the case.
Utilitarianism is not collectively self-defeating, but then there’d be no room for non-cosmopolitan moral ends.
This part shouldn’t make a difference. If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility. This is termed “indirectly individually self defeating”, if you have a theory that implies it would be best to follow some other theory. Parfit concludes, and I agree with him here, that this is not a reason to reject U. U doesn’t imply that one ought to actively implement utilitarianism, it only wants you to bring about the best consequences regardless of how this happens.
This is a pretty dubious move. Why think there will be easy-to-follow rules that will maximize aggregate utility? And even if such rules exist, how would we go about discovering them, given that the reason we need them in the first place is our inability to fully predict the consequences of our actions and their attached utilities?
Do you just mean that we should pick easy-to-follow rules that tend to produce more utility than other sets of easy-to-follow rules (as far as we can figure out), but not necessarily ones that maximize utility relative to all possible patterns of behavior? In that case, I don't see why your utilitarianism isn't collectively self-defeating according to the definition you gave. A world in which everyone acts according to such rules will not be a world that is as close to the utilitarian Paradise as empirically possible. After all, it seems entirely empirically possible for people to accurately recognize particular situations where actions contrary to the rules produce higher utility.
Also note that the view you outlined is often concerned with the question of helping others. When it comes to not harming others, many people would agree with the declaration of human rights that inflicting suffering is equally bad regardless of one’s geographical or emotional proximity to the victims. Personal vegetarianism is an instance of not harming.
I disagree with cosmopolitanism when it comes to “not harming” as well. I think needlessly inflicting suffering on human beings is always really bad, but it is worse if, say, you do it to your own children rather than to a random stranger’s.
I basically agree with pragmatist’s response, with the caveat only that I think many (most?) people’s moral spheres have too steep a gradient between “family, for whom I would happily murder any ten strangers” and “strangers, who can go take a flying leap for all I care”. My own gradient is not nearly that steep, but the idea of a gradient rather than a sharp border is sound. (Of course, since it’s still the case that I would kill N chickens to save my grandmother, where N can be any number, it seems that chickens fall nowhere at all on this gradient.)
Well, you can phrase this as “nonhuman animals don’t suffer”, or as “nonhuman animal suffering is morally uninteresting”, as you see fit; I’m not here to dispute definitions, I assure you. As for the evidence, to be honest, I don’t see that you’ve provided any. What specifically do you think offers up evidence against points 3 through 5 of RobbBB’s post?
I don’t think so; or at least this is not obviously the case.
Well, just the stuff about boundaries and hypotheticals and such that you referred to as “fighting the hypothetical”. Is there something specific you’re looking for, here?
The essay cited the Cambridge Declaration on Consciousness, as well as a couple of other pieces of evidence.
Here is another (more informal) piece that I find compelling.
That’s not evidence, that’s a declaration of opinion.
In particular, reading things like “Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots” (emphasis mine) makes me highly sceptical of that opinion.
It’s not scientific evidence, but it is rational evidence. In Bayesian terms, a consensus statement of experts in the field is probably much stronger evidence than, say, a single peer-reviewed study. Expert consensus statements are less likely to be wrong than almost any other form of evidence where I don’t have the necessary expertise to independently evaluate claims.
Not if I believe that this particular panel of experts is highly biased and is using this declaration instrumentally to further their undeclared goal.
That may or may not be true, but doesn't seem to be particularly relevant here. The question is what constitutes "near human-like levels of consciousness". If you point to an African grey as your example, I'll laugh and walk away. Maybe, if I were particularly polite, I'd ask in what sense you're using the word "near" here.
If I were in your place, I’d be skeptical of my own intuitions regarding the level of consciousness of African grey parrots. Reality sometimes is unintuitive, and I’d be more inclined to trust the expert consensus than my own intuition. Five hundred years ago, I probably would have laughed at someone who said we would travel to the moon one day.
I trust reality a great deal more than I trust the expert consensus. As has been pointed out, science advances one funeral at a time.
If you want to convince me, show me evidence from reality, not hearsay from a bunch of people I have no reason to trust.
This is evidence from reality. In reality, a bunch of neuroscientists organized by a highly respectable university all agree that many non-human animals are approximately as conscious as humans. This is very strong Bayesian evidence in favor of this proposition being true.
What form of evidence would you find more convincing than this?
No, I don’t think so.
That’s not a statement of fact. That’s just their preferred definition for the expression “approximately as conscious as humans”. I can define slugs to be “approximately as conscious as humans” and point out that compared with rocks, they are.
I have no way to respond to this.
That interpretation of the quoted expression strikes me as implausible, especially in the context of the other statements made in the declaration; for example: “Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought.” This indicates that humans’ and birds’ consciousnesses are more similar than most people intuitively believe.
Again, I ask: What form of evidence would you find more convincing than the Cambridge Declaration on Consciousness?
Evidence of what?
It seems that you want to ask a question “Are human and non-human minds similar?” That question is essentially about the meaning of the word “similar” in this context—a definition of “similar” would be the answer.
There are no facts involved, it’s all a question of terminology, of what “approximately as conscious as humans” means.
Sure, you can plausibly define some metric (or several of them) of similarity-to-human-mind and arrange various living creatures on that scale. But that scale is continuous, and unless you have a specific purpose in mind, thresholds are arbitrary. I don't know why defining only a few mammals and birds as having a mind similar-to-human is more valid than defining everything up to a slug as having a mind similar-to-human.
I originally posted the Cambridge Declaration on Consciousness because Peter asked you, "What do you think of the body of evidence provided in this post [that nonhuman animals suffer]?" You said he hadn't provided any, and I offered the Cambridge Declaration as evidence. The question is, in response to your original reply to Peter, what would you consider to be meaningful evidence that non-human animals suffer in a morally relevant way?
I freely admit that animals can and do feel pain. "Suffer" is a complicated word, and it's possible to debate whether it can properly be applied only to humans or more broadly. However, for simplicity's sake I'll stipulate that animals can suffer.
Now, a “morally relevant way” is a much more iffy proposition. It depends on your morality which is not a matter of facts or evidence. In some moral systems animal suffering would be “morally relevant”, in others it would not be. No evidence would be capable of changing that.
Generally untrue.
African grays are pretty smart. I’m not sure I’d go so far as to call them near-human, but from what I’ve read there’s a case for putting them on par with cetaceans or nonhuman primates.
The real trouble is that the research into this sort of thing is fiendishly subjective and surprisingly sparse. Even a detailed ordering of relative animal intelligence involves a lot of decisions about which researchers to trust, and comparison with humans is worse.