A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don’t put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.
In actuality, different groups of people implicitly have different Schelling points and then argue whose Schelling point is morally right. A standard Schelling point, say, 100 years ago, was all humans or some subset of humans. The situation has gotten more complicated recently, with some including only humans, humans and cute baby seals, humans and dolphins, humans and pets, or just pets without humans, etc.
So a consequentialist question would be something like
Where does it make sense to put a boundary between caring and not caring, under what circumstances and for how long?
Note this is no longer a Schelling point, since no implicit agreement of any kind is assumed. Instead, one tests possible choices against some terminal goals, leaving morality aside.
I feel like you’re saying this:
“There are a great many sentient organisms, so we should discriminate against some of them”
Is this what you’re saying?
EDIT: Sorry, I don’t mean that bacteria or viruses are sentient. Still, my original question stands.
All I am saying is that one has to make an arbitrary care/don’t care boundary somewhere, and “human/non-human” is a rather common and easily determined Schelling point in most cases. It fails in some, like the intelligent pig example from the OP, but then every boundary fails on some example.
Where does sentience fail as a boundary?
if sentience isn’t a boolean condition.
Why do you say that? Bacteria, viruses etc. seem to lack not just one, but all of the capacities A-H the OP mentioned.
A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don’t put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.
Indeed. I’ve alluded to this before as “how many chickens would I kill/torture to save my grandmother?” The answer, of course, is N, where N may be any number.
This means that, if we start with basic (total) utilitarianism, we have to throw out at least one of the following:
1. Additive aggregation of value.
2. Valuing my grandmother a finite amount (as opposed to an infinite amount).
3. Valuing a chicken a nonzero amount.
Throwing out #2 leads to incorrect results (it is not the case that I value my grandmother more than anything else — sorry, grandma). Throwing out #1 is possible, and I have serious skepticism about that one anyway… but it also leads to problems (don’t I think that killing or torturing two people is worse than killing or torturing one person? I sure do!).
Throwing out #3 seems unproblematic.
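To spell out why these three really are in tension, here is a minimal formalization, assuming simple additive aggregation over real-valued utilities (the symbols g, m, and n are illustrative, not from the original comment):

```latex
% Premises: (1) additive aggregation, (2) grandma's value g is finite,
% (3) a chicken's value m is strictly positive.
% By the Archimedean property of the reals, (1)-(3) force some finite number
% of chickens to outweigh grandma, contradicting the stated preference:
\[
  m > 0,\; g < \infty
  \quad\Longrightarrow\quad
  \exists\, n \in \mathbb{N},\ n > \frac{g}{m}:\qquad
  \underbrace{m + m + \cdots + m}_{n\ \text{times}} \;=\; n\,m \;>\; g .
\]
% So at least one of (1)-(3) has to go.
```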
The answer, of course, is N, where N may be any number. … Throwing out #3 seems unproblematic.
Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect. I don’t have a good sense of what a billion chickens is like, or what a billionth chance of dying looks like, and so I don’t expect my intuitions to give good answers in that region. If you ask the question as “how many chickens would I kill/torture to extend my grandmother’s life by one second?”, then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.
So it looks like an answer to the ‘save’ question that avoids the incorrect results is something like “I don’t know how many, but I’m pretty sure it’s more than a million.”
If you ask the question as “how many chickens would I kill/torture to extend my grandmother’s life by one second?”, then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.
The answer is, indeed, still the same N.
Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect.
I don’t find scope neglect to be a serious objection here. It’s certainly relevant in cases of inconsistencies, like the classic “how much would you pay to save a thousand / a million birds from oil slicks” scenario, but where is the inconsistency here? Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?
The “scope neglect” objection also misconstrues what I am saying. When I say “I would kill/torture N chickens to save my grandmother”, I am here telling you what I would, in fact, do. Offer me this choice right now, and I will make it. This is the input to the discussion. I have a preference for my grandmother’s life over any N chickens, and this is a preference that I support on consideration — it is reflectively consistent.
For “scope neglect” to be a meaningful objection, you have to show that there’s some contradiction, like if I would torture up to a million chickens to give my grandmother an extra day of life, but also up to a million to give her an extra year… or something to that effect. But there’s no contradiction, no inconsistency.
Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?
When I imagine sacrificing one chicken, it looks like a voodoo ritual or a few pounds of meat, worth maybe tens of dollars. When I imagine sacrificing a thousand chickens, it looks like feeding a person for several years, and maybe tens of thousands of dollars. When I imagine sacrificing a million chickens, it looks like feeding a thousand people for several years, and maybe tens of millions of dollars. When I imagine sacrificing a billion chickens, it looks like feeding millions of people for several years, and a sizeable chunk of the US poultry industry. When I imagine sacrificing a trillion chickens, it looks like feeding the population of the US for a decade, and several times the global poultry industry. (I know this is in terms of their prey value, but since I view chickens as prey that’s how I imagine them, not in terms of individual subjective experience.)
And that’s only 1e12! There are lots of bigger numbers. What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you’re indifferent between them. When I imagine weighing one person against the global poultry industry, it’s not obvious to me that one person is the right choice, and it feels to me that if it’s not obvious, you can just increase the number of chickens.
One counterargument to this is “but chickens and humans are on different levels of moral value, and it’s wrong to trade off a higher level for a lower level.” I don’t think that’s a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).
I… don’t see how your examples/imagery answer my question.
When I imagine weighing one person against the global poultry industry, it’s not obvious to me that one person is the right choice, and it feels to me that if it’s not obvious, you can just increase the number of chickens.
It is completely obvious to me. (I assume by “global poultry industry” you mean “that number of chickens”, since if we literally eradicated global chicken production, lots of bad effects (on humans) would result.)
One counterargument to this is “but chickens and humans are on different levels of moral value, and it’s wrong to trade off a higher level for a lower level.” I don’t think that’s a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).
Don’t be so sure! Multi-level morality, by the way, does not necessarily mean that my grandmother occupies the top level all by herself. However, that’s a separate discussion; I started this subthread from an assumption of basic utilitarianism.
Anyway, I think — with apologies — that you are still misunderstanding me. Take this:
What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you’re indifferent between them.
There is no level where I’d be indifferent between them. That’s my point. Why would I try to find such a level? What moral intuition do you think I might have that would motivate me to try this?
Anyway, I think — with apologies — that you are still misunderstanding me.
Yes and no. I wasn’t aware that you were using a multi-level morality, but agree with you that it doesn’t obviously break and doesn’t require infinite utilities in any particular level.
That said, my experience has been that every multi-level morality I’ve looked at hard enough has turned out to map to the real line, but because of measurement difficulties it looked like there were clusters of incomparable utilities. It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it’s 0 I don’t take their confidence as informative. If they’re an expert in decision science and eliciting this sort of information, then I do take it seriously, but I’m still suspicious that This Time It’s Different.
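To make the measurement-difficulty point concrete, a rough sketch; the world chicken population figure and the candidate valuations below are assumptions for illustration, not numbers from the thread:

```python
# How many chickens would have to be at stake before a tiny chicken-value
# actually changes a decision about one person? (Illustrative numbers only.)
WORLD_CHICKEN_POPULATION = 2.5e10  # assumed rough order of magnitude

for chicken_value_in_people in (0.0, 1e-12, 1e-24):
    if chicken_value_in_people == 0.0:
        print("value 0: no number of chickens ever outweighs one person")
    else:
        needed = 1.0 / chicken_value_in_people
        multiple_of_world = needed / WORLD_CHICKEN_POPULATION
        print(f"value {chicken_value_in_people:g}: need {needed:.0e} chickens, "
              f"about {multiple_of_world:.0e} times all chickens alive")
# At 1e-12 the decision only flips with roughly 40x every chicken on Earth at stake;
# at 1e-24 it never comes close. Ordinary choices cannot distinguish these from zero.
```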
Another big concern here is revealed preferences vs. stated preferences. Many people, when you ask them about it, will claim that they would not accept money in exchange for a risk to their life, but then in practice do so continually, at the level where they accept $10 in exchange for a one-in-a-million chance of dying, for example. One interpretation is that they’re behaving irrationally, but I think the more plausible interpretation is that they’re acting rationally but talking irrationally. (Talking irrationally can be a rational act, like I talk about here.)
Well, as far as revealed vs. stated preferences go, I don’t think we have any way of subjecting my chicken vs. grandmother preference to a real-world test, so I suppose You’ll Just Have To Take My Word For It. As for the rest...
It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it’s 0 I don’t take their confidence as informative.
What would it mean for me to be mistaken about this? Are you suggesting that, despite my belief that I’d trade any number of chickens to save my grandmother, there’s some situation we might encounter, some really large number of chickens, faced with which I would say: “Well, shit. I guess I’ll take the chickens after all. Sorry, grandma”?
I find it very strange that you are taking my comments to be statements about which particular real number value I would assign to a single chicken. I certainly do not intend them that way. I intend them to be statements about what I would do in various situations; which choice, out of various sets of options, I would make.
Whether or not we can then transform those preferences into real-number valuations of single chickens, or sets of many chickens, is a question we certainly could ask, but the answer to that question is a conclusion that we would be drawing from the givens. That conclusion might be something like “my preferences do not coherently translate into assigning a real-number value to a chicken”! But even more importantly, we do not have to draw any conclusion, assign any values to anything, and it would still, nonetheless, be a fact about my preferences that I would trade any number of chickens for my grandmother. So it does not make any sense whatsoever to declare that I am mistaken about my valuation of a chicken, when I am not insisting on any such valuation to begin with.
What would it mean for me to be mistaken about this?
Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.
I should also make clear that I’m not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility. This is mostly useful when thinking about death / lifespan extension and other sacred values, where refusing to explicitly calculate means that you’re not certain the marginal value of additional expenditure will be equal across all possible means for expenditure. For this particular case, it’s unlikely that you will ever come across a situation where the value system “grandma first, then chickens” will disagree with “grandma is worth a really big number of chickens,” and separating the two will be unlikely to have any direct meaningful impact.
But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere. I also think it’s important to cultivate a mentality where a 1e-12 chance of saving grandma feels different from a 1e-6 chance of saving grandma, rather than your mind just interpreting them both as “a chance of saving grandma.”
Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.
Any chance of saving my grandmother is worth any number of chickens.
I should also make clear that I’m not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility.
Well, ok. I am not committed to a multi-level system; I was only formulating a bit of skepticism. That being said, if we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation. (Or perhaps even just giving up on utilitarianism as The One Unified Moral System. There are reasons to suspect we might have to do this anyway.)
For this particular case, it’s unlikely that you will ever come across a situation where the value system “grandma first, then chickens” will disagree with “grandma is worth a really big number of chickens,” and separating the two will be unlikely to have any direct meaningful impact.
Perhaps. But you yourself say:
But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn’t by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.
If we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom. It’s just that real values happen to also be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.
As this comment points out, the additivity of the value of two events which have dependencies has no claim on their additivity when completely independent. Having two pillows isn’t having one pillow twice.
Any chance of saving my grandmother is worth any number of chickens.
So I actually don’t think you have to give this up to remain rational. Rationality is creating heuristics for the ideal version of yourself, a self which, of course, isn’t ideal in any fundamental sense but is rather however you choose to define it. Let’s call this your preferred self. You should create heuristics that cause you to emulate your preferred self such that your preferred self would choose you out of any of your available options for doing metaethics, when applying you to the actual moral situations you’ll have in your lifetime (or a weighted-by-probability integral over expected moral situations).
What I’m saying is that I wouldn’t be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn’t check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe states involves real-number assignment.
This is all to say, it’s not often we need to weigh the moral value of a googleplex of chickens over grandma, but if it ever came to that we should prefer to do it right.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn’t by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.
Because, as you say:
This is all to say, it’s not often we need to weigh the moral value of a googleplex of chickens over grandma, but if it ever came to that we should prefer to do it right.
Indeed, and the right answer here is choosing my grandmother. (btw, it’s “googolplex”, not “googleplex”)
If we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom.
Indeed; but...
It’s just that real values happen to also be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.
They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
Something has to change. Setting M = 0 is easiest and most consistent with my moral intuitions, and leads to correct results in all choices involving humans. (Of course we might have other motivations for choosing a different path, such as abandoning real-valued utilities or abandoning additive aggregation.)
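For what it’s worth, the two decision rules being contrasted here are easy to state side by side; a minimal sketch, assuming simple additive aggregation (the function and variable names are mine):

```python
def prefers_grandma(grandma_value: float, chicken_value: float, n_chickens: float) -> bool:
    """Under additive aggregation, choose grandma iff her value exceeds the chickens' total."""
    return grandma_value > chicken_value * n_chickens

# Any strictly positive chicken value M eventually loses to enough chickens:
print(prefers_grandma(1.0, 1e-12, 1e15))   # False: 10^15 chickens outweigh grandma
# Setting M = 0 gives the lexical behavior described above: grandma wins for any n:
print(prefers_grandma(1.0, 0.0, 1e100))    # True
```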
What I’m saying is that I wouldn’t be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn’t check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe states involves real-number assignment.
Now here, I am not actually sure what you’re saying. Could you clarify? What theory?
They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no amount of chickens equal to the worth of your grandmother makes you believe you need to give up one of 3 plausible-seeming axioms, and you’re not willing to think there isn’t a consistent reconciliation.
My point about your preferred ethical self is that for him to be a formal agent that you wish to emulate, he is required to have a consistent reconciliation. The suggestion is that most people who claim M = 0, insofar as it relates to N, create inconsistencies elsewhere when trying to relate it to O, P, and Q. Inconsistencies which they as flawed agents are permitted to have, but which ideal agents aren’t. The theory I refer to is the one that takes M = 0.
These are the inconsistencies that the multi-level morality people are trying to reconcile when they still wish to claim that they prefer a dying worm to a dying chicken. Suffice it to say that I don’t think an ideal rational agent can reconcile them, but my other point was that our actual selves aren’t required to (though we should acknowledge this).
I see. I confess that I don’t find your “preferred ethical self” concept to be very compelling (and am highly skeptical about your claim that this is “what rationality is”), but I’m willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread.
You shouldn’t take me to have any kind of “theory that takes M = 0”; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else.
My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues.
Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we’re done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don’t know.
(For what it’s worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)
I suspect that those would be longer than should be posted deep in a tangential comment thread.
Yeah, probably. To be honest I’m still rather new to the rodeo here, so I’m not amazing at formalizing and communicating intuitions, which might just be boilerplate for “you shouldn’t listen to me” :)
I’m sure it’s been hammered to death elsewhere, but my best prediction for what side I would fall on if I had all the arguments laid out would be the hard-line CS theoretical approach, as I often do. It’s probably not obvious why there would be problems with every proposed difficulty for additive aggregation. I would probably annoyingly often fall back on the claim that any particular case doesn’t satisfy the criteria but that additive value still holds.
I don’t think it’d be a lengthy list of criteria though. All you need is causal independence: the kind of independence that makes counterfactual (or probabilistic) worlds independent enough to be separable. You disvalue a situation where grandma dies with certainty the same as a situation where all of your 4 grandmas (they got all real busy after the legalization of gay marriage in their country) are each subjected to a 25% likelihood of death. You do this because you value the possible worlds in proportion to their likelihood, and you sum the values. My intuition is that refusing to also sum the values in analogous non-probabilistic circumstances would cause inconsistencies down the line, but I’m not sure.
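A minimal sketch of that probabilistic-worlds point, with placeholder utilities (the numbers are mine; the four-grandmas-at-25% setup is from the comment):

```python
# Expected (additive) disutility: weight each possible world by its likelihood and sum.
LOSS_PER_GRANDMA_DEATH = 1.0  # placeholder utility scale

# One grandma dying with certainty:
certain_case = 1.0 * LOSS_PER_GRANDMA_DEATH

# Four grandmas, each facing an independent 25% chance of death:
risky_case = sum(0.25 * LOSS_PER_GRANDMA_DEATH for _ in range(4))

print(certain_case, risky_case)  # 1.0 1.0 -- the same expected loss under additivity
```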
Suppose you’re walking down the street when you see a chicken trapped under a large rock. You can save it or not. If you save it, it costs you nothing except for your time. Would you save it?
Maybe.
Realistically, it would depend on my mood, and any number of other factors.
Why?
If you would save the chicken, then you think its life is worth 10 seconds of your life, which means you value its life as about 1⁄200,000,000th of your life as a lower bound.
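A quick check of that arithmetic, assuming a remaining lifespan on the order of 70 years (the comment does not say which lifespan figure it used):

```python
# Ten seconds as a fraction of a ~70-year life.
seconds_in_life = 70 * 365.25 * 24 * 60 * 60   # roughly 2.2e9 seconds
fraction = 10 / seconds_in_life
print(fraction)        # ~4.5e-09
print(1 / fraction)    # ~2.2e8, i.e. roughly 1 in 220,000,000
```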
In your view, how much do I think the chicken’s life is worth if I would either save it or not save it, depending on factors I can’t reliably predict or control? If I would save it one day, but not save it the next? If I would save a chicken now, and eat a chicken later?
I don’t take such tendencies to be “revealed preferences” in any strong sense if they are not stable under reflective equilibrium. And I don’t have any belief that I should save the chicken.
Edit: Removed some stuff about tendencies, because it was actually tangential to the point.
It is problematic once you start fine-graining, exactly like in the dust specks/torture debate, where killing a chicken ~ dust speck and killing your grandma ~ torture. There is almost certainly an unbroken chain of comparables between the two extremes.
For what it’s worth, I also choose specks in specks/torture, and find the “chain of comparables” argument unconvincing. (I’d be happy to discuss this, but this is probably not the thread for it.)
That, however, is not all that relevant in practice: the human/nonhuman divide is wide (unless we decide to start uplifting nonhuman species, which I don’t think we should); the smartest nonhuman animals (probably dolphins) might qualify for moral consideration, but we don’t factory-farm dolphins (and I don’t think we should), and chickens and cows certainly don’t qualify; the question of which humans do or don’t qualify is tricky, but that’s why I think we shouldn’t actually kill/torture them with total impunity (cf. bright lines, Schelling fences, etc.).
In short, we do not actually have to do any fine-graining. In the case where we are deciding whether to torture, kill, and eat chickens — that is, the actual, real-world case — my reasoning does not encounter any problems.
Do you assign any negative weight to a suffering chicken? For example, is it OK to simply rip off a leg from a live one and make a dinner, while the injured bird is writhing on the ground slowly bleeding to death?
Sure. However, you raise what is in principle a very solid objection, and so I would like to address it.
Let’s say that I would, all else being equal, prefer that a dog not be tortured. Perhaps I am even willing to take certain actions to prevent a dog from being tortured. Perhaps I also think that two dogs being tortured is worse than one dog being tortured, etc.
However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.
What are we to make of this?
In that case, some component of our utilitarianism might have to be re-examined. Perhaps dogs have a nonzero value, and a lot of dogs have more value than only a few dogs, but no quantity of dogs adds up to one grandmother; but on the other hand, some things are worth more than one grandmother (two grandmothers? all of humanity?).
Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.
(Of course, it’s possible to suppose that we could, if we chose, construct various hypotheticals (perhaps involving some complex series of bets) which would tease out some inconsistency in that set of valuations. That may be the case here, but nothing obvious jumps out at me.)
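One standard way to make “real numbers do not behave this way” precise is a lexicographic ordering over (human value, dog value) pairs; the tuple encoding below is my illustration, not something proposed in the thread:

```python
# Encode an outcome's value as a pair: (human-level value, dog-level value).
# Python compares tuples lexicographically: any difference in the first slot
# dominates any difference in the second, which is exactly the stated preference.

grandma_spared = (1, 0)             # grandma not tortured, no dogs spared
million_dogs   = (0, 1_000_000)     # a million dogs spared, grandma tortured

print(grandma_spared > million_dogs)   # True: grandma beats any number of dogs
print((0, 2) > (0, 1))                 # True: more dogs spared still beats fewer
# Under additive aggregation, no single real-valued utility can reproduce both
# facts at once while giving each dog a strictly positive value.
```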
However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.
This sounds a bit like the dustspeck vs. torture argument, where some claim that no number of dustspecks could ever outweigh torture. I think that there we have to deal with scope insensitivity.
On utilitarian aggregation, I recommend section V of the following paper. It shows why the alternatives are absurd. http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf
By the way, the dogs vs. grandma case differs in an important way from specks vs. torture:
The specks are happening to humans.
It is not actually inconsistent to choose TORTURE in specks/torture while choosing GRANDMA in dogs/grandma. All you have to do is value humans (and humans’ utility) while not valuing dogs (or placing dogs on a “lower moral tier” than your grandmother/humans in general).
In other words, “do many specks add up to torture” and “do many dogs add up to grandma” are not the same question.
That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don’t. People often come up with lexical constructs when they feel uncomfortable with the anticipation of having to change their behaviour. As a consequentialist, I figured out that I care a bit about dog welfare, and being aware of my scope insensitivity, I can see why some people dislike biting the bullet which results from simple additive reasoning. An option would be, though, to say that one’s brain (and therefore one’s moral framework) is only capable of a certain amount of caring for dogs and that this variable is independent of the number of dogs. For me that wouldn’t work out, though, for I care about the content of sentient experience in an additive way.
But for the sake of the argument: if a hyperintelligent alien came to the Earth (e.g. an AI), how would you propose the alien should figure out which mechanisms in the universe should be of moral concern? What would you think of the agent’s morality if it discounted your welfare lexically?
That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don’t.
You can’t tell me, first, that above all I must conform to a particular ritual of cognition, and then that, if I conform to that ritual, I must change my morality to avoid being Dutch-booked. Toss out the losing ritual; don’t change the definition of winning.
What you are doing here is insisting that I conform to your ritual of cognition (i.e. total utilitarianism with real-number valuation and additive aggregation). I see no reason to accede to such a demand.
The following are facts about what I do and don’t care about:
1) All else being equal, I prefer that a dog not be tortured. 2) All else being equal, I prefer that my grandmother not be tortured. 3) I prefer any number of dogs being tortured to my grandmother being tortured. 4 through ∞) Some other stuff about my preferences, skipped for brevity.
#s 2 and 3 are very strong preferences. #1 is less so.
Now I want to find a moral calculus that captures those facts. You, on the other hand, are telling me that, first, I must accept your moral calculus, and then, that if I do so, I must toss out one of the aforementioned preferences.
I decline to do either of those things. (As Eliezer says in the above link: The utility function is not up for grabs.)
But for the sake of the argument: if a hyperintelligent alien came to the Earth (e.g. an AI), how would you propose the alien should figure out which mechanisms in the universe should be of moral concern?
I don’t know. This is the kind of thing that demonstrates why we need FAI theory and CEV.
What would you think of the agent’s morality if it discounted your welfare lexically?
I would think that its morality is different from mine. Also, I would be sad, because presumably such a morality on the AI’s part would result in bad things for me. Your point?
Ok, let’s do some basic friendly AI theory: Would a friendly AI discount the welfare of “weaker” beings such as you and me (compared to this hyper-agent) lexically? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, for our decisions can result in bad things for them correspondingly.
My bad about the ritual. Thanks. Out of interest about your preferences: Imagine the grandmother and the dog next to each other. A perfect scientist starts to exchange pairs of atoms (let’s assume here that both individuals contain the same number of atoms) so that the grandmother more and more transforms into the dog (of course there will be several weird intermediary stages). Because the scientist knows his experiment very well, none of the objects will die; in the end it’ll look like the two objects changed their places.
At which point does the grandmother stop counting lexically more than the dog?
Sometimes continuity arguments can be defeated by saying: “No, I don’t draw an arbitrary line; I adjust gradually, so that in the beginning I care a lot about the grandmother and in the end just very little about the remaining dog.” But I think that this argument doesn’t work here, for we are dealing with a lexical prioritization. How would you act in such a scenario?
A perfect scientist starts to exchange pairs of atoms (let’s assume here that both individuals contain the same number of atoms) so that the grandmother more and more transforms into the dog (of course there will be several weird intermediary stages).
Identity isn’t in specific atoms. The effect of swapping a carbon atom in the grandma with a carbon atom in the dog is none at all.
Jiro’s response shows one good reason why I don’t find that thought experiment very interesting. Another obvious reason is its extreme implausibility and, I strongly suspect, actual incoherence (given what we know about physics and biology). I think I can safely say “I have no idea what I would prefer”, much like Eliezer finds no reason to answer how he would explain his arm being turned into a blue tentacle, and not have that be counted against me.
On to FAI theory:
Would a friendly AI discount the welfare of “weaker” beings such as you and me (compared to this hyper-agent) lexically? Could that possibly be an FAI?
By definition, it would not, because if it did, then it would be an Unfriendly AI.
If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, for our decisions can result in bad things for them correspondingly.
How do you get from facts about the behavior of an FAI to claims about how we should act? I spy one of those pesky “is-ought” transitions that bedeviled Hume!
Corollary: why should we care that our behavior results in bad things for animals? Isn’t that the question in the first place, and doesn’t your statement beg said question?
As I’ve said elsewhere in this thread, I also choose SPECKS in specks/torture. As for the paper, I will read it when I have time, and try to get back to you with my thoughts.
Edit: And see this thread for a discussion of whether scope neglect applies to my views.
I examined this one, too, but the continuity axiom intuitively makes sense for comparables, except possibly in cases of extreme risk aversion. I am leaning more toward abandoning the transitivity chain when the options are too far apart. Something like A > B having some uncertainty increasing with the chain length between A and B or with some other quantifiable value.
Hours you spend helping dogs are hours you could have spent helping humans, e.g. having more money is associated with longer life.
This point is of course true, hence my “all else being equal” clause. I do not actually spend any time helping dogs, for pretty much exactly the reasons you list: there are matters of human benefit to attend to, and dogs are strictly less important.
Your last paragraph is mostly moot, since the behavior you allude to is not at all my actual behavior, but I would like to hear a bit more about the behavior model you refer to. (A link would suffice.)
I’m not entirely sure what the relevance of the speed limit example is.
The problem with throwing out #3 is you also have to throw out:
(4) How we value a being’s moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)
Which is a rather nice proposition.
Edit: As Said points out, this should be:
(4) How we value a being’s pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)
You don’t, actually. For example, the following is a function:
Let a be a variable representing the abilities of a being. Let E(a) be the ethical value of a being with abilities a. The domain is the nonnegative reals, the range is the reals. Let H be some level of abilities that we have chosen to identify as “human-level abilities”. We define E(a) thus:
E(a) = 0 for a < H.
E(a) = f(a) for a ≥ H, where f(x) is some other function of our choice.
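A direct transcription of that definition into code, just to underline that it is an ordinary piecewise function; H and f are placeholders here exactly as in the comment:

```python
from typing import Callable

def make_E(H: float, f: Callable[[float], float]) -> Callable[[float], float]:
    """Build E(a): ethical value is 0 below the threshold H, and f(a) at or above it."""
    def E(a: float) -> float:
        if a < 0:
            raise ValueError("abilities a must be a nonnegative real")
        return 0.0 if a < H else f(a)
    return E

# Example with arbitrary placeholder choices: H = 1.0 and f(a) = a.
E = make_E(H=1.0, f=lambda a: a)
print(E(0.3), E(2.5))   # 0.0 2.5
```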
Uh, would you mind editing your statement back, or adding a note about what it said before? Otherwise I am left looking like a crazy person, spouting non sequiturs ;) Edit: Thanks!
Anyway, your updated statement is no longer vulnerable to my objection, but neither is it particularly “nice” anymore (that is, I don’t endorse it, and I don’t think most people here who take the “speciesist” position do either).
(By the way, letting H be “maleness” doesn’t make a whole lot of sense. It would be very awkward, to say the least, to represent “maleness” as some nonnegative real number; and it assumes that the abilities captured by a are somehow parallel to the gender spectrum; and it would make it so that we value male chickens but not human women; and calling “maleness” a “level of abilities” is pretty weird.)
But why don’t you think it’s “nice” to require abilities to be relevant? If you feel pain more strongly than others do, then I care more about when you’re in pain than when others are in pain.
… provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post).
[1] Well, at first glance. Actually, I’m not so sure; I don’t seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that’s what matters.
Well, if you follow that post far enough you’ll see that the author thinks animals feel something that’s morally equivalent to pain, s/he just doesn’t like calling it “pain”.
But assuming you genuinely don’t think animals feel something morally equivalent to pain, why? That post gives some high level ideas, but doesn’t list any supporting evidence.
But assuming you genuinely don’t think animals feel something morally equivalent to pain, why?
I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took.
I didn’t say anything about animals not feeling pain (what does “morally equivalent to pain” mean?). I said I don’t care about animal pain.
… the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we’re talking past each other.
I apologize for the confusion. Let me attempt to summarize your position:
1. It is possible for subjectively bad things to happen to animals
2. Despite this fact, it is not possible for objectively bad things to happen to animals
Is that correct? If so, could you explain what “subjective” and “objective” mean here—usually, “objective” just means something like “the sum of subjective”, in which case #2 trivially follows from #1, which was the source of my confusion.
A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don’t put a Schelling fence somewhere, you end up with giving more moral weight to a large enough amount of cockroaches, bacteria or viruses than to that of humans.
In actuality, different groups of people implicitly have different Schelling points and then argue whose Schelling point is morally right. A standard Schelling point, say, 100 years ago, was all humans or some subset of humans. The situation has gotten more complicated recently, with some including only humans, humans and cute baby seals, humans and dolphins, humans and pets, or just pets without humans, etc.
So a consequentialist question would be something like
Note this is no longer a Schelling point, since no implicit agreement of any kind is assumed. Instead, one tests possible choices against some terminal goals, leaving morality aside.
I feel like you’re saying this:
“There are a great many sentient organisms, so we should discriminate against some of them”
Is this what you’re saying?
EDIT: Sorry, I don’t mean that bacteria or viruses are sentient. Still, my original question stands.
All I am saying is that one has to make an arbitrary care/don’t care boundary somewhere. and “human/non-human” is a rather common and easily determined Schelling point in most cases. It fails in some, like the intelligent pig example from the OP, but then every boundary fails on some example.
Where does sentience fail as a boundary?
if sentience isn’t a boolean condition.
Why do you say that? Bacteria, viruses etc. seem to lack not just one, but all of the capacities A-H the OP mentioned.
Indeed. I’ve alluded to this before as “how many chickens would I kill/torture to save my grandmother?” The answer, of course, is N, where N may be any number.
This means that, if we start with basic (total) utilitarianism, we have to throw out at least one of the following:
Additive aggregation of value.
Valuing my grandmother a finite amount (as opposed to an infinite amount).
Valuing a chicken a nonzero amount.
Throwing out #2 leads to incorrect results (it is not the case that I value my grandmother more than anything else — sorry, grandma). Throwing out #1 is possible, and I have serious skepticism about that one anyway… but it also leads to problems (don’t I think that killing or torturing two people is worse than killing or torturing one person? I sure do!).
Throwing out #3 seems unproblematic.
Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect. I don’t have a good sense of what a billion chickens is like, or what a billionth chance of dying looks like, and so I don’t expect my intuitions to give good answers in that region. If you ask the question as “how many chickens would I kill/torture to extend my grandmother’s life by one second?”, then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.
So it looks like an answer to the ‘save’ question that avoids the incorrect results is something like “I don’t know how many, but I’m pretty sure it’s more than a million.”
The answer is, indeed, still the same N.
I don’t find scope neglect to be a serious objection here. It’s certainly relevant in cases of inconsistencies, like the classic “how much would you pay to save a thousand / a million birds from oil slicks” scenario, but where is the inconsistency here? Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?
The “scope neglect” objection also misconstrues what I am saying. When I say “I would kill/torture N chickens to save my grandmother”, I am here telling you what I would, in fact, do. Offer me this choice right now, and I will make it. This is the input to the discussion. I have a preference for my grandmother’s life over any N chickens, and this is a preference that I support on consideration — it is reflectively consistent.
For “scope neglect” to be a meaningful objection, you have to show that there’s some contradiction, like if I would torture up to a million chickens to give my grandmother an extra day of life, but also up to a million to give her an extra year… or something to that effect. But there’s no contradiction, no inconsistency.
When I imagine sacrificing one chicken, it looks like a voodoo ritual or a few pounds of meat, worth maybe tens of dollars. When I imagine sacrificing a thousand chickens, it looks like feeding a person for several years, and maybe tens of thousand dollars. When I imagine sacrificing a million chickens, it looks like feeding a thousand people for several years, and maybe tens of millions of dollars. When I imagine sacrificing a billion chickens, it looks like feeding millions of people for several years, and a sizeable chunk of the US poultry industry. When I imagine sacrificing a trillion chickens, it looks like feeding the population of the US for a decade, and several times the global poultry industry. (I know this is in terms of their prey value, but since I view chickens as prey that’s how I imagine them, not in terms of individual subjective experience.)
And that’s only 1e9! There are lots of bigger numbers. What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you’re indifferent between them. When I imagine weighing one person against the global poultry industry, it’s not obvious to me that one person is the right choice, and it feels to me that if it’s not obvious, you can just increase the number of chickens.
One counterargument to this is “but chickens and humans are on different levels of moral value, and it’s wrong to trade off a higher level for a lower level.” I don’t think that’s a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).
I… don’t see how your examples/imagery answer my question.
It is completely obvious to me. (I assume by “global poultry industry” you mean “that number of chickens”, since if we literally eradicated global chicken production, lots of bad effects (on humans) would result.)
Don’t be so sure! Multi-level morality, by the way, does not necessarily mean that my grandmother occupies the top level all by herself. However, that’s a separate discussion; I started this subthread from an assumption of basic utilitarianism.
Anyway, I think — with apologies — that you are still misunderstanding me. Take this:
There is no level where I’d be indifferent between them. That’s my point. Why would I try to find such a level? What moral intuition do you think I might have that would motivate me to try this?
Yes and no. I wasn’t aware that you were using a multi-level morality, but agree with you that it doesn’t obviously break and doesn’t require infinite utilities in any particular level.
That said, my experience has been that every multi-level morality I’ve looked at hard enough has turned out to map to the real line, but because of measurement difficulties it looked like there were clusters of incomparable utilities. It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it’s 0 I don’t take their confidence as informative. If they’re an expert in decision science and eliciting this sort of information, then I do take it seriously, but I’m still suspicious that This Time It’s Different.
Another big concern here is revealed preferences vs. stated preferences. Many people, when you ask them about it, will claim that they would not accept money in exchange for a risk to their life, but then in practice do that continually- but on the level where they accept $10 in exchange for a millionth chance of dying, for example. One interpretation is that they’re behaving irrationally, but I think the more plausible interpretation is that they’re acting rationally but talking irrationally. (Talking irrationally can be a rational act, like I talk about here.)
Well, as far as revealed vs. stated preferences go, I don’t think we have any way of subjecting my chicken vs. grandmother preference to a real-world test, so I suppose You’ll Just Have To Take My Word For It. As for the rest...
What would it mean for me to be mistaken about this? Are you suggesting that, despite my belief that I’d trade any number of chickens to save my grandmother, there’s some situation we might encounter, some really large number of chickens, faced with which I would say: “Well, shit. I guess I’ll take the chickens after all. Sorry, grandma”?
I find it very strange that you are taking my comments to be statements about which particular real number value I would assign to a single chicken. I certainly do not intend them that way. I intend them to be statements about what I would do in various situations; which choice, out of various sets of options, I would make.
Whether or not we can then transform those preferences into real-number valuations of single chickens, or sets of many chickens, is a question we certainly could ask, but the answer to that question is a conclusion that we would be drawing from the givens. That conclusion might be something like “my preferences do not coherently translate into assigning a real-number value to a chicken”! But even more importantly, we do not have to draw any conclusion, assign any values to anything, and it would still, nonetheless, be a fact about my preferences that I would trade any number of chickens for my grandmother. So it does not make any sense whatsoever to declare that I am mistaken about my valuation of a chicken, when I am not insisting on any such valuation to begin with.
Basically, what you suggested, but generally it manifests in the other direction- instead of some really large number of chickens, it manifests as some really small chance of saving grandma.
I should also make clear that I’m not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility. This is mostly useful when thinking about death / lifespan extension and other sacred values, where refusing to explicitly calculate means that you’re not certain the marginal value of additional expenditure will be equal across all possible means for expenditure. For this particular case, it’s unlikely that you will ever come across a situation where the value system “grandma first, then chickens” will disagree with “grandma is worth a really big number of chickens,” and separating the two will be unlikely to have any direct meaningful impact.
But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere. I also think it’s important to cultivate a mentality where a 1e-12 chance of saving grandma feels different from a 1e-6 chance of saving grandma, rather than your mind just interpreting them both as “a chance of saving grandma.”
Any chance of saving my grandmother is worth any number of chickens.
Well, ok. I am not committed to a multi-level system; I was only formulating a bit of skepticism. That being said, if we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation. (Or perhaps even just giving up on utilitarianism as The One Unified Moral System. There are reasons to suspect we might have to do this anyway.)
Perhaps. But you yourself say:
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn’t by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom. It’s just that real values happen to be also be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.
As this comment points out, the additivity of the value of two events which have dependencies has no claim on their additivity when completely independent. Having two pillows isn’t having one pillow twice.
So I actually don’t think you have to give this up to remain rational. Rationality is creating heuristics for the ideal version of yourself, a self of course which isn’t ideal in any fundamental sense but rather however you choose to define it. Let’s call this your preferred self. You should create heuristics that cause you to emulate your preferred self such that your preferred self would choose you out of any of your available options for doing metaethics, when applying you to the actual moral situations you’ll have in your lifetime (or a weighted-by-probability integral over expected moral situations).
What I’m saying is that I wouldn’t be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn’t check out, that your preferred self only has theories that checks out, and that most simple explanation for how he forms strict orderings of universe states involves real-number assignment.
This all to say, it’s not often we need to weigh the moral value of googleplex chickens over grandma, but if it ever came to that we should prefer to do it right.
Because, as you say:
Indeed, and the right answer here is choosing my grandmother. (btw, it’s “googolplex”, not “googleplex”)
Indeed; but...
They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
Something has to change. Setting M = 0 is easiest and most consistent with my moral intuitions, and leads to correct results in all choices involving humans. (Of course we might have other motivations for choosing a different path, such as abandoning real-valued utilities or abandoning additive aggregation.)
Now here, I am not actually sure what you’re saying. Could you clarify? What theory?
I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no amount of chickens equal to the worth of your grandmother makes you believe you need to give up one of 3 plausible-seeming axioms, and you’re not willing to think there isn’t a consistent reconciliation.
My point about your preferred ethical self is that for him to be a formal agent that you wish to emulate, he is required to have a consistent reconciliation. The suggestion is that most people who claim M = 0, insofar as it relates to N, create inconsistencies elsewhere when trying to relate it to O, P, and Q. Inconsistencies which they as flawed agents are permitted to have, but which ideal agents aren’t. The theory I refer to is the one that takes M = 0.
These are the inconsistencies that the multi-level morality people are trying to reconcile when they still wish to claim that they prefer a dying worm to a dying chicken. Suffice to say that I don’t think an ideal rational agent can reconcile them, but other point was that our actual selves aren’t required to (but that we should acknowledge this).
I see. I confess that I don’t find your “preferred ethical self” concept to be very compelling (and am highly skeptical about your claim that this is “what rationality is”), but I’m willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread.
You shouldn’t take me to have any kind of “theory that takes M = 0”; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else.
My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues.
Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we’re done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don’t know.
(For what it’s worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)
Yeah, probably. To be honest, I’m still rather new to the rodeo here, so I’m not great at formalizing and communicating intuitions, which might just be a roundabout way of saying you shouldn’t listen to me :)
I’m sure it’s been hammered to death elsewhere, but my best prediction, if I had all the arguments laid out, is that I would fall on the hard-line CS-theoretical side, as I often do. It’s probably not obvious why I think every proposed difficulty for additive aggregation runs into problems of its own. I would probably, annoyingly often, fall back on the claim that the particular case doesn’t satisfy the criteria, while additive value still holds.
I don’t think it’d be a lengthy list of criteria, though. All you need is causal independence: the kind of independence that makes counterfactual (or probabilistic) worlds independent enough to be separable. You disvalue a situation where grandma dies with certainty exactly as much as a situation where all four of your grandmas (they got real busy after the legalization of gay marriage in their country) each face a 25% likelihood of death. You do this because you value the possible worlds in proportion to their likelihood, and you sum the values. My intuition is that refusing to also sum the values in analogous non-probabilistic circumstances would cause inconsistencies down the line, but I’m not sure.
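To make the summing explicit (my own worked version of the example, with $u$ standing for the disutility of losing one grandmother): the certain death is worth $1 \cdot u = u$, while the four-grandmother lottery is worth $4 \times 0.25 \times u = u$ in expectation, so the two situations come out equally bad exactly because each possible world is weighted by its likelihood and the results are summed.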
Suppose you’re walking down the street when you see a chicken trapped under a large rock. You can save it or not. If you save it, it costs you nothing except for your time. Would you save it?
Maybe.
Realistically, it would depend on my mood, and any number of other factors.
Why?
If you would save the chicken, then you think its life is worth at least the 10 seconds of your time it costs, which means you value its life at about 1⁄200,000,000th of your own life, as a lower bound.
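(If I am reconstructing the arithmetic correctly, it assumes roughly ten seconds of rescue time against a remaining lifespan on the order of sixty years: $60 \text{ years} \approx 2 \times 10^9$ seconds, and $10 / (2 \times 10^9) = 5 \times 10^{-9} = 1/200{,}000{,}000$.)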
In your view, how much do I think the chicken’s life is worth if I would either save it or not save it, depending on factors I can’t reliably predict or control? If I would save it one day, but not save it the next? If I would save a chicken now, and eat a chicken later?
I don’t take such tendencies to be “revealed preferences” in any strong sense if they are not stable under reflective equilibrium. And I don’t have any belief that I should save the chicken.
Edit: Removed some stuff about tendencies, because it was actually tangential to the point.
It is problematic once you start fine-graining, exactly like in the dust specks/torture debate, where killing a chicken ~ dust speck and killing your grandma ~ torture. There is almost certainly an unbroken chain of comparables between the two extremes.
For what it’s worth, I also choose specks in specks/torture, and find the “chain of comparables” argument unconvincing. (I’d be happy to discuss this, but this is probably not the thread for it.)
That, however, is not all that relevant in practice: the human/nonhuman divide is wide (unless we decide to start uplifting nonhuman species, which I don’t think we should); the smartest nonhuman animals (probably dolphins) might qualify for moral consideration, but we don’t factory-farm dolphins (and I don’t think we should), and chickens and cows certainly don’t qualify; the question of which humans do or don’t qualify is tricky, but that’s why I think we shouldn’t actually kill/torture them with total impunity (cf. bright lines, Schelling fences, etc.).
In short, we do not actually have to do any fine-graining. In the case where we are deciding whether to torture, kill, and eat chickens — that is, the actual, real-world case — my reasoning does not encounter any problems.
Do you assign any negative weight to a suffering chicken? For example, is it OK to simply rip a leg off a live one and make dinner while the injured bird writhes on the ground, slowly bleeding to death?
Sure. However, you raise what is in principle a very solid objection, and so I would like to address it.
Let’s say that I would, all else being equal, prefer that a dog not be tortured. Perhaps I am even willing to take certain actions to prevent a dog from being tortured. Perhaps I also think that two dogs being tortured is worse than one dog being tortured, etc.
However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.
What are we to make of this?
In that case, some component of our utilitarianism might have to be re-examined. Perhaps dogs have a nonzero value, and a lot of dogs have more value than only a few dogs, but no quantity of dogs adds up to one grandmother; but on the other hand, some things are worth more than one grandmother (two grandmothers? all of humanity?).
Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.
(Of course, it’s possible to suppose that we could, if we chose, construct various hypotheticals (perhaps involving some complex series of bets) which would tease out some inconsistency in that set of valuations. That may be the case here, but nothing obvious jumps out at me.)
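For what it’s worth, here is a minimal sketch (my own, not anything anyone in this thread has proposed) showing that such a valuation is at least formally coherent: represent each outcome’s value as a (grandmother-tier, dog-tier) pair and compare the pairs lexicographically, which is exactly how Python compares tuples.

# Value of an outcome as a lexicographic pair: (grandmother-tier, dog-tier).
# Python compares tuples lexicographically, so the first component dominates
# no matter how large the second component grows.
one_grandma_saved = (1, 0)
a_billion_dogs_saved = (0, 10**9)
two_grandmas_saved = (2, 0)

assert one_grandma_saved > a_billion_dogs_saved   # no number of dogs adds up to grandma
assert two_grandmas_saved > one_grandma_saved     # but some things do outweigh her
assert (0, 2) > (0, 1)                            # and more dogs saved is still better than fewer

Whether such lexicographic values survive contact with the VNM axioms is a separate question, taken up further down the thread.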
This sounds a bit like the dust speck vs. torture argument, where some claim that no number of dust specks could ever outweigh torture. I think that there we have to deal with scope insensitivity. On utilitarian aggregation, I recommend section V of the following paper, which shows why the alternatives are absurd. http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf
By the way, the dogs vs. grandma case differs in an important way from specks vs. torture:
The specks are happening to humans.
It is not actually inconsistent to choose TORTURE in specks/torture while choosing GRANDMA in dogs/grandma. All you have to do is value humans (and humans’ utility) while not valuing dogs (or placing dogs on a “lower moral tier” than your grandmother/humans in general).
In other words, “do many specks add up to torture” and “do many dogs add up to grandma” are not the same question.
That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don’t. People often come up with lexical constructs when they feel uncomfortable with the anticipation of having to change their behaviour. As a consequentialist, I figured out that I care a bit about dog welfare, and, being aware of my scope insensitivity, I can see why some people dislike biting the bullet that results from simple additive reasoning. An option would be, though, to say that one’s brain (and therefore one’s moral framework) is only capable of a certain amount of caring for dogs, and that this variable is independent of the number of dogs. That wouldn’t work for me, though, since I care about the content of sentient experience in an additive way. But for the sake of the argument: if a hyperintelligent alien (e.g. an AI) came to Earth, how would you propose the alien figure out which mechanisms in the universe should be of moral concern? What would you think of the agent’s morality if it discounted your welfare lexically?
Eliezer handled this sort of objection in Newcomb’s Problem and Regret of Rationality:
What you are doing here is insisting that I conform to your ritual of cognition (i.e. total utilitarianism with real-number valuation and additive aggregation). I see no reason to accede to such a demand.
The following are facts about what I do and don’t care about:
1) All else being equal, I prefer that a dog not be tortured.
2) All else being equal, I prefer that my grandmother not be tortured.
3) I prefer any number of dogs being tortured to my grandmother being tortured.
4 through ∞) Some other stuff about my preferences, skipped for brevity.
#s 2 and 3 are very strong preferences. #1 is less so.
Now I want to find a moral calculus that captures those facts. You, on the other hand, are telling me that, first, I must accept your moral calculus, and then, that if I do so, I must toss out one of the aforementioned preferences.
I decline to do either of those things. (As Eliezer says in the above link: The utility function is not up for grabs.)
I don’t know. This is the kind of thing that demonstrates why we need FAI theory and CEV.
I would think that its morality is different from mine. Also, I would be sad, because presumably such a morality on the AI’s part would result in bad things for me. Your point?
OK, let’s do some basic Friendly AI theory: would a Friendly AI lexically discount the welfare of “weaker” beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, since our decisions can correspondingly result in bad things for them.
My bad about the ritual; thanks. Out of interest about your preferences: imagine the grandmother and the dog next to each other. A perfect scientist starts exchanging pairs of atoms (let’s assume here that both individuals contain the same number of atoms), so that the grandmother gradually transforms into the dog (with several weird intermediary stages, of course). Because the scientist knows his experiment very well, neither of them will die; in the end it will look as though they have swapped places. At which point does the grandmother stop counting lexically more than the dog? Sometimes continuity arguments can be defeated by saying: “No, I don’t draw an arbitrary line; I adjust gradually, caring a lot about the grandmother at the beginning and just very little about the remaining dog at the end.” But I don’t think that argument works here, since we are dealing with a lexical prioritization. How would you act in such a scenario?
You can ask the same question with the grandmother turning into a tree instead of into a dog.
Identity isn’t in specific atoms. Swapping a carbon atom in the grandma with a carbon atom in the dog has no effect at all.
Jiro’s response shows one good reason why I don’t find that thought experiment very interesting. Another obvious reason is its extreme implausibility and, I strongly suspect, actual incoherence (given what we know about physics and biology). I think I can safely say “I have no idea what I would prefer”, much like Eliezer finds no reason to answer how he would explain his arm being turned into a blue tentacle, and not have that be counted against me.
On to FAI theory:
By definition, it would not, because if it did, then it would be an Unfriendly AI.
How do you get from facts about the behavior of an FAI to claims about how we should act? I spy one of those pesky “is-ought” transitions that bedeviled Hume!
Corollary: why should we care that our behavior results in bad things for animals? Isn’t that the question in the first place, and doesn’t your statement beg said question?
As I’ve said elsewhere in this thread, I also choose SPECKS in specks/torture. As for the paper, I will read it when I have time, and try to get back to you with my thoughts.
Edit: And see this thread for a discussion of whether scope neglect applies to my views.
Hence this recent post on surreal utilities.
My suspicion is that what has to give is the assumption of unlimited transitivity in VNM, but I never bothered to flesh out the details.
Actually, I believe it’s the continuity axiom that rules out lexicographic preferences.
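For reference, here is a rough statement from memory (my own reconstruction, not from any linked post) of the continuity axiom and of how lexical preferences run afoul of it:

$$\text{Continuity: } A \succ B \succ C \;\Rightarrow\; \exists\, p, q \in (0,1) \text{ such that } pA + (1-p)C \succ B \succ qA + (1-q)C.$$

If grandma counts lexically above dogs, take $A$ = grandma safe and dog safe, $B$ = grandma safe and dog tortured, $C$ = grandma tortured. Then $A \succ B \succ C$, but on the natural way of extending the lexical ordering to lotteries, every mixture $pA + (1-p)C$ with $p < 1$ carries some chance of grandma being tortured and so is worse than the sure outcome $B$; no suitable $p$ exists.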
I examined this one, too, but the continuity axiom intuitively makes sense for comparables, except possibly in cases of extreme risk aversion. I am leaning more toward abandoning the transitivity chain when the options are too far apart: something like A > B carrying an uncertainty that increases with the length of the chain between A and B, or with some other quantifiable measure of distance.
[Removed.]
This point is of course true, hence my “all else being equal” clause. I do not actually spend any time helping dogs, for pretty much exactly the reasons you list: there are matters of human benefit to attend to, and dogs are strictly less important.
Your last paragraph is mostly moot, since the behavior you allude to is not at all my actual behavior, but I would like to hear a bit more about the behavior model you refer to. (A link would suffice.)
I’m not entirely sure what the relevance of the speed limit example is.
The problem with throwing out #3 is that you also have to throw out:
(4) How we value a being’s moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)
Which is a rather nice proposition.
Edit: As Said points out, this should be:
(4) How we value a being’s pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)
You don’t, actually. For example, the following is a function:
Let a be a variable representing the abilities of a being. Let E(a) be the ethical value of a being with abilities a. The domain is the nonnegative reals, the range is the reals. Let H be some level of abilities that we have chosen to identify as “human-level abilities”. We define E(a) thus:
a < H : E(a) = 0.
a ≥ H: E(a) = f(a), where f(x) is some other function of our choice.
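A minimal sketch of that piecewise valuation in code (the names and the threshold value are hypothetical, chosen only to illustrate the shape of the function):

# Hypothetical threshold standing in for "human-level abilities".
H = 100.0

def ethical_value(a, f=lambda x: x):
    """Return 0 below the threshold H, and f(a) at or above it."""
    if a < H:
        return 0.0
    return f(a)

So, with f as the identity, ethical_value(5) == 0 while ethical_value(150) == 150, matching the two-case definition above.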
Fair enough. I’ve updated my statement:
Otherwise we could let H be “maleness” and justify sexism, etc.
Uh, would you mind editing your statement back, or adding a note about what it said before? Otherwise I am left looking like a crazy person, spouting non sequiturs ;) Edit: Thanks!
Anyway, your updated statement is no longer vulnerable to my objection, but neither is it particularly “nice” anymore (that is, I don’t endorse it, and I don’t think most people here who take the “speciesist” position do either).
(By the way, letting H be “maleness” doesn’t make a whole lot of sense. It would be very awkward, to say the least, to represent “maleness” as some nonnegative real number; and it assumes that the abilities captured by a are somehow parallel to the gender spectrum; and it would make it so that we value male chickens but not human women; and calling “maleness” a “level of abilities” is pretty weird.)
Haha, sure, updated.
But why don’t you think it’s “nice” to require abilities to be relevant? If you feel pain more strongly than others do, then I care more about when you’re in pain than when others are in pain.
I probably[1] do as well…
… provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post).
[1] Well, at first glance. Actually, I’m not so sure; I don’t seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that’s what matters.
Well, if you follow that post far enough you’ll see that the author thinks animals feel something that’s morally equivalent to pain, s/he just doesn’t like calling it “pain”.
But assuming you genuinely don’t think animals feel something morally equivalent to pain, why? That post gives some high level ideas, but doesn’t list any supporting evidence.
I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took.
I didn’t say anything about animals not feeling pain (what does “morally equivalent to pain” mean?). I said I don’t care about animal pain.
… the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we’re talking past each other.
I apologize for the confusion. Let me attempt to summarize your position:
It is possible for subjectively bad things to happen to animals
Despite this fact, it is not possible for objectively bad things to happen to animals
Is that correct? If so, could you explain what “subjective” and “objective” mean here—usually, “objective” just means something like “the sum of subjective”, in which case #2 trivially follows from #1, which was the source of my confusion.
I don’t know what “subjective” and “objective” mean here, because I am not the one using that wording.
What do you mean by “subjectively bad things”?
My intuition here is solid to an hilariously unjustified degree on “10^20”.