For what it’s worth, I also choose specks in specks/torture, and find the “chain of comparables” argument unconvincing. (I’d be happy to discuss this, but this is probably not the thread for it.)
That, however, is not all that relevant in practice: the human/nonhuman divide is wide (unless we decide to start uplifting nonhuman species, which I don’t think we should); the smartest nonhuman animals (probably dolphins) might qualify for moral consideration, but we don’t factory-farm dolphins (and I don’t think we should), and chickens and cows certainly don’t qualify; the question of which humans do or don’t qualify is tricky, but that’s why I think we shouldn’t actually kill/torture them with total impunity (cf. bright lines, Schelling fences, etc.).
In short, we do not actually have to do any fine-graining. In the case where we are deciding whether to torture, kill, and eat chickens — that is, the actual, real-world case — my reasoning does not encounter any problems.
Do you assign any negative weight to a suffering chicken? For example, is it OK to simply rip a leg off a live one and make dinner of it, while the injured bird writhes on the ground, slowly bleeding to death?
Sure. However, you raise what is in principle a very solid objection, and so I would like to address it.
Let’s say that I would, all else being equal, prefer that a dog not be tortured. Perhaps I am even willing to take certain actions to prevent a dog from being tortured. Perhaps I also think that two dogs being tortured is worse than one dog being tortured, etc.
However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.
What are we to make of this?
In that case, some component of our utilitarianism might have to be re-examined. Perhaps dogs have a nonzero value, and a lot of dogs have more value than only a few dogs, but no quantity of dogs adds up to one grandmother; but on the other hand, some things are worth more than one grandmother (two grandmothers? all of humanity?).
Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.
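One way to make this concrete (a sketch of my own, not part of the original comment): lexical valuations behave like tuples compared component-wise, which is exactly what a single real number cannot emulate. Python's built-in tuple comparison is lexicographic, so it models the "no quantity of dogs adds up to one grandmother" structure directly:

```python
# Sketch: lexical "utilities" as (grandmothers_spared, dogs_spared) tuples.
# Python compares tuples lexicographically, so the first component
# always dominates the second, no matter how large the second grows.

def better(a, b):
    """True if outcome a is lexically preferred to outcome b."""
    return a > b  # tuple comparison: grandmothers first, then dogs

grandmother_safe = (1, 0)
a_million_dogs_safe = (0, 1_000_000)
a_googol_dogs_safe = (0, 10**100)

# More dogs spared is still better than fewer dogs spared...
assert better(a_googol_dogs_safe, a_million_dogs_safe)
# ...but no number of dogs ever adds up to one grandmother.
assert better(grandmother_safe, a_googol_dogs_safe)

# No real-valued utility u can capture this: if u(dog) > 0, then some
# finite n satisfies n * u(dog) > u(grandmother), contradicting the
# second assertion above.
```

This is just the standard observation that lexicographic orders are non-Archimedean, while the reals are Archimedean.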
(Of course, it’s possible to suppose that we could, if we chose, construct various hypotheticals (perhaps involving some complex series of bets) which would tease out some inconsistency in that set of valuations. That may be the case here, but nothing obvious jumps out at me.)
However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.
This sounds a bit like the dustspeck vs. torture argument, where some claim that no number of dustspecks could ever outweigh torture. I think that what we are dealing with there is scope insensitivity.
On utilitarian aggregation, I recommend section V of the following paper. It shows why the alternatives are absurd. http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf
By the way, the dogs vs. grandma case differs in an important way from specks vs. torture:
The specks are happening to humans.
It is not actually inconsistent to choose TORTURE in specks/torture while choosing GRANDMA in dogs/grandma. All you have to do is value humans (and humans’ utility) while not valuing dogs (or placing dogs on a “lower moral tier” than your grandmother/humans in general).
In other words, “do many specks add up to torture” and “do many dogs add up to grandma” are not the same question.
That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don’t. People often come up with lexical constructs when they feel uncomfortable anticipating having to change their behaviour. As a consequentialist, I have figured out that I care a bit about dog welfare, and, being aware of my scope insensitivity, I can see why some people dislike biting the bullet that results from simple additive reasoning. One option, though, would be to say that one’s brain (and therefore one’s moral framework) is only capable of a certain amount of caring for dogs, and that this amount is independent of the number of dogs. That wouldn’t work for me, though, since I care about the content of sentient experience in an additive way.

But for the sake of argument: if a hyperintelligent alien (e.g. an AI) came to Earth, how would you propose the alien figure out which mechanisms in the universe should be of moral concern? What would you think of the agent’s morality if it discounted your welfare lexically?
That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don’t.
Eliezer handled this sort of objection in Newcomb’s Problem and Regret of Rationality:

You can’t tell me, first, that above all I must conform to a particular ritual of cognition, and then that, if I conform to that ritual, I must change my morality to avoid being Dutch-booked. Toss out the losing ritual; don’t change the definition of winning.
What you are doing here is insisting that I conform to your ritual of cognition (i.e. total utilitarianism with real-number valuation and additive aggregation). I see no reason to accede to such a demand.
The following are facts about what I do and don’t care about:
1) All else being equal, I prefer that a dog not be tortured.
2) All else being equal, I prefer that my grandmother not be tortured.
3) I prefer any number of dogs being tortured to my grandmother being tortured.
4 through ∞) Some other stuff about my preferences, skipped for brevity.
#s 2 and 3 are very strong preferences. #1 is less so.
Now I want to find a moral calculus that captures those facts. You, on the other hand, are telling me that, first, I must accept your moral calculus, and then, that if I do so, I must toss out one of the aforementioned preferences.
I decline to do either of those things. (As Eliezer says in the above link: The utility function is not up for grabs.)
But for the sake of argument: if a hyperintelligent alien (e.g. an AI) came to Earth, how would you propose the alien figure out which mechanisms in the universe should be of moral concern?
I don’t know. This is the kind of thing that demonstrates why we need FAI theory and CEV.
What would you think of the agent’s morality if it discounted your welfare lexically?
I would think that its morality is different from mine. Also, I would be sad, because presumably such a morality on the AI’s part would result in bad things for me. Your point?
OK, let’s do some basic Friendly AI theory: would a Friendly AI lexically discount the welfare of “weaker” beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings here, for our decisions can correspondingly result in bad things for them.
My bad about the ritual. Thanks. Out of interest about your preferences: imagine the grandmother and the dog next to each other. A perfect scientist starts to exchange pairs of atoms (let’s assume here that both individuals contain the same number of atoms), so that the grandmother gradually transforms into the dog (with several weird intermediary stages along the way, of course). Because the scientist knows his experiment very well, neither of the subjects will die; in the end it will look like the two have simply changed places.

At which point does the grandmother stop counting lexically more than the dog?

Sometimes continuity arguments can be defeated by saying: “No, I don’t draw an arbitrary line; I adjust gradually, caring a lot about the grandmother in the beginning and just very little about the remaining dog in the end.” But I don’t think that argument works here, because we are dealing with a lexical prioritization. How would you act in such a scenario?
You can ask the same question with the grandmother turning into a tree instead of into a dog.
A perfect scientist starts to exchange pairs of atoms (let’s assume here that both individuals contain the same number of atoms), so that the grandmother gradually transforms into the dog (with several weird intermediary stages along the way, of course).
Identity isn’t in specific atoms. The effect of swapping a carbon atom in the grandma with a carbon atom in the dog is none at all.
Jiro’s response shows one good reason why I don’t find that thought experiment very interesting. Another obvious reason is its extreme implausibility and, I strongly suspect, actual incoherence (given what we know about physics and biology). I think I can safely say “I have no idea what I would prefer”, much like Eliezer finds no reason to answer how he would explain his arm being turned into a blue tentacle, and not have that be counted against me.
On to FAI theory:
Would a Friendly AI lexically discount the welfare of “weaker” beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI?
By definition, it would not, because if it did, then it would be an Unfriendly AI.
If not, then I think we should also rethink our moral behaviour towards weaker beings here, for our decisions can correspondingly result in bad things for them.
How do you get from facts about the behavior of an FAI to claims about how we should act? I spy one of those pesky “is-ought” transitions that bedeviled Hume!
Corollary: why should we care that our behavior results in bad things for animals? Isn’t that the question in the first place, and doesn’t your statement beg said question?
As I’ve said elsewhere in this thread, I also choose SPECKS in specks/torture. As for the paper, I will read it when I have time, and try to get back to you with my thoughts.
Edit: And see this thread for a discussion of whether scope neglect applies to my views.
Hence this recent post on surreal utilities.

My suspicion is that what has to give is the assumption of unlimited transitivity in VNM, but I never bothered to flesh out the details.

Actually, I believe it’s the continuity axiom that rules out lexicographic preferences.
I examined this one, too, but the continuity axiom intuitively makes sense for comparables, except possibly in cases of extreme risk aversion. I am leaning more toward abandoning transitivity when the options are too far apart: something like the comparison A > B carrying an uncertainty that increases with the length of the chain between A and B, or with some other quantifiable measure of distance.
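For reference, here is the continuity axiom in question and the standard demonstration that lexical priorities violate it (my gloss, not from the thread):

```latex
\textbf{Continuity (VNM):} if $A \succ B \succ C$, then there exists
$p \in (0,1)$ such that $pA + (1-p)C \sim B$.

\textbf{Lexicographic counterexample.} Value outcomes by pairs $(g, d)$
-- grandmother-welfare first, dog-welfare second -- ordered
lexicographically, and evaluate lotteries coordinate-wise. Take
\[
  A = (1, 0), \qquad B = (0, 1), \qquad C = (0, 0),
\]
so that $A \succ B \succ C$. The lottery $pA + (1-p)C$ has value
$(p, 0)$. For every $p > 0$ we get $(p, 0) \succ (0, 1) = B$, because
the first coordinate dominates; at $p = 0$ the lottery is just
$C \prec B$. No $p$ yields indifference, so continuity fails, and no
real-valued utility function can represent these preferences.
```

This is why a lexical dogs-vs.-grandma valuation is compatible with the other VNM axioms (including transitivity) but not with continuity.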
Hours you spend helping dogs are hours you could have spent helping humans, e.g. earning money; having more money is associated with longer life.
This point is of course true, hence my “all else being equal” clause. I do not actually spend any time helping dogs, for pretty much exactly the reasons you list: there are matters of human benefit to attend to, and dogs are strictly less important.
Your last paragraph is mostly moot, since the behavior you allude to is not at all my actual behavior, but I would like to hear a bit more about the behavior model you refer to. (A link would suffice.)
I’m not entirely sure what the relevance of the speed limit example is.