This seems like a weird preference to have. This de facto means that you would never pay any attention whatsoever to the lives of chickens, since any infinitesimally small change to the probability of your grandmother dying would outweigh whatever moral relevance the chickens might have. For all practical purposes in our world (which is interconnected to a degree that almost all actions will have some potential consequences for your grandmother), an agent following this preference would be indistinguishable from someone who does not care at all about chickens.
This de facto means that you would never pay any attention whatsoever to the lives of chickens
Only if that agent has a grandmother.
Suppose my grandmother (may she live to be 120) were to die. My preferences about the survival of chickens would now come into play. This is hardly an exotic scenario! There are many parallel constructions we can imagine. (Or do you propose that we decline to have preferences that bear only on possible future situations, not currently possible ones?)
Edited to add:
This is called “lexicographic preferences”, and it too is hardly exotic or unprecedented.
(end edit)
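A minimal sketch of the idea in Python (the outcome fields and numbers here are purely illustrative assumptions, not anything specified in the discussion): outcomes are compared on the first criterion alone, and the second criterion only ever breaks ties.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    grandmother_alive: bool  # primary criterion: dominates everything else
    chickens_saved: int      # secondary criterion: only breaks ties

def prefers(a: Outcome, b: Outcome) -> bool:
    """True iff `a` is strictly preferred to `b` under a lexicographic
    ordering: grandmother first, chickens second."""
    if a.grandmother_alive != b.grandmother_alive:
        return a.grandmother_alive  # any difference on the first criterion settles it
    return a.chickens_saved > b.chickens_saved  # otherwise chickens matter normally

# Once the grandmother's fate is fixed either way, chicken preferences do real work:
assert prefers(Outcome(True, 10), Outcome(True, 0))
assert prefers(Outcome(False, 10), Outcome(False, 0))
# But no number of chickens outweighs the first criterion:
assert prefers(Outcome(True, 0), Outcome(False, 10**9))
```

Note that such an agent is not indifferent to chickens: as soon as the first criterion is settled, the chicken count decides, which is just the "suppose my grandmother were to die" point above.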
+++
Of course, even that is moot if we reject the proposition that “our world … is interconnected to a degree that almost all actions will have some potential consequences to your grandmother”.
And there are good reasons to reject it. If nothing else, it’s a fact that given sufficiently small probabilities, we humans are not capable of considering numbers of such precision, and so it seems strange to speak of basing our choices on them! There is also noise in measurement, errors in calculation, inaccuracies in the model, uncertainty, and a host of other factors that add up to the fact that in practice, “almost all actions” will, in fact, have no (foreseeable) consequences for my grandmother.
The value of information from finding out what consequences any action has for the life of your grandmother is infinitely larger than the value you would assign to any number of chickens. De facto this means that even if your grandmother is dead, as long as you are not literally 100% certain that she is dead, forever gone, and could not possibly be brought back, you completely ignore the plight of chickens.
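One way to make this concrete: a small sketch, assuming (purely for illustration) that the agent compares lotteries by their expected values, lexicographically ordered, with made-up numbers.

```python
def lex_prefers_lottery(a: tuple, b: tuple) -> bool:
    """a and b are (probability_grandmother_survives, expected_chickens_saved).
    Lotteries are compared lexicographically: survival probability first,
    expected chickens only as a tie-break."""
    if a[0] != b[0]:
        return a[0] > b[0]
    return a[1] > b[1]

# A one-in-a-trillion improvement in the grandmother's survival odds beats
# saving a billion chickens for certain:
assert lex_prefers_lottery((0.5 + 1e-12, 0), (0.5, 10**9))
# Only when the survival probabilities are exactly equal do the chickens count:
assert lex_prefers_lottery((0.5, 10**9), (0.5, 0))
```

On this framing, the comment's conclusion follows: unless the difference in probability is exactly zero, no number of chickens ever enters into the decision.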
The fact that he is not willing to kill his grandmother to save the chickens doesn’t imply that chickens have 0 value or that his grandmother has infinite value.
Consider the problem from an egocentric point of view: being responsible for one's grandmother's death feels awful, but dedicating your life to the very unlikely possibility of saving someone who has been declared dead also seems awful.
Stuart wrote a post about this a while ago, though it’s not the most understandable.