They do not, because if I value grandma at N and a chicken at M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
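The argument here is just the Archimedean property of the reals: for any positive N and M, some integer multiple of M exceeds N. A minimal sketch (the function name and the example values are made up for illustration):

```python
import math

def chickens_needed(n_grandma: float, m_chicken: float) -> int:
    """Smallest positive integer k with k * m_chicken > n_grandma.

    Such a k exists whenever both values are positive reals
    (the Archimedean property), no matter how large N is
    relative to M.
    """
    assert n_grandma > 0 and m_chicken > 0
    return math.floor(n_grandma / m_chicken) + 1

# Illustrative (made-up) valuations:
k = chickens_needed(1_000_000.0, 0.5)
print(k)  # 2000001 -- at this point k * M exceeds N
```

The only way to escape the conclusion is to deny one of the premises: that M is a positive real, or that values aggregate additively.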
I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no number of chickens equal in worth to your grandmother makes you believe you need to give up one of three plausible-seeming axioms, and you're not willing to entertain the possibility that no consistent reconciliation exists.
My point about your preferred ethical self is that for him to be a formal agent that you wish to emulate, he is required to have a consistent reconciliation. The suggestion is that most people who claim M = 0, insofar as it relates to N, create inconsistencies elsewhere when trying to relate it to O, P, and Q. Inconsistencies which they as flawed agents are permitted to have, but which ideal agents aren’t. The theory I refer to is the one that takes M = 0.
These are the inconsistencies that the multi-level morality people are trying to reconcile when they still wish to claim that they prefer a dying worm to a dying chicken. Suffice to say that I don't think an ideal rational agent can reconcile them, but my other point was that our actual selves aren't required to (though we should acknowledge this).
I see. I confess that I don’t find your “preferred ethical self” concept to be very compelling (and am highly skeptical about your claim that this is “what rationality is”), but I’m willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread.
You shouldn’t take me to have any kind of “theory that takes M = 0”; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else.
My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues.
Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we’re done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don’t know.
(For what it’s worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)
I suspect that those would be longer than should be posted deep in a tangential comment thread.
Yeah, probably. To be honest I'm still rather new to the rodeo here, so I'm not great at formalizing and communicating intuitions, which might just be a long-winded way of saying you shouldn't listen to me :)
I'm sure it's been hammered to death elsewhere, but my best prediction for which side I would fall on, if I had all the arguments laid out, is the hard-line CS-theoretical approach, as usual for me. I suspect that every proposed difficulty for additive aggregation has problems of its own. I would probably, annoyingly often, fall back on the claim that any particular case doesn't satisfy the criteria but that additive value still holds.
I don't think it'd be a lengthy list of criteria, though. All you need is causal independence: the kind of independence that makes counterfactual (or probabilistic) worlds independent enough to be separable. You disvalue a situation where grandma dies with certainty equally with a situation where all four of your grandmas (they all got real busy after the legalization of gay marriage in their country) each face a 25% likelihood of death. You do this because you value the possible worlds in proportion to their likelihood, and you sum the values. My intuition is that refusing to also sum the values in analogous non-probabilistic circumstances would cause inconsistencies down the line, but I'm not sure.
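The grandma comparison above is just linearity of expectation. A minimal sketch, with made-up disvalues (each grandma's death assigned -1 unit; nothing here is a claim about the right numbers):

```python
def expected_value(worlds):
    """worlds: list of (probability, value) pairs.

    Returns the probability-weighted sum of the values -- the
    'sum over possible worlds' move described above.
    """
    return sum(p * v for p, v in worlds)

# World A: one grandma dies with certainty.
certain = expected_value([(1.0, -1.0)])

# World B: four grandmas, each independently facing a 25% chance
# of death. By linearity of expectation the per-grandma expected
# losses simply add, regardless of the correlations between them.
four_risky = sum(
    expected_value([(0.25, -1.0), (0.75, 0.0)]) for _ in range(4)
)

print(certain, four_risky)  # -1.0 -1.0 -- the two situations come out equal
```

The non-probabilistic analogue would replace the probability weights with actual counts of independent lives, which is exactly where the additive-aggregation dispute bites.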