Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.
Any chance of saving my grandmother is worth any number of chickens.
I should also make clear that I’m not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility.
Well, ok. I am not committed to a multi-level system; I was only formulating a bit of skepticism. That being said, if we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation. (Or perhaps even just giving up on utilitarianism as The One Unified Moral System. There are reasons to suspect we might have to do this anyway.)
For this particular case, it’s unlikely that you will ever come across a situation where the value system “grandma first, then chickens” will disagree with “grandma is worth a really big number of chickens,” and separating the two will be unlikely to have any direct meaningful impact.
Perhaps. But you yourself say:
But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn’t by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.
If we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom. It’s just that real values also happen to be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.
As this comment points out, how the values of two events combine when those events have dependencies tells us nothing about how they combine when the events are completely independent. Having two pillows isn’t having one pillow twice.
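For concreteness, here is a minimal sketch (toy numbers of my own, not anyone’s actual values) of the two kinds of valuation this thread keeps contrasting: a real-valued additive utility versus a tiered “grandma first, then chickens” ordering. Both strictly order outcomes; only the additive real-valued one lets a large enough flock of chickens outweigh grandma.

```python
# Toy comparison of the two valuation schemes under discussion.
# All numbers here are illustrative assumptions, not anyone's actual values.

def real_valued_utility(grandmas_saved: int, chickens_saved: int) -> float:
    """Additive real-valued utility: grandma worth N, each chicken worth M."""
    N, M = 1_000_000.0, 0.001
    return N * grandmas_saved + M * chickens_saved

def tiered_utility(grandmas_saved: int, chickens_saved: int) -> tuple:
    """Lexicographic 'grandma first' utility: chickens only break ties."""
    return (grandmas_saved, chickens_saved)

# Both schemes strictly order outcomes...
assert real_valued_utility(1, 0) > real_valued_utility(0, 0)
assert tiered_utility(1, 0) > tiered_utility(0, 0)

# ...but only the real-valued one lets a big enough flock win.
flock = 10**10
assert real_valued_utility(0, flock) > real_valued_utility(1, 0)
assert tiered_utility(0, flock) < tiered_utility(1, 0)
```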
Any chance of saving my grandmother is worth any number of chickens.
So I actually don’t think you have to give this up to remain rational. Rationality is creating heuristics for the ideal version of yourself, a self which of course isn’t ideal in any fundamental sense, but rather ideal however you choose to define it. Let’s call this your preferred self. You should create heuristics that cause you to emulate your preferred self, such that your preferred self would choose you out of all your available options for doing metaethics, when applying you to the actual moral situations you’ll face in your lifetime (or a weighted-by-probability integral over expected moral situations).
What I’m saying is that I wouldn’t be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn’t check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe-states involves real-number assignment.
This is all to say: it’s not often we need to weigh the moral value of a googleplex of chickens against grandma, but if it ever came to that, we should prefer to do it right.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn’t by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.
Because, as you say:
This is all to say: it’s not often we need to weigh the moral value of a googleplex of chickens against grandma, but if it ever came to that, we should prefer to do it right.
Indeed, and the right answer here is choosing my grandmother. (btw, it’s “googolplex”, not “googleplex”)
If we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom.
Indeed; but...
It’s just that real values also happen to be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.
They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
Something has to change. Setting M = 0 is easiest and most consistent with my moral intuitions, and leads to correct results in all choices involving humans. (Of course we might have other motivations for choosing a different path, such as abandoning real-valued utilities or abandoning additive aggregation.)
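A minimal sketch of the arithmetic in the paragraph above, with placeholder numbers for N and M: for any positive M, the Archimedean property of the reals yields a finite k with kM > N, and setting M = 0 is precisely what blocks it.

```python
import math

# Placeholder values -- the argument goes through for any N > M > 0.
N = 1_000_000.0   # assumed value of grandma
M = 0.001         # assumed value of one chicken

# Archimedean property: some finite number of chickens outweighs grandma.
k = math.floor(N / M) + 1
assert k * M > N

# Setting M = 0 blocks this: no finite flock ever adds up to grandma.
M = 0.0
assert not any(k * M > N for k in (1, 10**9, 10**100))
```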
What I’m saying is that I wouldn’t be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn’t check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe-states involves real-number assignment.
Now here, I am not actually sure what you’re saying. Could you clarify? What theory?
They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no amount of chickens equal to the worth of your grandmother makes you believe you need to give up one of 3 plausible-seeming axioms, and you’re not willing to think there isn’t a consistent reconciliation.
My point about your preferred ethical self is that for him to be a formal agent that you wish to emulate, he is required to have a consistent reconciliation. The suggestion is that most people who claim M = 0, insofar as it relates to N, create inconsistencies elsewhere when trying to relate it to O, P, and Q. Inconsistencies which they as flawed agents are permitted to have, but which ideal agents aren’t. The theory I refer to is the one that takes M = 0.
These are the inconsistencies that the multi-level morality people are trying to reconcile when they still wish to claim that they prefer a dying worm to a dying chicken. Suffice to say that I don’t think an ideal rational agent can reconcile them, but my other point was that our actual selves aren’t required to (though we should acknowledge this).
I see. I confess that I don’t find your “preferred ethical self” concept to be very compelling (and am highly skeptical about your claim that this is “what rationality is”), but I’m willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread.
You shouldn’t take me to have any kind of “theory that takes M = 0”; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else.
My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues.
Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we’re done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don’t know.
(For what it’s worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)
I suspect that those would be longer than should be posted deep in a tangential comment thread.
Yeah, probably. To be honest I’m still rather new to the rodeo here, so I’m not amazing at formalizing and communicating intuitions, which might just be a long-winded way of saying you shouldn’t listen to me :)
I’m sure it’s been hammered to death elsewhere, but my best prediction for which side I would fall on, if I had all the arguments laid out, is the hard-line CS-theoretical approach, as I often do. It’s probably not obvious why there would be problems with every proposed difficulty for additive aggregation; I would probably, annoyingly often, fall back on the claim that any particular case doesn’t satisfy the criteria but that additive value still holds.
I don’t think it’d be a lengthy list of criteria, though. All you need is causal independence: the kind of independence that makes counterfactual (or probabilistic) worlds independent enough to be separable. You disvalue a situation where grandma dies with certainty equivalently to a situation where all four of your grandmas (they got real busy after the legalization of gay marriage in their country) are each subjected to a 25% likelihood of death. You do this because you value the possible worlds in proportion to their likelihood, and you sum the values. My intuition is that refusing to also sum the values in analogous non-probabilistic circumstances would cause inconsistencies down the line, but I’m not sure.
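A minimal sketch of that probabilistic equivalence, with an arbitrary placeholder for the disvalue of one grandma’s death: weighting each possible world by its likelihood and summing makes the two situations come out equal.

```python
V = 1.0  # assumed disvalue of one grandma dying (arbitrary placeholder)

# Situation 1: one grandma dies with certainty.
expected_disvalue_certain = 1.0 * V

# Situation 2: four grandmas, each facing an independent 25% chance of death.
expected_disvalue_spread = sum(0.25 * V for _ in range(4))

# Valuing possible worlds in proportion to their likelihood and summing
# makes the two situations equally disvalued.
assert expected_disvalue_certain == expected_disvalue_spread == 1.0
```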