Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?
When I imagine sacrificing one chicken, it looks like a voodoo ritual or a few pounds of meat, worth maybe tens of dollars. When I imagine sacrificing a thousand chickens, it looks like feeding a person for several years, and maybe tens of thousands of dollars. When I imagine sacrificing a million chickens, it looks like feeding a thousand people for several years, and maybe tens of millions of dollars. When I imagine sacrificing a billion chickens, it looks like feeding millions of people for several years, and a sizeable chunk of the US poultry industry. When I imagine sacrificing a trillion chickens, it looks like feeding the population of the US for a decade, and several times the global poultry industry. (I know this is in terms of their prey value, but since I view chickens as prey that’s how I imagine them, not in terms of individual subjective experience.)
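(A rough back-of-the-envelope sketch of that scaling in Python; the per-chicken dollar figure and the chickens-per-person figure are assumptions pulled from the imagery above, not industry data.)

```python
# Rough sketch of the scaling above. Both constants are assumptions
# taken from the imagery, not real figures.
DOLLARS_PER_CHICKEN = 20           # "a few pounds of meat, worth maybe tens of dollars"
CHICKENS_PER_PERSON_FED = 1_000    # a thousand chickens ~ feeds one person for several years

for n in (10**3, 10**6, 10**9, 10**12):
    people_fed = n / CHICKENS_PER_PERSON_FED
    dollars = n * DOLLARS_PER_CHICKEN
    print(f"{n:>16,} chickens ~ feeds {people_fed:>13,.0f} people for several years, ~${dollars:,.0f}")
```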
And that’s only 1e12! There are lots of bigger numbers. What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you’re indifferent between them. When I imagine weighing one person against the global poultry industry, it’s not obvious to me that one person is the right choice, and it feels to me that if it’s not obvious, you can just increase the number of chickens.
One counterargument to this is “but chickens and humans are on different levels of moral value, and it’s wrong to trade off a higher level for a lower level.” I don’t think that’s a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).
I… don’t see how your examples/imagery answer my question.
When I imagine weighing one person against the global poultry industry, it’s not obvious to me that one person is the right choice, and it feels to me that if it’s not obvious, you can just increase the number of chickens.
It is completely obvious to me. (I assume by “global poultry industry” you mean “that number of chickens”, since if we literally eradicated global chicken production, lots of bad effects (on humans) would result.)
One counterargument to this is “but chickens and humans are on different levels of moral value, and it’s wrong to trade off a higher level for a lower level.” I don’t think that’s a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).
Don’t be so sure! Multi-level morality, by the way, does not necessarily mean that my grandmother occupies the top level all by herself. However, that’s a separate discussion; I started this subthread from an assumption of basic utilitarianism.
Anyway, I think — with apologies — that you are still misunderstanding me. Take this:
What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you’re indifferent between them.
There is no level where I’d be indifferent between them. That’s my point. Why would I try to find such a level? What moral intuition do you think I might have that would motivate me to try this?
Anyway, I think — with apologies — that you are still misunderstanding me.
Yes and no. I wasn’t aware that you were using a multi-level morality, but agree with you that it doesn’t obviously break and doesn’t require infinite utilities in any particular level.
That said, my experience has been that every multi-level morality I’ve looked at hard enough has turned out to map to the real line, but because of measurement difficulties it looked like there were clusters of incomparable utilities. It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it’s 0 I don’t take their confidence as informative. If they’re an expert in decision science and eliciting this sort of information, then I do take it seriously, but I’m still suspicious that This Time It’s Different.
Another big concern here is revealed preferences vs. stated preferences. Many people, when you ask them about it, will claim that they would not accept money in exchange for a risk to their life, but then in practice do so continually, e.g. at the level where they accept $10 in exchange for a one-in-a-million chance of dying. One interpretation is that they’re behaving irrationally, but I think the more plausible interpretation is that they’re acting rationally but talking irrationally. (Talking irrationally can be a rational act, as I talk about here.)
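(The implied arithmetic in that example, spelled out; the $10 and one-in-a-million figures are simply the ones used above.)

```python
# Accepting $10 for a one-in-a-million chance of dying implies a
# revealed value on one's own life of roughly $10 / 1e-6 = $10,000,000.
payment = 10.0
risk_of_death = 1e-6
implied_value_of_life = payment / risk_of_death
print(f"Implied value of own life: ${implied_value_of_life:,.0f}")  # $10,000,000
```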
Well, as far as revealed vs. stated preferences go, I don’t think we have any way of subjecting my chicken vs. grandmother preference to a real-world test, so I suppose You’ll Just Have To Take My Word For It. As for the rest...
It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it’s 0 I don’t take their confidence as informative.
What would it mean for me to be mistaken about this? Are you suggesting that, despite my belief that I’d trade any number of chickens to save my grandmother, there’s some situation we might encounter, some really large number of chickens, faced with which I would say: “Well, shit. I guess I’ll take the chickens after all. Sorry, grandma”?
I find it very strange that you are taking my comments to be statements about which particular real number value I would assign to a single chicken. I certainly do not intend them that way. I intend them to be statements about what I would do in various situations; which choice, out of various sets of options, I would make.
Whether or not we can then transform those preferences into real-number valuations of single chickens, or sets of many chickens, is a question we certainly could ask, but the answer to that question is a conclusion that we would be drawing from the givens. That conclusion might be something like “my preferences do not coherently translate into assigning a real-number value to a chicken”! But even more importantly, we do not have to draw any conclusion or assign any values to anything, and it would nonetheless be a fact about my preferences that I would trade any number of chickens for my grandmother. So it does not make any sense whatsoever to declare that I am mistaken about my valuation of a chicken, when I am not insisting on any such valuation to begin with.
What would it mean for me to be mistaken about this?
Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.
I should also make clear that I’m not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility. This is mostly useful when thinking about death / lifespan extension and other sacred values, where refusing to explicitly calculate means that you’re not certain the marginal value of additional expenditure will be equal across all possible means for expenditure. For this particular case, it’s unlikely that you will ever come across a situation where the value system “grandma first, then chickens” will disagree with “grandma is worth a really big number of chickens,” and separating the two will be unlikely to have any direct meaningful impact.
But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere. I also think it’s important to cultivate a mentality where a 1e-12 chance of saving grandma feels different from a 1e-6 chance of saving grandma, rather than your mind just interpreting them both as “a chance of saving grandma.”
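(A toy illustration of the point above about refusing to explicitly calculate, with made-up numbers and constant returns assumed purely for simplicity: if two ways of spending on the same sacred value have different marginal returns, a split that ignores the difference gives up some of what you hold sacred.)

```python
# Hypothetical costs per life saved for two interventions; constant
# marginal returns assumed only for illustration.
budget = 100_000
cost_per_life_a = 3_000
cost_per_life_b = 10_000

even_split = (budget / 2) / cost_per_life_a + (budget / 2) / cost_per_life_b
all_to_a = budget / cost_per_life_a

print(f"50/50 split saves ~{even_split:.1f} lives")  # ~21.7
print(f"All to A saves   ~{all_to_a:.1f} lives")     # ~33.3
```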
Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.
Any chance of saving my grandmother is worth any number of chickens.
I should also make clear that I’m not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility.
Well, ok. I am not committed to a multi-level system; I was only formulating a bit of skepticism. That being said, if we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation. (Or perhaps even just giving up on utilitarianism as The One Unified Moral System. There are reasons to suspect we might have to do this anyway.)
For this particular case, it’s unlikely that you will ever come across a situation where the value system “grandma first, then chickens” will disagree with “grandma is worth a really big number of chickens,” and separating the two will be unlikely to have any direct meaningful impact.
Perhaps. But you yourself say:
But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn’t by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.
If we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom. It’s just that real values happen to also be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.
As this comment points out, how the values of two events combine when they depend on each other says nothing about how they combine when they are completely independent. Having two pillows isn’t having one pillow twice.
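(A minimal sketch of the ordering point above, in Python with made-up states: any order-preserving assignment of reals reproduces the same ranking, so the particular numbers are extra degrees of freedom; additivity is the further assumption that lets you rank combinations you never explicitly considered.)

```python
# A strict preference ordering over a few universe-states, best first.
ranking = [
    "grandma alive, 2 chickens",
    "grandma alive, 1 chicken",
    "grandma alive, 0 chickens",
    "grandma dead, 2 chickens",
]

# Two different real-valued assignments that both preserve the ordering;
# which numbers you pick is an unneeded degree of freedom.
utility_a = {s: -i for i, s in enumerate(ranking)}
utility_b = {s: 1000 - i**2 for i, s in enumerate(ranking)}

assert sorted(ranking, key=utility_a.get, reverse=True) == ranking
assert sorted(ranking, key=utility_b.get, reverse=True) == ranking

# Only with an additivity assumption (e.g. U = U_grandma + n * U_chicken)
# can we rank an as-yet-unconsidered state like "grandma alive, 57 chickens".
```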
Any chance of saving my grandmother is worth any number of chickens.
So I actually don’t think you have to give this up to remain rational. Rationality is creating heuristics for the ideal version of yourself, a self which of course isn’t ideal in any fundamental sense, but rather ideal in whatever sense you choose to define it. Let’s call this your preferred self. You should create heuristics that cause you to emulate your preferred self, such that your preferred self would choose you out of any of your available options for doing metaethics, when applying you to the actual moral situations you’ll face in your lifetime (or a weighted-by-probability integral over expected moral situations).
What I’m saying is that I wouldn’t be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn’t check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe-states involves real-number assignment.
This is all to say, it’s not often we need to weigh the moral value of googleplex chickens over grandma, but if it ever came to that, we should prefer to do it right.
So I don’t think I ought to just say “eh, let’s call grandma’s worth a googolplex of chickens and call it a day”.
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn’t by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.
Because, as you say:
This is all to say, it’s not often we need to weigh the moral value of googleplex chickens over grandma, but if it ever came to that, we should prefer to do it right.
Indeed, and the right answer here is choosing my grandmother. (btw, it’s “googolplex”, not “googleplex”)
If we are using real-valued utilities, then we’re back to either assigning chickens 0 value or abandoning additive aggregation.
Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom.
Indeed; but...
It’s just that real values happen to also be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states.
They do not, because if I value grandma at N and a chicken at M, where N > 0, M > 0, and N > M, then, by the Archimedean property of the reals, there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
Something has to change. Setting M = 0 is easiest and most consistent with my moral intuitions, and leads to correct results in all choices involving humans. (Of course we might have other motivations for choosing a different path, such as abandoning real-valued utilities or abandoning additive aggregation.)
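(For concreteness, the Archimedean point above in a few lines of Python, with exact rational arithmetic and deliberately made-up values.)

```python
from fractions import Fraction

N = Fraction(1)           # value of grandma (normalized; hypothetical)
M = Fraction(1, 10**24)   # value of one chicken: tiny, but nonzero

k = N // M + 1            # smallest integer count of chickens with k*M > N
assert k * M > N          # enough chickens outweigh grandma whenever M > 0
print(f"{k} chickens outweigh grandma")
```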
What I’m saying is that I wouldn’t be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn’t check out, that your preferred self only has theories that check out, and that the simplest explanation for how he forms strict orderings of universe-states involves real-number assignment.
Now here, I am not actually sure what you’re saying. Could you clarify? What theory?
They do not, because if I value grandma at N and a chicken at M, where N > 0, M > 0, and N > M, then, by the Archimedean property of the reals, there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.
I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no number of chickens equal to the worth of your grandmother makes you believe you need to give up one of three plausible-seeming axioms, and you’re not willing to think there isn’t a consistent reconciliation.
My point about your preferred ethical self is that for him to be a formal agent that you wish to emulate, he is required to have a consistent reconciliation. The suggestion is that most people who claim M = 0, insofar as it relates to N, create inconsistencies elsewhere when trying to relate it to O, P, and Q. Inconsistencies which they as flawed agents are permitted to have, but which ideal agents aren’t. The theory I refer to is the one that takes M = 0.
These are the inconsistencies that the multi-level morality people are trying to reconcile when they still wish to claim that they prefer a dying worm to a dying chicken. Suffice it to say that I don’t think an ideal rational agent can reconcile them, but my other point was that our actual selves aren’t required to (though we should acknowledge this).
I see. I confess that I don’t find your “preferred ethical self” concept to be very compelling (and am highly skeptical about your claim that this is “what rationality is”), but I’m willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread.
You shouldn’t take me to have any kind of “theory that takes M = 0”; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else.
My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues.
Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we’re done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don’t know.
(For what it’s worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)
I suspect that those would be longer than should be posted deep in a tangential comment thread.
Yeah, probably. To be honest I’m still rather new to the rodeo here, so I’m not amazing at formalizing and communicating intuitions, which might just be a long way of saying that you shouldn’t listen to me :)
I’m sure it’s been hammered to death elsewhere, but my best prediction for which side I would fall on, if I had all the arguments laid out, is the hard-line CS-theoretical approach, as I often do. It’s probably not obvious why I expect there to be problems with every proposed difficulty for additive aggregation. I would probably, annoyingly often, fall back on the claim that any particular case doesn’t satisfy the criteria but that additive value still holds.
I don’t think it’d be a lengthy list of criteria, though. All you need is causal independence: the kind of independence that makes counterfactual (or probabilistic) worlds independent enough to be separable. You disvalue a situation where grandma dies with certainty the same as a situation where all four of your grandmas (they got all real busy after the legalization of gay marriage in their country) are each subjected to a 25% likelihood of death. You do this because you weight the possible worlds by their likelihood, and you sum the values. My intuition is that refusing to also sum the values in analogous non-probabilistic circumstances would cause inconsistencies down the line, but I’m not sure.
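(The probabilistic example above, checked numerically: under the stated assumptions the two situations have the same expected number of deaths, which is what valuing the possible worlds by their likelihood and summing amounts to.)

```python
# One grandma dying with certainty vs. four grandmas each facing an
# independent 25% chance of death: same expected number of deaths.
certain_case = 1 * 1.00
risky_case = 4 * 0.25
assert certain_case == risky_case == 1.0
print("Expected deaths:", certain_case, "vs", risky_case)
```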