failing to remember that an existential catastrophe would be roughly infinitely worse than just losing the paltry sum of seven billion people
That seems like it might be true for someone fanatically committed to an unbounded aggregative social welfare function combined with a lot of adjustments to deal with infinities, etc. Given any moral uncertainty, mixed motivations, etc. (with an aggregation rule that doesn’t automatically hand the decision to the internal component that names the biggest number), the claim doesn’t go through. Also, to most people it’s an annoying assertion of the supremacy of one’s nominal values (as sometimes verbally expressed, not revealed preference).
Given any moral uncertainty, mixed motivations, etc. (with an aggregation rule that doesn’t automatically hand the decision to the internal component that names the biggest number), the claim doesn’t go through.
This isn’t clear to me, especially given that Will only said “roughly” infinite.
An aggregation rule that says “follow the prescription of any moral hypothesis to which you assign at least 80% probability” might well make Will’s claim go through, and yet does not “automatically hand the decision to the internal component that names the biggest number” as I understand that phrase; after all, the hypothesis won out by being 80% probable and not by naming the biggest number. Some other hypothesis could have won out by naming a smaller number (than the numbers that turn up in discussions of astronomical waste), if it had seemed true.
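To make the contrast concrete, here is a minimal sketch in Python (the hypotheses, probabilities, and payoff numbers are invented purely for illustration, and this is not anyone’s endorsed rule) of the difference between straightforward expected-value aggregation over moral hypotheses, where whichever hypothesis names the biggest number effectively decides, and the 80%-threshold rule described above, under which a hypothesis wins by being probable rather than by naming a large number:

```python
# Illustrative sketch only: two ways of aggregating over moral hypotheses.
# All names and numbers below are assumptions made up for this example.

def expected_value_rule(hypotheses, actions):
    """Pick the action with the highest probability-weighted value.
    A hypothesis that names a big enough number dominates the choice."""
    return max(actions,
               key=lambda a: sum(h["p"] * h["value"](a) for h in hypotheses))

def threshold_rule(hypotheses, actions, threshold=0.8):
    """Follow the prescription of any hypothesis with probability >= threshold.
    (Since probabilities sum to 1, at most one hypothesis can clear 0.8.)
    The size of the numbers a hypothesis names is irrelevant to which one wins."""
    for h in hypotheses:
        if h["p"] >= threshold:
            return max(actions, key=h["value"])
    return None  # no hypothesis is confident enough; what to do then is left open here

# One hypothesis assigns astronomical value to x-risk reduction; the other,
# more probable here, assigns modest bounded values.
hypotheses = [
    {"p": 0.15, "value": lambda a: 1e50 if a == "reduce_xrisk" else 0},
    {"p": 0.85, "value": lambda a: 10 if a == "help_now" else 1},
]
actions = ["reduce_xrisk", "help_now"]

print(expected_value_rule(hypotheses, actions))  # -> "reduce_xrisk": the biggest number wins
print(threshold_rule(hypotheses, actions))       # -> "help_now": the 85%-probable hypothesis wins
```

On this sketch, the threshold rule can still deliver Will’s conclusion, but only if the hypothesis that clears the threshold itself treats extinction as roughly infinitely bad, not merely because some hypothesis somewhere names an enormous number.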
I don’t actually endorse that particular aggregation rule, but for me to be convinced that all plausible candidates avoid Will’s conclusion that the relevant value here is “roughly infinite” (or the much weaker conclusion that LW is irrationally scope-insensitive here) would require some further argument.
(Also, your comment mixes mentions of predictions about future policies, predictions about predictions about future policies, and future policies themselves, in a way that makes it nearly impossible to evaluate your explicit message (if someone wanted to do that instead of ignoring it because its foundations are weak). I point this out because if you want to imply that someone, or some coalition of parts of someone (some group), is “fanatically committed” to an apparently indefensible position, you should be extra careful to make sure your explicit argument is particularly strong, or at least coherent. You also might not want to imply that one is implicitly assuming obviously absurd things (e.g. “with an aggregation rule that doesn’t automatically hand the decision to the internal component that names the biggest number”), even if you insist on implying that they are implicitly assuming non-obviously absurd things.)
That seems like it might be true for someone fanatically committed to an unbounded aggregative social welfare function combined with a lot of adjustments to deal with infinities, etc.
One does not have to be “fanatically committed” to some ad hoc conjunction of abstract moral positions in order to justifiably make the antiprediction that humanity or its descendants might (that is, with high enough probability, given moral uncertainty, that it still dominates the calculation) have some use for all those shiny lights in the sky (and especially the black holes). It seems to me that your points about mixed motivations etc. count in favor of this antiprediction given many plausible aggregation rules. Sure, most parts of me/humanity might not have any ambitions or drives that require lots of resources to fulfill, but I know for certain that some parts of me/humanity at least nominally do, and at least partially act/think accordingly. If those parts end up being acknowledged in moral calculations, then a probable default outcome would be for those parts to take over (at least) the known universe while the other parts stay at home and enjoy themselves. For this not to happen would probably require (again, depending on the aggregation rule) that the other parts of humanity actively and significantly value not using those resources. Given moral uncertainty, I am working under the provisional assumption that some non-negligible fraction of the resources in the known universe is going to matter enough, to at least some parts of something, that guaranteeing future access to them should be a very non-negligible goal for one of the few godshatter-coalitions able to recognize their potential importance (i.e. its comparative advantage).
(Some of the other reasoning I didn’t mention involves a prediction that a singleton won’t be eternally confined to an aggregation rule that is blatantly stupid in a few of the many, many ways an aggregation rule can be blatantly stupid as I judge them (or at least won’t be so confined in a huge majority of possible futures). (E.g. CEV has the annoyingly vague requirement of coherence, among other things, which could easily be called blatantly stupid upon implementation.))
Also, to most people it’s an annoying assertion of the supremacy of one’s nominal values (as sometimes verbally expressed, not revealed preference).
It’s meant as a guess at a potentially oft-forgotten single-step implication of standard Less Wrong (epistemic and moral) beliefs, not as any kind of assertion of supremacy. I have seen enough of the subtleties, complexities, and depth of morality to know that we are much too confused for anyone (or any part of one) to be asserting such supremacy.