Here’s a simple argument that I find quite persuasive for why you should have linear returns to whatever your final source of utility is (e.g. human experience of a fulfilling life, which I’ll just call “happy humans”). Note that this is not an argument that you should have linear returns to resources (e.g. money). The argument goes like this:
You have some returns to happy humans (or whatever else you’re using as your final source of utility) in terms of how much utility you get from some number of happy humans existing.
In most cases, I think those returns are likely to be diminishing, but nevertheless monotonically increasing and differentiable. For example, maybe you have logarithmic returns to happy humans.
We happen to live in a massive multiverse. (Imo the Everett interpretation is settled science, and I don’t think you need to accept anything else to make this go through, but note that we’re only depending on the existence of any sort of big multiverse here—the one that the Everett interpretation gives you is just the only one that we know is guaranteed to actually exist.)
In a massive multiverse, the total number of happy humans is absolutely gigantic (let’s ignore infinite ethics problems, though, and assume it’s finite—though I think this argument still goes through in the infinite case, it just then depends on whatever infinite ethics framework you like).
Furthermore, the total number of happy humans is mostly insensitive to anything you can do, or anything happening locally within this universe, since this universe is only a tiny fraction of the overall multiverse. (Though you could get out of this by claiming that what you really care about is happy humans per universe, that’s a pretty strange thing to care about—it’s like caring about happy humans per acre.)
As a result, the effective returns to happy humans that you are exposed to within this universe reflect only the local behavior of your overall returns. (Note that this assumes “happy humans” are fungible, which I don’t actually believe—I care about the overall diversity of human experience throughout the multiverse. However, I don’t think that changes the bottom line conclusion, since, if anything, centralizing the happy humans rather than spreading them out seems like it would make it easier to ensure that their experiences are as diverse as possible.)
As anyone who has taken introductory calculus will know, the local behavior of any differentiable function is approximately linear (this is just the first-order Taylor approximation).
Since we assumed that your overall returns were differentiable and monotonically increasing, the local returns must be linear with a positive slope.
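To make those last two steps concrete, here is a minimal worked version of the argument (the symbols B and n are just for this sketch), taking logarithmic returns as the example:

```latex
% Write B for the (enormous) number of happy humans in the rest of the multiverse
% and n for the number you can affect locally, with n << B.
% First-order Taylor expansion of your returns U around B:
U(B + n) \approx U(B) + U'(B)\,n
% Example with logarithmic returns, U(N) = \log N:
\log(B + n) = \log B + \frac{n}{B} - \frac{n^2}{2B^2} + \cdots \approx \log B + \frac{n}{B}
% The only term you can influence is U'(B)\,n (here n/B): linear in n, with a positive
% slope wherever the returns are strictly increasing.
```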
You’re assuming that your utility function should have the general form of valuing each “source of utility” independently and then aggregating those values (such that when aggregating you no longer need the details of each “source” but just their values). But in The Moral Status of Independent Identical Copies I found this questionable (i.e., conflicting with other intuitions).
This is the fungibility objection I address above:
Note that this assumes “happy humans” are fungible, which I don’t actually believe—I care about the overall diversity of human experience throughout the multiverse. However, I don’t think that changes the bottom line conclusion, since, if anything, centralizing the happy humans rather than spreading them out seems like it would make it easier to ensure that their experiences are as diverse as possible.
Ah, I think I didn’t understand that parenthetical remark and skipped over it. Questions:
I thought your bottom line conclusion was “you should have linear returns to whatever your final source of utility is” and I’m not sure how “centralizing the happy humans rather than spreading them out seems like it would make it easier to ensure that their experiences are as diverse as possible” relates to that.
I’m not sure that the way my utility function deviates from fungibility is “I care about overall diversity of human experience throughout the multiverse”. What if it’s “I care about diversity of human experience in this Everett branch”? Then I could get a non-linear, diminishing-returns effect where, as humans colonize more stars or galaxies, each new human experience is more likely to duplicate an existing experience, or to be similar enough to one that its value has to be discounted.
The thing I was trying to say there is that I think the non-fungibility concern pushes in the direction of superlinear rather than sublinear local returns to “happy humans” per universe. (Since concentrating the “happy humans” likely makes it easier to ensure that they’re all different.)
I agree that this will depend on exactly in what way you think your final source of utility is non-fungible. I would argue that “diversity of human experience in this Everett branch” is a pretty silly thing to care about, though. I don’t see any reason why spatial distance should behave differently than being in separate Everett branches here.
I tried to explain my intuitions/uncertainty about this in The Moral Status of Independent Identical Copies (it was linked earlier in this thread).
I read it, and I think I broadly agree with it, but I don’t know why you think it’s a reason to treat physical distance differently to Everett branch distance, holding diversity constant. The only reason that you would want to treat them differently, I think, is if the Everett branch happy humans are very similar, whereas the physically separated happy humans are highly diverse. But, in that case, that’s an argument for superlinear local returns to happy humans, since it favors concentrating them so that it’s easier to make them as diverse as possible.
but I don’t know why you think it’s a reason to treat physical distance differently to Everett branch distance
I have a stronger intuition for “identical copy immortality” when the copies are separated spatially instead of across Everett branches (the latter also called quantum immortality). For example, if you told me there are 2 identical copies of Earth spread across the galaxy and 1 of them will instantly disintegrate, I would be much less sad than if you told me that you’ll flip a quantum coin and disintegrate Earth if it comes up heads.
I’m not sure if this is actually a correct intuition, but I’m also not sure that it’s not, so I’m not willing to make assumptions that contradict it.
Furthermore, the total number of happy humans is mostly insensitive to anything you can do, or anything happening locally within this universe, since this universe is only a tiny fraction of the overall multiverse.
Not sure about this. Even if I think I am only acting locally, my actions and decisions could have an effect on the larger multiverse. When I do something to increase happy humans in my own local universe, I am potentially deciding / acting for everyone in my multiverse neighborhood who is similar enough to me to make similar decisions for similar reasons.
I agree that this is the main way that this argument could fail. Still, I think the multiverse is too large and the correlation not strong enough across very different versions of the Earth for this objection to substantially change the bottom line.
(Though you could get out of this by claiming that what you really care about is happy humans per universe, that’s a pretty strange thing to care about—it’s like caring about happy humans per acre.)
My sense is that many solutions to infinite ethics look a bit like this. For example, if you use UDASSA, then a single human who is alone in a big universe will have a shorter description length than a single human who is surrounded by many other humans in a big universe. That’s because, for the former, you can use pointers that specify the universe and then describe sufficient criteria to recognise a human, but for the latter, you need to nail down an exact physical location or some other exact criterion that distinguishes a specific human from every other human.
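To put rough numbers on that intuition (purely a sketch, assuming the only extra cost of singling out one human among M similar ones is an index of roughly log2(M) bits):

```latex
% Schematic description lengths under a UDASSA-style prior:
K(\text{lone human}) \approx K(\text{universe}) + K(\text{human-recognizer})
K(\text{one of } M \text{ humans}) \approx K(\text{universe}) + K(\text{human-recognizer}) + \log_2 M
% With measure proportional to 2^{-K}, each of the M humans gets roughly 1/M of the
% lone human's weight, so under this crude model the total weight over all M humans
% is about the same as the lone human's.
```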
I agree that UDASSA might introduce a small effect like this, but my guess is that the overall effect isn’t enough to substantially change the bottom line. Fundamentally, being separated in space vs. being separated across different branches of the wavefunction seem pretty similar in terms of specification difficulty.
being separated in space vs. being separated across different branches of the wavefunction seem pretty similar in terms of specification difficulty
Maybe? I don’t really know how to reason about this.
If that’s true, that still only means that you should be linear for gambles that give different results in different quantum branches. Cf. logical vs. physical risk aversion.
Some objection like that might work more generally, since some logical facts will mean that there are far fewer humans in the universe-at-large, meaning that you’re at a different point in the risk-returns curve. So when comparing different logical ways the universe could be, you should not always care most about the worlds where you can affect more sentient beings. If you have diminishing marginal returns, you need to be thinking about some more complicated function that is about whether you have a comparative advantage at affecting more sentient beings in worlds where there are, overall, fewer sentient beings (as measured by some measure that can handle infinities). This matters for things like whether you should bet on the universe being large.
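A crude way to write down that “more complicated function” (just a sketch, setting aside the infinities): weight what your action changes in each logically possible world by the marginal returns at that world’s background population, not by the raw number of beings you could affect there.

```latex
% p(w): credence in logical possibility w;  B_w: total sentient beings in world w;
% \Delta n_w: additional happy beings your action creates in w.
\mathbb{E}[\Delta U] \approx \sum_w p(w)\, U'(B_w)\, \Delta n_w
% With diminishing returns, U'(B_w) is larger when B_w is smaller, so a world where you
% can affect fewer beings can still dominate if its overall population is much smaller.
```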
Reply to the objections before you say that.