I don’t see the statement of the negation of LUH as a compelling argument for LUH.
LUH+ as you described it is somewhat underspecified, but it’s an idea that might be worth looking into. That said, I think it’s still not viable in its current form because of the other counterexamples.
maybe you value experiences for their own sake, but once an experience is instantiated somewhere in the universe you don’t care about instantiating it again.
I don’t think the marginal value of an experience occurring should be 0 if it also occurs elsewhere. That assumption (together with the many-worlds interpretation) would suggest that quantum suicide should be perfectly fine, but even people who are extremely confident in the many-worlds interpretation typically feel that quantum suicide is about as bad an idea as regular suicide.
Furthermore, I don’t think it makes sense to classify pairs of experiences as “the same” or “not the same”. There may be two conscious experiences that are so close to identical that they may as well be identical as far as your preferences over how much each conscious experience occurs are concerned. But there may be a long chain of conscious experiences, each differing imperceptibly from the previous one, such that the experiences at the two ends of the chain are radically different. I don’t think it makes sense to have a discontinuous jump in the marginal utility of some conscious experience occurring as it crosses some artificial boundary between those that are “the same” and those that are “different” from some other conscious experience. I do think it makes sense to discount the marginal value of conscious experiences occurring based on their similarity to other conscious experiences that are instantiated elsewhere.
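One crude way to write down the kind of discounting I have in mind (just a sketch; the similarity measure s, the constant λ, and the exponential shape are placeholders, not a proposal):

$$U \;=\; \sum_{i} v(e_i)\,\exp\!\Bigl(-\lambda \sum_{j<i} s(e_i,e_j)\Bigr), \qquad s(e_i,e_j)\in[0,1].$$

Here each experience’s marginal value shrinks smoothly with how much it resembles the experiences already instantiated: an exact duplicate (s = 1) gets a rapidly shrinking but still positive weight, an unrelated experience (s = 0) is undiscounted, and there is no boundary at which anything jumps.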
I think usually when the question is whether one thing affects another, the burden of proof is on the person who says it does affect the other. Anyway, the real point is that people do claim to value experiences for their own sake as well as for their relations to other experiences, and translated into utility terms this seems to imply LUH+.
I’m not sure what you mean by “the other counterexamples”: I think your first example is just demonstrating that people’s System 1s mostly replace the numbers 10^100 and 10^102 by 100 and 102 respectively, not indicating any deep fact about morality. (To be clear, I mean that people’s sense of how “different” the 10^100 and 10^102 outcomes are seems to be based on the closeness of 100 and 102, not that people don’t realize that one is 100 times bigger.)
LUH+ is certainly underspecified as a utility function, but as a hypothesis about a utility function it seems to be reasonably well specified?
Introducing MWI seems to make the whole discussion a lot more complicated: to get a bounded utility function you need the marginal utility of an experience to decrease with respect to how many similar experiences there are, but MWI says there are always a lot of similar experiences, so marginal utility is always small? And if you are dealing with quantum randomness then LUH just says you want to maximize the total quantum measure of experiences, which is a monotonicity hypothesis rather than a linearity hypothesis, so it’s harder to see how you can deny it. Personally I don’t think that MWI is relevant to decision theory: I think I only care about what happens in this quantum branch.
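To spell out the step I’m gesturing at (my own notation: v(e) is the value LUH assigns to one occurrence of experience e, n_i(e) is the number of times e occurs in branch i, and p_i is the Born weight of that branch, treated as its probability):

$$\mathbb{E}[U] \;=\; \sum_i p_i \sum_e n_i(e)\,v(e) \;=\; \sum_e v(e)\Bigl(\sum_i p_i\,n_i(e)\Bigr),$$

and the inner sum is just the total quantum measure of experience e. So when all the uncertainty is quantum, expected linear utility only tells you to prefer more value-weighted measure of experiences to less, which is why it looks to me like a monotonicity hypothesis rather than a linearity hypothesis.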
I agree with you about the problems that arise if you want a utility function to depend on whether experiences are “the same”.
I think the counterexamples I gave are clear proof that LUH is false, but as long as we’re playing burden of proof tennis, I disagree that the burden of proof should lie on those who think LUH is false. Human value is complicated, and shouldn’t be assumed by default to satisfy whatever nice properties you think up. A random function from states of the universe to real numbers will not satisfy LUH, and while human preferences are far from random, if you claim that they satisfy any particular strong structural constraint, it’s on you to explain why. I also disagree that “valuing experiences for their own sake” implies LUH; that somewhat vague expression still sounds compatible with the marginal value of experiences decreasing to zero as the number of them that have already occurred increases.
Preferences and bias are deeply intertwined in humans, and there’s no objective way to determine whether an expressed preference is due primarily to one or the other. That said, at some point, if an expression of preference is sufficiently strongly held, and the arguments that it is irrational are sufficiently weak, it gets hard to deny that preference has something to do with it, even if similar thought patterns can be shown to stem from bias. This is where I’m at with 10^100 lives versus a gamble on 10^102. I’m aware scope insensitivity is a bias that can be shown to be irrational in most contexts, and in those contexts, I tend to be receptive to scope sensitivity arguments, but I just cannot accept that I “really should” take a gamble that has a 90% chance of destroying everything, just to try to increase population 100-fold after already winning pretty decisively. Do you see this differently? In any case, that example was intended as an analog of Pascal’s mugging with more normal probabilities and ratios between the stakes, and virtually no one actually thinks it’s rational to pay Pascal’s mugger.
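Just to make the arithmetic explicit (using the 90%/10% split as stated, and treating utility as linear in the number of lives, which is exactly the assumption in dispute):

$$\mathbb{E}[\text{gamble}] \;=\; 0.9 \times 0 + 0.1 \times 10^{102} \;=\; 10^{101} \;=\; 10 \times 10^{100},$$

so LUH values the gamble at ten times the sure outcome, and my claim is that I still shouldn’t take it.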
And if you are dealing with quantum randomness then LUH just says you want to maximize the total quantum measure of experiences, which is a monotonicity hypothesis rather than a linearity hypothesis, so it’s harder to see how you can deny it.
What are you trying to say here? If you only pay attention to ordinal preferences among sure outcomes, then the restriction of LUH to this context is a monotonicity hypothesis, which is a lot more plausible. But you shouldn’t restrict your attention only to sure outcomes like that, since pretty much every choice you make involves uncertainty. Perhaps you are under the misconception that epistemic uncertainty and the Born rule are the same thing?
this quantum branch
That doesn’t actually mean anything. You are instantiated in many Everett branches, and those branches will each in turn split into many further branches.
Okay. To be fair I have two conflicting intuitions about this. One is that if you upload someone and manage to give him a perfect experience, then filling the universe with computronium in order to run that program again and again with the same input isn’t particularly valuable; in fact I want to say it’s not any more valuable than just having the experience occur once.
The other intuition is that in “normal” situations, people are just different from each other. And if I want to evaluate how good someone’s experience is, it seems condescending to say that it’s less important because someone else already had a similar experience. How similar can such an experience be, in the real world? I mean they are different people.
There’s also the fact that the person having the experience likely doesn’t care about whether it’s happened before. A pig on a factory farm doesn’t care that its experience is basically the same as the experience of any other pig on the farm, it just wants to stop suffering. On the other hand it seems like this argument could apply to the upload as well, and I’m not sure how to resolve that.
Regarding 10^100 vs 10^102, I recognize that there are a lot of ways in which having such an advanced civilization could be counted as “winning”. For example, there’s a good chance we’ve solved Hilbert’s sixth problem by then, which in my book is a pretty nice achievement. And of course you can only do it once. But does it, or any similar metric that depends on the boolean existence of civilization, really compare to the 10^100 lives that are at stake here? It seems like the answer is no, so tentatively I would take the gamble, though I could imagine being convinced out of it. Of course, I’m aware I’m probably in a minority here.
It seems like people’s response to Pascal’s Mugging is partially dependent on framing; e.g. the “astronomical waste” argument seems to get taken at least slightly more seriously. I think there is also a non-utilitarian flinching at the idea of a decision process that could be taken advantage of so easily—I think I agree with the flinching, but recognize that it’s not a utilitarian instinct.
I do realize that epistemic uncertainty isn’t the same as the Born rule; that’s why I wrote “if you are dealing with quantum randomness”—i.e. if you are dealing with a situation where all of your epistemic uncertainty is caused by quantum randomness (or in MWI language, where all of your epistemic uncertainty is indexical uncertainty). Anyway, it sounds like you agree with me that under this hypothesis MWI seems to imply LUH, but you think that the hypothesis isn’t satisfied very often. Nevertheless, it’s interesting that whether randomness is quantum or not seems to be having consequences for our decision theory. Does it mean that we want more of our uncertainty to be quantum, or less? Anyway, it’s hard for me to take these questions too seriously since I don’t think MWI has decision-theoretic consequences, but I thought I would at least raise them.
Almost every ordinary use of language presumes that it makes sense to talk about the Everett branch that one is currently in, and about the branch that one will be in in the future. Of course these are not perfectly well-defined concepts, but since when have we restricted our language to well-defined concepts?
Okay. To be fair I have two conflicting intuitions about this. One is that if you upload someone and manage to give him a perfect experience, then filling the universe with computronium in order to run that program again and again with the same input isn’t particularly valuable; in fact I want to say it’s not any more valuable than just having the experience occur once.
The other intuition is that in “normal” situations, people are just different from each other. And if I want to evaluate how good someone’s experience is, it seems condescending to say that it’s less important because someone else already had a similar experience. How similar can such an experience be, in the real world? I mean they are different people.
My intuition here is that the more similar the experiences are to each other, the faster their marginal utility diminishes.
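A toy numerical version of that intuition (the functional form and the parameters are entirely made up; the point is only the shape of the dependence on similarity):

import math

def total_value(n, s, v=1.0, lam=2.0):
    """Toy model: the k-th occurrence of an experience contributes
    v * exp(-lam * s * (k - 1)), so a higher similarity s makes the
    marginal value diminish faster (s, lam, and the exponential shape
    are all invented for illustration)."""
    return sum(v * math.exp(-lam * s * (k - 1)) for k in range(1, n + 1))

print(total_value(10, s=1.0))  # ~1.16: ten identical copies are barely better than one
print(total_value(10, s=0.1))  # ~4.77: similar-but-distinct experiences get partial credit
print(total_value(10, s=0.0))  # 10.0: fully distinct experiences add up linearly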
There’s also the fact that the person having the experience likely doesn’t care about whether it’s happened before.
That’s not clear. As I was saying, an agent having an experience has no way of referring to one instantiation of itself separately from other identical instantiations of the agent having the same experience, so presumably the agent cares about all instantiations of itself in the same way, and I don’t see why that way must be linear.
Regarding 10^100 vs 10^102, I recognize that there are a lot of ways in which having such an advanced civilization could be counted as “winning”. For example, there’s a good chance we’ve solved Hilbert’s sixth problem by then, which in my book is a pretty nice achievement. And of course you can only do it once. But does it, or any similar metric that depends on the boolean existence of civilization, really compare to the 10^100 lives that are at stake here? It seems like the answer is no, so tentatively I would take the gamble, though I could imagine being convinced out of it.
To be clear, by “winning”, I was referring to the 10^100 flourishing humans being brought into existence, not glorious intellectual achievements that would be made by this civilization. Those are also nice, but I agree that they are insignificant in comparison.
Anyway, it sounds like you agree with me that under this hypothesis MWI seems to imply LUH, but you think that the hypothesis isn’t satisfied very often.
I didn’t totally agree; I said it was more plausible. Someone could care how their copies are distributed among Everett branches.
Nevertheless, it’s interesting that whether randomness is quantum or not seems to be having consequences for our decision theory. Does it mean that we want more of our uncertainty to be quantum, or less?
Yes, I am interested in this. I think the answer to your latter question probably depends on what the uncertainty is about, but I’ll have to think about how it depends on that.
Hmm. I’m not sure that reference works the way you say it does. If an upload points at itself and the experience of pointing is copied, it seems fair to say that you have a bunch of individuals pointing at themselves, not all pointing at each other. Not sure why other forms of reference should be any different. Though if it does work the way you say, maybe it would explain why uploads seem to be different from pigs… unless you think that the pigs can’t refer to themselves except as a group either.