and, i’d guess that one big universe is more than twice as Fun as two small universes, so even if there were no transaction costs it wouldn’t be worth it. (humans can have more fun when there’s two people in the same room, than one person each in two separate rooms.)
This sounds astronomically wrong to me. I think that my personal utility function gets close to saturation with a tiny fraction of the resources in our universe-shard. Two people in one room is better than two people in separate rooms, yes. But two rooms with a trillion people each are virtually the same as one room with two trillion. The returns on interactions with additional people fall off exponentially past the Dunbar number.
In other words, I would gladly take a 100% probability of utopia with (say) 100 million people that include me and my loved ones over a 99% chance of human extinction and a 1% chance of anything at all. (In terms of raw utility calculus, i.e. ignoring trades with other factual or counterfactual minds.)
But two rooms with a trillion people each are virtually the same as one room with two trillion. The returns on interactions with additional people fall off exponentially past the Dunbar number.
You’re conflating "would I enjoy interacting with X?" with "is it good for X to exist?". That’s almost understandable, given that Nate used the "two people can have more fun in the same room" example to illustrate why utility isn’t linear in population. But this comment has an IMO bizarre amount of agree-karma (26 net agreement, with 11 votes), which makes me wonder whether people are missing that it’s leaning on a premise like "stuff only matters if it adds to my own life and experiences".
Replacing the probabilistic hypothetical with a deterministic one: the reason I wouldn’t advocate killing a Graham’s number of humans in order to save 100 million people (myself and my loved ones included) is that my utility function isn’t saturated when my life gets saturated. Analogously, I still care about humans living on the other side of Earth even though I’ve never met them, and never expect to meet them. I value good experiences happening, even if they don’t affect me in any way (and even if I’ve never met the person who they’re happening to).
First, you can consider preferences that are impartial but sublinear in the number of people. So, you can disagree with Nate’s room analogy without the premise “stuff only matters if it adds to my own life and experiences”.
Second, my preferences are indeed partial. But even that doesn’t mean “stuff only matters if it adds to my own life and experiences”. I do think that stuff only matters (to me) if it’s in some sense causally connected to my life and experiences. More details here.
Third, I don’t know what you mean by "good". The questions that I understand are:
Do I want X as an end in itself?
Would I choose X in order for someone to (causally or acausally) reciprocate by choosing Y which I want as an end in itself?
Do I support a system of social norms that incentivizes X?
My example with the 100 million referred to question 1. Obviously, in certain scenarios my actual choice would be the opposite on game-theoretic cooperation grounds (I would make a disproportionate sacrifice to save “far away” people in order for them to save me and/or my loved ones in the counterfactual in which they are making the choice).
Also, a reminder that unbounded utility functions are incoherent, because their expected values under Solomonoff-like priors diverge (a.k.a. Pascal’s mugging).
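A minimal sketch of that divergence (illustrative only; the specific hypothesis family and constants below are assumptions, not something stated in this thread): a Solomonoff-like prior assigns each hypothesis $h$ a weight of at least $2^{-K(h)}$, where $K(h)$ is its description length. If utility is unbounded (and, for simplicity, non-negative), one can write down hypotheses $h_n$ of the form "a mugger delivers $2^{2^n}$ utils", whose description length grows only like $\log n$, so that $2^{-K(h_n)} \ge c/n^2$ for some constant $c > 0$ and all sufficiently large $n \ge N$. Then

$$\mathbb{E}[U] \;\ge\; \sum_{n \ge N} 2^{-K(h_n)}\, U(h_n) \;\ge\; c \sum_{n \ge N} \frac{2^{2^{n}}}{n^{2}} \;=\; \infty,$$

so expected utility is divergent for every policy. A bounded utility function avoids this, since the prior weights sum to at most 1.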
My example with the 100 million referred to question 1.
Yeah, I’m also talking about question 1.
I do think that stuff only matters (to me) if it’s in some sense causally connected to my life and experiences.
Seems obviously false as a description of my values (and, I’d guess, just about every human’s).
Consider the simple example of a universe that consists of two planets: mine, and another person’s. We don’t have spaceships, so we can’t interact. I am not therefore indifferent to whether the other person is being horribly tortured for thousands of years.
If I spontaneously consider the hypothetical, I will very strongly prefer that my neighbor not be tortured. If we add the claims that I can’t affect it and can’t ever know about it, I don’t suddenly go “Oh, never mind, fuck that guy”. Stuff that happens to other people is real, even if I don’t interact with it.
I’m curious what evidence you see that this is false as a description of the values of just about every human, given that:
I, a human [citation needed], tell you that this seems to be a description of my values.
Almost every culture that ever existed had norms that prioritized helping family, friends and neighbors over helping random strangers, not to mention strangers that you never met.
Most people don’t do much to help random strangers they never met, with the notable exception of effective altruists, but even most effective altruists only go so far[1].
Evolutionary psychology can fairly easily explain helping your family and tribe, but it seems hard to explain impartial altruism towards all humans.
[1] The common wisdom in EA is that you shouldn’t donate 90% of your salary or deny yourself every luxury, because if you live a fun life you will be more effective at helping others. However, this strikes me as suspiciously convenient and self-serving.
P.S.
I think that in your example, if a person is given a button that can save a person on a different planet from being tortured, they will have a direct incentive to press the button: the button is a causal connection in itself, and consciously reasoning about the person on the other planet is a causal[1] connection in the other direction. That said, a person still has a limited budget of such causal connections (you cannot reason about arbitrarily many people, with a fixed non-zero amount of attention paid to the individual details of each person, within a fixed time-frame). Therefore, while the incentive is positive, its magnitude saturates as the number of saved people grows, such that e.g. a button that saves a million people is virtually the same as a button that saves a billion people (a toy illustration of this saturation is sketched below).
[1] I’m modeling this via Turing RL, where conscious reasoning can be regarded as a form of observation. Of course, this means we are talking about "logical" rather than "physical" causality.
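A toy saturating form of that incentive (purely illustrative; the exponential shape and the "attention budget" parameter $N_0$ are assumptions, not part of the argument above): if $N$ is the number of people the button saves and $N_0$ is on the order of how many people one can actually attend to individually, one might write

$$U(N) \;=\; U_{\max}\left(1 - e^{-N/N_0}\right),$$

which is strictly increasing in $N$ (pressing the button is always better), but for $N \gg N_0$ it is already within a negligible distance of $U_{\max}$, so that $U(10^6) \approx U(10^9) \approx U_{\max}$.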