First, you can consider preferences that are impartial but sublinear in the number of people. So, you can disagree with Nate’s room analogy without accepting the premise “stuff only matters if it adds to my own life and experiences”.
Second, my preferences are indeed partial. But even that doesn’t mean “stuff only matters if it adds to my own life and experiences”. I do think that stuff only matters (to me) if it’s in some sense causally connected to my life and experiences. More details here.
Third, I don’t know what you mean by “good”. The questions that I understand are:
1. Do I want X as an end in itself?
2. Would I choose X in order for someone to (causally or acausally) reciprocate by choosing Y, which I want as an end in itself?
3. Do I support a system of social norms that incentivizes X?
My example with the 100 million referred to question 1. Obviously, in certain scenarios my actual choice would be the opposite on game-theoretic cooperation grounds (I would make a disproportionate sacrifice to save “far away” people in order for them to save me and/or my loved ones in the counterfactual in which they are making the choice).
Also, a reminder that unbounded utility functions are incoherent because their expected values under Solomonoff-like priors diverge (a.k.a. Pascal’s mugging).
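To spell out why the expected values diverge (a back-of-the-envelope sketch; the specific $2^{-n}$ prior weights and $3^n$ payoffs are illustrative assumptions, not something claimed above): under a Solomonoff-like prior, a hypothesis of description length roughly $n$ receives weight on the order of $2^{-n}$, and if the utility function is unbounded, then for every $n$ there is some hypothesis $h_n$ of length about $n$ promising utility of at least $3^n$ (short programs can describe astronomically large payoffs). The contribution of these hypotheses alone is then at least

$$\sum_{n} 2^{-n} \cdot 3^{n} \;=\; \sum_{n} \left(\tfrac{3}{2}\right)^{n} \;=\; \infty,$$

so the expectation fails to converge; this is the Pascal’s-mugging failure mode, where ever-less-probable hypotheses promising ever-larger payoffs dominate the calculation.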
My example with the 100 million referred to question 1.
Yeah, I’m also talking about question 1.
I do think that stuff only matters (to me) if it’s in some sense causally connected to my life and experiences.
Seems obviously false as a description of my values (and, I’d guess, just about every human’s).
Consider the simple example of a universe that consists of two planets: mine, and another person’s. We don’t have spaceships, so we can’t interact. I am not therefore indifferent to whether the other person is being horribly tortured for thousands of years.
If I spontaneously consider the hypothetical, I will very strongly prefer that my neighbor not be tortured. If we add the claims that I can’t affect it and can’t ever know about it, I don’t suddenly go “Oh, never mind, fuck that guy”. Stuff that happens to other people is real, even if I don’t interact with it.
I’m curious what evidence you see that this is false as a description of the values of just about every human, given that:
I, a human [citation needed], tell you that this seems to be a description of my values.
Almost every culture that ever existed had norms that prioritized helping family, friends and neighbors over helping random strangers, not to mention strangers that you never met.
Most people don’t do much to help random strangers they have never met, with the notable exception of effective altruists, but even most effective altruists only go so far[1].
Evolutionary psychology can fairly easily explain helping your family and tribe, but it seems hard to explain impartial altruism towards all humans.
The common wisdom in EA is that you shouldn’t donate 90% of your salary or deny yourself every luxury, because if you live a fun life you will be more effective at helping others. However, this strikes me as suspiciously convenient and self-serving.
P.S.
I think that in your example, if a person is given a button that can save a person on a different planet from being tortured, they will have a direct incentive to press the button, because the button is a causal connection in itself, and consciously reasoning about the person on the other planet is a causal[1] connection in the other direction. That said, a person still has a limited budget of such causal connections (you cannot reason about a group of arbitrarily many people, with a fixed non-zero amount of attention paid to the individual details of each person, in a fixed time frame). Therefore, while the incentive is positive, its magnitude saturates as the number of saved people grows, such that, e.g., a button that saves a million people is virtually the same as a button that saves a billion people.
I’m modeling this via Turing RL, where conscious reasoning can be regarded as a form of observation. Of course, this means we are talking about “logical” rather than “physical” causality.
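To make the saturation claim concrete, here is a toy numerical sketch (entirely my own illustration of the idea above; the functional form, the `attention_budget` parameter, and the name `saturating_value` are hypothetical and not part of the Turing RL formalism):

```python
import math

def saturating_value(n_people: int, attention_budget: float = 1000.0, u_max: float = 1.0) -> float:
    """Toy bounded utility for saving n_people, given a fixed budget of
    causal/attention connections: grows roughly linearly for small n,
    then saturates at u_max once n far exceeds the budget."""
    return u_max * (1.0 - math.exp(-n_people / attention_budget))

for n in [1, 10, 1_000, 1_000_000, 1_000_000_000]:
    print(f"{n:>13,} people saved -> value {saturating_value(n):.6f}")

# The million-person and billion-person buttons print essentially the same
# value (~1.000000), while differences among small numbers of people remain
# visible -- the saturation described in the comment above.
```

Any bounded, monotonically increasing function would serve equally well here; the exponential form is chosen only for simplicity.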