Agreed. I should have included a disclaimer that I was talking about preference utilitarianism.
I am not sure what most people actually think.
My guess is that most philosophers who identify with utilitarianism mean welfare utilitarianism.
I would guess that most readers of LessWrong would not identify with utilitarianism, but would say they identify more with preference utilitarianism than with welfare utilitarianism.
My guess is that a larger proportion of EAs (relative to LW) identify with utilitarianism, and that they identify with the welfare version (relative to the preference version) more than LW does, but I have a lot of uncertainty about how much. (There is probably some survey data that could answer this question. I haven’t checked.)
Also, I am not sure that “controlling for game-theoretic instrumental reasons” is actually a move that is well defined/makes sense.
I agree with your guesses.

I am not sure that “controlling for game-theoretic instrumental reasons” is actually a move that is well defined/makes sense.
I don’t have a crisp definition of this, but I just mean that, e.g., we compare the following two worlds:
(1) 99.99% of agents are non-sentient paperclippers, and each agent has equal (bargaining) power.
(2) 99.99% of agents are non-sentient paperclippers, and the paperclippers are all confined to some box.
According to plenty of intuitive-to-me value systems, you only (maybe) have reason to increase paperclips in (1), not (2). But if the paperclippers felt really sad about the world not having more paperclips, I’d care—to an extent that depends on the details of the situation—about increasing paperclips even in (2).