I assume that you do think it makes sense to care about the welfare of animals and future people, and you’re just questioning why we shouldn’t care more about their agency?
The reductio for caring more about animals’ agency is when they’re in environments where they’ll very obviously make bad decisions—e.g. there are lots of things which are poisonous and they don’t know it; there are lots of cars that would kill them, but they keep running onto the road anyway; etc. (The more general principle is that the preferences of dumb agents aren’t necessarily well-defined from the perspective of smart agents, who can elicit very different preferences by changing the inputs slightly.)
The reductio for caring more about future people’s agency is in cases where you can just choose their preferences for them. If the main thing you care about is their ability to fulfil their preferences, then you can just make sure that only people with easily-satisfied preferences (like: the preference that grass is green) come into existence.
The other issue I have with focusing primarily on agency is that, as we think about creatures which are increasingly different from humans, my intuitions about why I care about their agency start to fade away. If I think about a universe full of paperclip maximizers with very high agency… I’m just not feeling it. Whereas at least if it’s a universe full of very happy paperclip maximizers, that feels more compelling.
(I do care somewhat about future people’s agency; and I personally define welfare in a way which includes some component of agency, such that wireheading isn’t maximum-welfare. But I don’t think it should be the main thing.)
(Also, as I wrote this comment, I realized that the phrasing in the original sentence you quoted is infelicitous, and so will edit it now.)
Thank you! This is helpful. I’ll start with the bit where I still disagree and/or am still confused, which is the future people. You write:
The reductio for caring more about future people’s agency is in cases where you can just choose their preferences for them. If the main thing you care about is their ability to fulfil their preferences, then you can just make sure that only people with easily-satisfied preferences (like: the preference that grass is green) come into existence.
Sure. But also, if the main thing you care about is their ability to be happy, you can just make sure that only people whom green grass sends to the heights of ecstasy come into existence? This reasoning seems like it proves too much.
I’d guess that your reply is going to involve your kludgier, non-wireheading-friendly idea of “welfare”. And that’s fair enough in terms of handling this kind of dilemma in the real world; but running with a definition of “welfare” that smuggles in the fact that we also care a bit about agency… seems, to me, like it muddles the original point of wanting to cleanly separate the three “primary colours” of morality.
That aside:
Re: animals, I think most of our disagreement just dissolves into semantics. (Yay!) IMO, keeping animals away from situations which they don’t realize would kill them just falls under the umbrella of using our superior knowledge/technology to help them fulfill their own extrapolated preference to not-get-run-over-by-a-car. In your map this is probably taken care of by your including some component of agency in “welfare”, so it all works out.
Re: caring about paperclip maximizers: intuitively I care about creatures’ agency iff they’re conscious/sentient, and I care more if they have feelings and emotions I can grok. So, I care a little about the paperclip-maximizers getting to maximize paperclips to their heart’s content if I am assured that they are conscious; and I care a bit more if I am assured that they feel what I would recognise as joy and sadness based on the current number of paperclips. I care not at all otherwise.
If I think about a universe full of paperclip maximizers with very high agency… I’m just not feeling it. Whereas at least if it’s a universe full of very happy paperclip maximizers, that feels more compelling.
This is really the old utilitarian argument that we value things (like agency) in addition to utility because they are instrumentally useful (which agency is). But if agency had never given us utility, we would never have valued it.