Thank you! This is helpful. I’ll start with the bit where I still disagree and/or am still confused, which is the future people. You write:
> The reductio for caring more about future peoples’ agency is in cases where you can just choose their preferences for them. If the main thing you care about is their ability to fulfil their preferences, then you can just make sure that only people with easily-satisfied preferences (like: the preference that grass is green) come into existence.
Sure. But also, if the main thing you care about is their ability to be happy, you can just make sure that only people whom green grass sends to the heights of ecstasy come into existence? This reasoning seems like it proves too much.
I’d guess that your reply is going to involve your kludgier, non-wireheading-friendly idea of “welfare”. And that’s fair enough in terms of handling this kind of dilemma in the real world; but running with a definition of “welfare” that smuggles in that we also care about agency a bit… seems, to me, like it muddles the original point of wanting to cleanly separate the three “primary colours” of morality.
That aside:
Re: animals, I think most of our disagreement just dissolves into semantics. (Yay!) IMO, keeping animals away from situations which they don’t realize would kill them just falls under the umbrella of using our superior knowledge/technology to help them fulfill their own extrapolated preference to not-get-run-over-by-a-car. In your map this is probably taken care of by your including some component of agency in “welfare”, so it all works out.
Re: caring about paperclip maximizers: intuitively I care about creatures’ agencies iff they’re conscious/sentient, and I care more if they have feelings and emotions I can grok. So, I care a little about the paperclip-maximizers getting to maximize paperclips to their heart’s content if I am assured that they are conscious; and I care a bit more if I am assured that they feel what I would recognise as joy and sadness based on the current number of paperclips. I care not at all otherwise.