I don’t understand how that answers my question, or whether it was intended to.
I mean, OK, let’s say the genuinely self-aware systems are real people. Then we can rephrase my question as:
Like, if we had a highly reliable test for real personhood, but it turned out that interest groups could manufacture large numbers of real people that would reliably vote and/or fight for their side of political questions, would that be better? Why?
Conversely, if we can’t reliably test for real personhood, but we don’t have a reliable way to manufacture apparently real people that vote or fight a particular way, would that be better? Why?
But I still don’t know your answer.
I also disagree that matters of ethics are therefore matters of taste.
We have votes because we want to maximize utility for the voters. Allowing easily manufactured people to vote creates incentives to manufacture people.
So the answer to this depends on your version of utilitarianism. If you aggregate utility in such a way that adding more people increases total utility without bound, then you should do whatever you can to encourage the creation of more people, regardless of whether their votes harm existing people; so it is good to create incentives for their creation, and you should let them vote. (You also get the Repugnant Conclusion.) If instead you aggregate utility in some way that produces diminishing returns and avoids the Repugnant Conclusion, then at some point creating more new people can be a net negative. If so, you’d be better off precommitting not to let them vote: denying them the vote removes the incentive to manufacture them, so fewer get created, which increases utility.
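For concreteness, here is one way the two regimes could be formalized. This is a toy sketch of my own; the particular forms below (straight total aggregation versus a critical-level variant) are illustrative assumptions, not anything the argument depends on.

```latex
% Toy formalization of the two aggregation regimes (illustrative assumptions only).

% Regime 1: total aggregation. Adding anyone with positive welfare always helps.
\[
  W_{\mathrm{total}} = \sum_{i=1}^{n} u_i
\]
% For N people each at high welfare \bar{u}, any M > N\bar{u}/\epsilon people
% at tiny welfare \epsilon > 0 gives M\epsilon > N\bar{u}: the Repugnant Conclusion.

% Regime 2: a critical-level variant. Each added life must clear a threshold c > 0.
\[
  W_{\mathrm{crit}} = \sum_{i=1}^{n} \bigl(u_i - c\bigr)
\]
% A new person with welfare below c lowers W_{\mathrm{crit}}, so past some point
% creating more people (and hence incentives to create them) is a net loss.
```

The critical-level form is just one convenient way to get diminishing returns; average utilitarianism or any bounded aggregation would do the same work in the argument above.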
Note: Most people, insofar as they can be described as utilitarian at all, will fall into the second category (with the precommitment enforced by their inherent inability to care much for people whom they cannot see as individuals).
This also works when you substitute “allowing unlimited immigration” for “creating unlimited numbers of people”. Your choice of how to aggregate utility also affects whether it is good to trade off utility among already existing people just like it affects whether it is good to create new people.
Yes, agreed with all this.

And yes, like most people, I don’t have a coherent understanding of how to aggregate intersubjective utility, but I certainly don’t aggregate it in ways that cause me to embrace the Repugnant Conclusion. (By contrast, on consideration I do seem to embrace Utility Monsters, distasteful as the prospect feels on its face.)
Your choice of how to aggregate utility also affects whether it is good to trade off utility among already existing people just like it affects whether it is good to create new people.
Well, not “just like.” That is, I might have a mechanism for aggregating utility that treats N existing people in other countries differently from N people who don’t exist, and makes different tradeoffs for the two cases. But, yes, those are both examples of tradeoffs which a utility-aggregating mechanism affects.