I take it that you don’t think that political correctness (of the sort needed to get popular approval of a project now) requires taking into account the preferences of future people.
Yes. Also, giving space in the aggregated utility function to potentially large numbers of future people with unknown preferences could be dangerous to us, whereas giving space to the mere 7-billion-ish other people now alive, whose preferences are human and thus not too radically different from ours, costs us fairly little.
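As a toy illustration of that dilution worry (everything below is a hypothetical sketch, not a model either of us has proposed): if the aggregation gives equal weight to many agents whose preferences are uncorrelated with ours, the aggregate optimum is increasingly driven by their noise rather than by what we want.

```python
# Toy sketch, all numbers hypothetical: each agent gets equal weight in
# the aggregation, and the optimizer picks the outcome maximizing the
# aggregate sum. We then ask how good that outcome is *for us*.
import random

random.seed(0)
OUTCOMES = range(10)  # ten possible world-configurations

def agg_optimum(utilities):
    """Outcome maximizing the summed utility of the given agents."""
    return max(OUTCOMES, key=lambda o: sum(u[o] for u in utilities))

# "Us": a small correlated, human-like group; utilities agree up to noise.
base = [random.random() for _ in OUTCOMES]
humans = [[b + random.gauss(0, 0.1) for b in base] for _ in range(10)]

# Unknown future agents: preferences drawn independently of ours.
strangers = [[random.random() for _ in OUTCOMES] for _ in range(10_000)]

def our_avg(o):
    return sum(h[o] for h in humans) / len(humans)

print(f"our value at the humans-only optimum: {our_avg(agg_optimum(humans)):.3f}")
print(f"our value with strangers included:    {our_avg(agg_optimum(humans + strangers)):.3f}")
```

The larger the uncorrelated crowd relative to the correlated group, the more the summed utilities of the strangers swamp ours, and the closer the chosen outcome gets to a random draw from our point of view.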
But I also suspect that it doesn’t really require taking into account the preferences of poor third-worlders either. It’s enough if the rich first-worlders assume that their own preferences are universal among humankind, and this seems to be a common opinion among them.
List rich-first-worlder values and say that you’re maximizing those, and you can get applauded by rich first-worlders. But make the explicit list of people whose values get included in the aggregation consist of only rich first-worlders, and it won’t go over well.
But if the rich first-worlders whose support you really need care about future people as well as about poor third-worlders (I mean in the political matters you need to satisfy to get things to actually happen, not in the utility functions that you aggregate), then they’ll insist that you take those groups’ preferences into account as well.
You may be right about what would be needed to get support for an AI project: that you’ll need to explicitly take into account contemporary people who can’t themselves support or undermine the project, but not future people. But that’s not automatic; you’d really have to do surveys or something to establish it.
Yes, it is not automatic that the class of people whose utility functions have to be included for political reasons is exactly the set of currently existing people. But again, including the utility functions of potentially large numbers of beings with values potentially radically different from ours could decrease the value to us of a universe maximized under the aggregation by a significant, maybe catastrophic, amount. So if there does turn out to be substantial resistance to including only currently existing people, I think that position would be worth arguing for rather than giving in on. I also think we live in the convenient world where this won’t be a problem.
All right, I can buy that. Although it may be that a small compromise is possible: taking future people into consideration with a time discount large enough that radically different people won’t muck things up. To be more specific (hence also more hypothetical), current people may insist on taking into account their children and grandchildren but not worry so much about what comes after. (Again, talking about what has to be explicitly included for political reasons, separate from what gets included via the actual utility functions of included people.) This is probably getting too hairsplitting to worry about any further. (^_^)
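To put that compromise in concrete terms (the discount factor below is purely illustrative, not a proposed value): give generation g weight δ^g in the aggregation, so children and grandchildren carry real weight while everyone further out contributes only a small geometric tail.

```python
# Sketch of the discounted compromise; delta is an illustrative assumption.
delta = 0.3  # hypothetical per-generation discount factor

# Per-person weight of each of the first few generations.
for g in range(5):
    print(f"generation {g}: weight {delta ** g:.4f}")

# Combined weight of *all* generations past the grandchildren (g >= 3)
# is the geometric tail delta**3 / (1 - delta).
print(f"everyone past g=2 combined: {delta ** 3 / (1 - delta):.4f}")
```

With a discount that steep, the near generations that current people insist on still count for something, while the whole radically different far future, taken together, weighs only a few percent of a present person.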