Cool, someone is finally doing the “metaethical policing” that I talked about in this post. :)
you might end up violating obligations re: the sorts of resources, rights, welfare, and so on you need to give to agents you create, even conditional on them being happy-to-exist overall;
Concern about this must be motivating you to want to give power/resources to AIs-with-different-values (as you talked about in the last post), but I’m having trouble understanding your apparent degree of concern. My intuition is that (conditional on them being happy-to-exist overall) it’s pretty unlikely to be a big moral catastrophe if we did have such obligations and temporarily violated them until we definitively solved moral philosophy, compared to the scenario where we shouldn’t give power/resources to such AIs but did anyway, causing astronomical waste. Maybe my credence that we have such obligations in the first place is also lower than yours.
Could you talk more about this concern? Anything to explain why we might have such obligations? Why is it a big deal to even temporarily violate them? Can we not “make it up to them” later, through some sort of compensation/restitution, if it does turn out to be kind of a big deal?
(I do wish we could have a Long Reflection to thoroughly hash out these issues before creating such agents.)
To expand on my own view, by creating an agent and making them happy to exist overall, we’ve already helped them relative to not creating them in the first place. There are still countless potential agents who do not even exist at all in our world and who would want to exist. Why would we have an obligation to further help the former set of agents (by giving them more resources/rights/welfare), and not the latter (by bringing them into existence)? That would seem rather unfair to the latter.
But if we did have an obligation to help the latter, where does that obligation stop? We could obviously spend an unlimited amount of resources bringing additional agents into existence and giving them things, and there’s no obvious stopping point, nor an obvious way to split resources between giving existing agents more things and bringing new agents into existence. Whatever stopping point and split we decide on could turn out to be a bad mistake. Given all this, I don’t think we can be blamed too much if we say “we’re pretty confused about what our values and/or obligations are; let’s conserve our resources and keep our options open until we’re not so confused anymore.”