Why can’t it weight actions based on what we as a society want/like/approve/consent/condone?
Human society would not do a good job being directly in charge of a naive omnipotent genie. Insert your own nightmare-scenario examples here; there are plenty to choose from.
But that doesn’t describe humanity being directly in charge. It only describes a small amount of influence for each person, and while groups would have leverage, that doesn’t mean a majority rejecting, say, homosexuality gets to say what LGB people can and can’t do or be.
What I’m describing isn’t really a utility function; it’s more like a policy, or a policy function. Its policy would be volatile, or at least more volatile than the common LW understanding of a set-in-stone utility function.
What would be in charge of changing the policy?
The metautility function I described.
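To make the utility-function-versus-policy distinction concrete, here is a minimal sketch in Python (every name, state, and number below is invented purely for illustration; nothing in it comes from the thread): a set-in-stone utility function is a fixed mapping from outcomes to values, a policy is a revisable mapping from states to actions, and a meta-utility function is what scores candidate policies and so decides how the policy changes.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Toy sketch only: all names, states, and scores here are hypothetical.

State = str
Action = str

def fixed_utility(outcome: str) -> float:
    """A set-in-stone utility function: the mapping itself never changes."""
    return {"good": 1.0, "bad": -1.0}.get(outcome, 0.0)

@dataclass
class MutablePolicy:
    """A policy: state -> action. Unlike a utility function, it can be revised."""
    table: Dict[State, Action] = field(default_factory=dict)

    def act(self, state: State) -> Action:
        return self.table.get(state, "defer")

def meta_utility(policy: MutablePolicy,
                 approval: Dict[State, Dict[Action, float]]) -> float:
    """Scores a whole policy, e.g. by current societal approval of its choices."""
    return sum(approval.get(s, {}).get(a, 0.0) for s, a in policy.table.items())

def revise(policy: MutablePolicy, state: State, candidates: List[Action],
           approval: Dict[State, Dict[Action, float]]) -> MutablePolicy:
    """The meta-utility function is what is 'in charge of changing the policy':
    keep whichever candidate action for this state scores best overall."""
    best = policy
    for action in candidates:
        trial = MutablePolicy({**policy.table, state: action})
        if meta_utility(trial, approval) > meta_utility(best, approval):
            best = trial
    return best

if __name__ == "__main__":
    approval = {"dispute": {"mediate": 0.8, "ignore": -0.2}}  # shifts as society shifts
    policy = MutablePolicy({"dispute": "ignore"})
    policy = revise(policy, "dispute", ["mediate", "ignore"], approval)
    print(policy.act("dispute"))  # -> "mediate"
```

On this reading, the volatility mentioned above is just the approval table drifting over time, with the revision step tracking it.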
What is a society’s intent? What should a society’s goals be, and how should they relate to the goals of its constituents?
I think it does mean precisely that, if the majority feels strongly enough about it. For a quick example, s/homosexuality/pedophilia/.
Good point. I think I was reluctant to use pedophilia as an example because I’m trying to defend this argument, and claiming it could allow pedophilia is not usually convincing. RAT − 1 for me.
I’ll concede that point. But my questions aren’t rhetorical, I think. There is no objective morality, and EY seems to be trying to get around that. Concessions must be made.
I’m thinking that the closest thing we could have to CEV is a social contract based on Rawls’ veil of ignorance, adjusted with a live runoff of supply and demand: the fewer people who want slavery, the more likely it becomes that someone who wants slavery ends up a slave themselves, so prospective slaveowners would be less likely to approve of slavery on the grounds that they do not want to be slaves, while the people who actually want to become slaves get what they want. (By no means is this a rigorous definition or claim.) Put that in a post-scarcity economy, with sharding of some sort, as in CelestAI’s sharding, where the parts of society that contribute negative utility to an individual are effectively invisible to that individual; there was an argument on LW that CEV would be impossible without some element of separation along those lines.
The fewer people who want aristocracy, the more likely that someone who wants aristocracy would become a noble, so prospective nobles would be more likely to approve of aristocracy on the grounds that they themselves want to be nobles?
The fewer people who want aristocracy, the more likely that someone who wants aristocracy would become a peon, so prospective nobles would be less likely to approve of aristocracy on the grounds that they themselves do not want to be peons.
I have to work this out. You have a good point.
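As a closing illustration of the “live runoff” idea a few comments up, here is a toy numerical sketch (the linear assignment rule and the payoff numbers are assumptions made up for this example, not anything specified in the thread): the lower the overall support for an institution, the more likely a supporter is to land in its disfavored role, so a would-be beneficiary’s expected payoff falls as support falls.

```python
# Toy sketch of the runoff idea: all numbers and the linear rule are assumptions.

def p_disfavored_role(support: float) -> float:
    """Probability that a supporter of the institution ends up in its disfavored
    role (e.g. slave, peon), assumed to fall linearly as support rises.
    `support` is the fraction of society that wants the institution, in [0, 1]."""
    return 1.0 - support

def expected_value_for_supporter(support: float,
                                 favored_payoff: float = 1.0,
                                 disfavored_payoff: float = -5.0) -> float:
    """Expected payoff for someone who wants the institution, given that they
    might be assigned either role under the runoff."""
    p = p_disfavored_role(support)
    return (1.0 - p) * favored_payoff + p * disfavored_payoff

if __name__ == "__main__":
    for support in (0.9, 0.5, 0.1):
        ev = expected_value_for_supporter(support)
        print(f"support={support:.1f}  expected value for a supporter={ev:+.2f}")
    # Output: +0.40, -2.00, -4.40 -- as support falls, backing the institution
    # becomes a worse and worse bet for the people who want it, which is the
    # deterrent the runoff idea is gesturing at.
```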