Human values may not be consistent, but this is a separate failure mode.
How is an AGI supposed to optimize for values that aren’t consistent?
Much of the time, this statement could be taken at face value.
Does that mean that the AGI should start doing genetic manipulation that prevents people from being gay? Is that what the person who made the claim means?
How is an AGI supposed to optimize for values that aren’t consistent?
I am not saying this is a trivial problem, but it is a separate problem from the ‘hidden complexity of wishes’ problem.
Does that mean that the AGI should start doing genetic manipulation that prevents people from being gay?
Well, if the CEV of the anti-gay, pro-genetic-manipulation people outweighs the CEV of the pro-gay, anti-genetic-manipulation people, then I suppose it would. I’m not sure whether your question means genetic manipulation with or without consent (also, if a gay person wants to be straight, some would say that should be banned, so consent cuts both ways), so you also have to take into account the CEV on the issue of consent. It’s also true that a superintelligence might be able to talk someone into consenting to almost anything.
Yes, a CEV FAI would forcibly alter people’s sexualities if the aggregated preferences in favour of that were strong enough. A democratic system will be a tyranny of the majority if the majority are tyrants.
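For concreteness, here is a toy sketch of what ‘aggregated preferences strong enough’ could mean under one naive scheme (my own illustration; the actual CEV proposal isn’t specified at this level of detail, and the numbers are made up): each person contributes a signed preference weight, and the option is enacted if the weights sum to a positive total. A mild majority can then outweigh an intense minority, which is the tyranny-of-the-majority worry in miniature.

```python
# Toy illustration (not the actual CEV proposal): treat each person's
# stance as a signed weight and enact the option if the sum is positive.

def aggregate(preferences):
    """Sum signed preference strengths; a positive total means 'enact'."""
    return sum(preferences)

# Hypothetical numbers: 60 people mildly in favour (+5 each),
# 40 people strongly opposed (-7 each).
population = [+5] * 60 + [-7] * 40

total = aggregate(population)  # 300 - 280 = 20
print("enact" if total > 0 else "reject", total)  # prints: enact 20
```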
Is that what the person who made the claim means?
I dunno, since I’ve only heard one sentence from this hypothetical person. But I would imagine that this sort of person would probably think that genetic manipulation is playing god, and moreover that superintelligent AI is playing god. Their strongest wish might be for the AI to turn itself off.
EDIT: How to react to the ‘God hates fags’ people also depends upon whether being anti-gay is a terminal value for these people, or whether it is predicated upon the existence of God. I’m assuming the FAI would not believe in God, but then again some people might have faith as a terminal value, so… it’s complicated.
so you also have to take into account the CEV on the issue of consent. It’s also true that a superintelligence might be able to talk someone into consenting to almost anything.
Consent is a concept that gets complicated easily. Is it wrong to burn coal when the asthmatics who die because of it aren’t consenting? Are the asthmatics in the US consenting by virtue of electing a government that allows coal to be burned?
If an AGI thinks in a very complicated way, it might not be able to meaningfully get consent for anything, because it can’t explain its reasoning to humans.
If an AGI thinks in a very complicated way, it might not be able to meaningfully get consent for anything, because it can’t explain its reasoning to humans.
Is that necessary for consent? I mean, one does not have to understand the rationale for undergoing a medical procedure in order to consent to it. It’s more important to know the potential risks.
How is an AGI supposed to optimize for values that aren’t consistent?
In the same way it’s supposed to deal with real live people.