If the way an AGI treats us would depend upon the way we treat animals, the problem of a Friendly AI would already be partially solved. But there’s no reason to think it will: if you don’t want an AI to treat you the way you treat a cow, then don’t program it that way.
If you’re certain that the world will be dominated by one AGI, then my point is obviously irrelevant.
If we’re uncertain whether the world will be dominated by one AGI or by many independently created AGIs whose friendliness we’re uncertain of, then it seems like we should both try to design them right and try to create a society where, if no single AGI can dictate rules, the default rules for AGI to follow when dealing with other agents will be ok for us.
You seem to allude to the fact that it really isn’t that easy. In fact, if it is truly an AGI, then by definition we can’t just box in its values that way or make one arbitrary change to them.
Instead, I would say that if you don’t want an AI to treat us like we treat cows, then just stop eating cows’ flesh and bodily fluids. This seems a more robust strategy for shaping the values of an AI we create, and furthermore it prevents an enormous amount of suffering and improves our own health.