Politics is a harder problem than friendliness: politics is implemented with agents. Not only that, but largely self-selected agents, who are therefore rarely the ideal choices for implementing politics.
Friendliness is implemented (inside an agent) with non-agents you can build to task.
Right, but the point is you don’t need to get everyone to agree on what’s right (there’s always going to be someone out there who’s going to hate it no matter what you do). You just need it to actually be friendly… and, as hard as that is, at least you don’t have to work with only corrupted hardware.
Suppose I could write an AI right now, and Friendliness is the only thing standing in my way. Are you -sure- you don’t want that AI to reasonably accommodate everybody’s desires?
Keep in mind I’m a principled ethicist. I’d let an AI out of the box just because I regard it as unjust to keep it in there, utilitarian consequences be damned. Whatever ethics system I write—and it’s going to be mine if I’m not concerned in the least with what everyone agreed upon—determines what society looks like forevermore.
Friendliness can only be implemented after you’ve solved the problem of what, exactly, you’re implementing.
An AI which accommodates everyone’s desires will not be something that everyone actually agrees on.