How could an AI be compassionate? Perhaps an AI could be empathetic if it could perceive, through its sensors, the desires (or empirical goals, or reflective goals) of other agents and internalize them as its own.
In other words, it tries to maximize human values. Isn’t this the standard way of programming a Friendly AI?
I don’t think it makes sense to speak about a standard way of programming a Friendly AI.
“Designing” would probably be a better word. And there is no single standard idea for how you could make an AI friendly.