I’ve seen two definitions.
1: AI that is carefully designed not to do bad things. This usage shows up in sentences like “don’t build this AI, it’s not Friendly.”
2: AI that does good things and not bad things because it wants the same things we want. This usage shows up in sentences like “Friendly AI will have to learn human values.”