Why isn’t incremental progress at instilling human-like behavior into machines also incremental progress on AGI alignment?
It kind of is, but unfortunately, treating others badly when you hold a lot of power is also part of human nature. And there’s no real limit to how bad that can get; see the Belgian Congo, for example.