Humans won’t figure out how to make systems with goals that are compatible with human welfare and realizing human values
This is an interesting risk, but in my opinion an overinflated one. Goals, absent any motivations, desires, or feelings of the system's own, are simply means to an end. I don't see why we couldn't program objectives into our systems that are compatible with human values.