But if you believed that setting fire to everything around you was good, and you changed your values after being shown that burning ecosystems causes harm, would that really be “changing your values”?
A lot of values update based on information, so perhaps one could realign such an AI with the right information.
It’s not changing my values, it’s changing my beliefs?