I’m actually more interested in corrigibility than values alignment, so I don’t think that AI should be solving moral dilemmas every time it takes an action. I think values should be worked out in the post-ASI period, by humans in a democratic political system.