I talk about something related in Self and No-Self; the outward-flowing 'attempt to control' and the inward-flowing 'attempt to perceive' are simultaneously in conflict (something being still makes it easier to see where it is, but also makes it harder to move it to where it should be) and mutually reinforcing (being able to tell where something is makes it easier to move it precisely where it needs to be).
Similarly, you can make an argument that control without understanding is impossible, and so getting AI systems to do what we want is one task instead of two. I think I agree that the "two progress bars" frame is incorrect, but I think the typical AGI developer at a lab is not grappling with the philosophical problems behind alignment difficulty; they are trying to make something that 'works at all' rather than 'works understandably' in the sort of way that would actually produce the understanding that enables control.