I’m guessing that what you are getting at is the process of building through learning and practice, which I assume is quite uncontroversial. I think the main argument against its applicability to AI alignment specifically is that you (and everyone else) die before you get to learn and practice enough, unlike in baseball or in physics. If TAI emergence is a slow and incremental process, then you have a point. Eliezer argues that it is not.
I agree there probably isn’t enough time. In the best-case scenario, there’s enough time for weak alignment tools (small apples).