Are there any good examples of useful or interesting sub-problems in AI Alignment that can actually be considered “solved”?
I don’t think so.
Of course the same is true for machine learning, though it’s less surprising there. I think subproblems getting solved is something you’d only really expect on a perspective like mine, where you’re looking for a cleaner / more discrete notion of “solution.” On that perspective, maybe you’d count the special case “AIs are weak relative to humans, individual decisions are low-stakes” as solved? (Though even then, not quite.)