Technical AI safety and/or alignment advances are intrinsically safe and helpful to humanity, irrespective of the state of humanity.
I think this statement is weakly true, insofar as almost no misuse by humans could possibly be worse than what a completely out-of-control ASI would do. Technical safety is a necessary but not sufficient condition for beneficial AI, and the gap between the two is large. Most scenarios with controllable AI still end with humanity nearly extinct, in my view, with only a few people lording their AI over everyone else. Preventing that is not a merely technical challenge.