I think we agree that pushing oneself is very fraught. And we agree that one is at least fairly unlikely to push the boundaries of knowledge about AI alignment without “a lot” of effort. (Though maybe I believe this a bit less than you? I don’t think it’s been adequately tested to take brilliant minds from very distinct disciplines and have them think seriously about alignment. How many psychologists, how many top-notch philosophers, how many cognitive scientists, how many animal behaviorists have seriously thought about alignment? Might there be relatively low-hanging fruit from the perspective of those bodies of knowledge?)
What I’m saying here is that career boundaries are things to be minimized, and the referenced post seemed to be career-boundary-maxing. One doesn’t know what would happen if one made even a small hobby of AI alignment; maybe it would turn out to be fun, interesting, and productive, and grow into a large hobby. Even if the way one is going to contribute is not by solving the technical problem, understanding the technical problem still helps quite a lot with other ways of contributing. So in any case, cutting off that exploration because one is the wrong type of guy is stupid, and advocating for doing that is stupid.