You know, you can contribute to alignment without working directly on alignment. Focus on the places where you’re shocked everyone else is dropping the ball. “Hey, wait, why so little emphasis on aligning the humans who make AI? Wouldn’t getting people to just slow the hell down and stop racing toward oblivion be helpful?” is one example, and it would use an entirely different skillset (PR, social skills, etc.). In my own case, I’m mainly interested in designing a system enabling mass human coordination and factored cognition, though I’m terrible at actually writing up the mountain of ideas in my head. Such a system would indirectly speed up alignment by helping researchers think clearly, and it would be valuable in many other ways besides. Think outside the “work on AI alignment directly and nothing else” box, and find something your skillset lets you work on.