Curated.
I think increasing the surface area of useful ways to contribute to AI Alignment is quite important. Often, people bounce off a problem if they don’t see how they can help, figuring it’s someone else’s problem. Making it easier for people in other fields to understand how they can help seems valuable.
I think this post would probably be improved by going into more detail on each point (perhaps linking to other posts that explain some of the context). I’m not sure whether that belongs in this post or in follow-up ones. But if more philosophers try to get involved, it might be worth further improving their “onboarding experience.”