If you’re not saying to go into AI safety research, what non-business-as-usual course of action are you expecting? Is your premise that everyone taking this seriously should figure out their comparative advantage within an AI risk organization, since such organizations contain many non-researcher roles, or are you imagining some potential course of action outside of “Give your time/money to MIRI/CHAI/etc”?
Is your premise that everyone taking this seriously should figure out their comparative advantage within an AI risk organization because they contain many non-researcher roles
Yes, basically. One of the specific possibilities I alluded to was taking on managerial or entrepreneurial roles, here:
So people like me can’t just hand complicated assignments off and trust they’ll get done competently. Someone might understand the theory but not grasp the political nuances they need to do something useful with it. Or they grasp the political nuances, and maybe understand the theory at the time, but aren’t keeping up with the evolving technical landscape.
The thesis of the post is intended to be ‘donating to MIRI/CHAI etc. is not the most useful thing you can be doing.’