But I admit that I am clueless as to how that should be done. It’s just that it makes “set aside three years of your life to invest in AI safety research” ring pretty desperate and suboptimal to me.
I think this sentence actually contains my own answer, basically. I didn’t say “invest three years of your life in AI safety research.” (I realize looking back that I didn’t clearly *not* say that, so this misunderstanding is on me and I’ll consider rewriting that section). What I meant to say was:
Get three years of runway (note: this does not mean you’re quitting your job for three years, it means that you have 3 years of runway so you can quit your job for 1 or 2 years before starting to feel antsy about not having enough money)
Quit your job or arrange your life such that you have time to think clearly
Figure out what’s going on (this involves keeping up on industry trends and understanding them well enough to know what they mean, keeping up on AI safety community discourse, and following the relevant bits of politics in government, corporations, etc.)
Figure out what to do (including what skills you need to gain in order to be able to do it)
Do it
i.e., the first step is to become not clueless. And then step 2 depends a lot on your existing skillset. I specifically am not saying to go into AI safety research (although I realize it may have looked that way). I’m asserting that some minimum threshold of technical literacy is necessary to make serious contributions in any domain.
Do you want to persuade powerful people to help? You’ll need to know what you’re talking about.
Do you want to direct funding to the right places? You need to understand what’s going on well enough to know what needs funding.
Do you want to just be a cog in an organization, where you mostly work like a normal person but are helping move progress forward? You’ll need to know enough about what’s going on to pick an organization where you’ll be a marginally beneficial cog.
The question isn’t “what is the optimal thing for AI risk people collectively to do?” It’s “what is the optimal thing for you in particular to do, given that the AI risk community exists?” In the past 10 years, the AI risk community has gone from a few online discussion groups to a collection of orgs with millions of dollars in current funding; funders with millions or billions more; and, as of this week, Henry Kissinger endorsing AI risk as important.
In that context, figuring out “what the best marginal contribution you personally can make to one of the most important problems humanity will face” is a difficult question.
The thesis of this post is that taking that question seriously requires a lot of time to think, and that because money is less of a bottleneck now, you are more useful on the margin as a person who has carved out enough time to think seriously than as an Earning-to-Give person.
If you’re not saying to go into AI safety research, what non-business-as-usual course of action are you expecting? Is your premise that everyone taking this seriously should figure out their comparative advantage within an AI risk organization because they contain many non-researcher roles, or are you imagining some potential course of action outside of “Give your time/money to MIRI/CHAI/etc”?
Is your premise that everyone taking this seriously should figure out their comparative advantage within an AI risk organization because they contain many non-researcher roles
Yes, basically. One of the specific possibilities I alluded to was taking on managerial or entrepreneurial roles, here:
So people like me can’t just hand complicated assignments off and trust they get done competently. Someone might understand the theory but not get the political nuances they need to do something useful with the theory. Or they get the political nuances, and maybe get the theory at-the-time, but aren’t keeping up with the evolving technical landscape.
The thesis of the post is intended to be “donating to MIRI/CHAI etc. is not the most useful thing you can be doing.”