Along these lines, I’ve been thinking that maybe our best chance is to find ways to directly support the major AI labs most likely to create advanced AI, to help guide their decisions toward better outcomes.
Like perhaps some well-chosen representatives from the EA AI safety community could do free, regular safety consulting with DeepMind, OpenAI, etc. Find some way to be a resource they consider useful while also helping them keep safety top-of-mind. If free isn’t good enough, with the current state of funding in EA, we could even pay the companies just to meet regularly with the AI safety consultants.
If this were realized, the consultants would of course have to sign NDAs, so what’s going on couldn’t be openly discussed on forums like LessWrong. (I suppose this kind of arrangement may already be happening and we just aren’t aware of it for that reason.)
Update: Chris’s suggestion in the reply to this comment, that EA funders simply offer the labs money to hire more safety researchers, seems simpler and more workable than the consultant model above.
This is a rough idea—a lot more thought needs to go into exactly what to do and how to do it. But something like this could be extremely impactful. A handful of people at one of these AI companies could well be soon determining the fate of humanity with their engineering decisions. If we could positively influence them in some way, that may be our best hope.
Yeah, I wonder if we could offer these companies funding to take on more AI safety researchers? Even if they’re well-resourced, management probably still wants to look financially responsible.
DeepMind and OpenAI both already employ teams of existential-risk-focused AI safety researchers. While I don’t personally work on any of these teams, I get the impression from speaking to them that they are much more talent-constrained than resource-constrained.
I’m not sure how to alleviate this problem in the short term. My best guess would be free bootcamp-style training for value-aligned people who are promising researchers but lack specific relevant skills. For example, ML engineering training or formal mathematics education for junior AIS researchers who would plausibly be competitive hires if that part of their background were strengthened.
However, I don’t think that offering AI safety researchers as “free consultants” to these organizations would have much impact. I doubt the organizations would accept since they already have relevant internal teams, and AI safety researchers can presumably have greater impact working within the organization than as external consultants.
My best guess would be free bootcamp-style training for value-aligned people who are promising researchers but lack specific relevant skills. For example, ML engineering training or formal mathematics education for junior AIS researchers who would plausibly be competitive hires if that part of their background were strengthened.
The low-effort version of this would be, instead of spinning up your own bootcamp, having value-aligned people apply for a grant to the Long-Term Future Fund to participate in a bootcamp.