I think direct outreach to the AI labs is a great idea, if it’s coordinated and well thought out. Trying to get them to stop building AGI seems unlikely to help IMO, though I’m not totally against it.
I’d be more interested to see targeted efforts aimed at improving safety outcomes at the major AI labs. Things like:
1. Getting more value-aligned people in the AIS community onto the safety teams of DeepMind and OpenAI
2. EA funders offering those labs money to increase headcount on their safety teams
3. Other efforts to help tilt the culture of those labs more toward safety, or to convince their leadership to prioritize safety more highly
(My background assumptions here are short-ish timelines for AGI/TAI, and high confidence that it will originate from one of a handful of these AI labs.)
This is something I’ve been thinking about a lot recently. But as another commenter said, it’s probably better to see what the AI governance folks are up to, since this is essentially what they do.
(I learned today that “AI governance” isn’t just about what governments should do but also strategy around AI labs, etc.)
Getting more value-aligned people in the AIS community onto the safety teams of DeepMind and OpenAI
Why is this important? As far as I can tell, the safety teams of these two organisations are already almost entirely “value-aligned people in the AIS community”. They need more influence within the organisation, sure, but that’s not going to be solved by altering team composition.
rachelAF mentioned that she had the impression their safety teams were more talent-constrained than funding-constrained. So I inferred that getting more value-aligned people onto those teams wouldn’t just alter the team composition, but would also increase the size of those safety teams.
We probably need more evidence that those teams do still have open headcount though. I know DeepMind’s does right now, but I’m not sure whether that’s just a temporary opening.
You make a good point though. If the safety teams have little influence within those orgs, then #3 may be a lot more impactful than #1.
As far as I can tell, the safety teams of these two organisations are already almost entirely “value-aligned people in the AIS community”
Interesting, how do you know this? Is there information about these teams available somewhere?