What do you mean by “select hard for value alignment”? Chinese culture is very different from that of the US, and EA is almost unheard of. You can influence things by hiring, but expecting very tight conformance with EA culture is… unlikely to work. Are people interested in AI capabilities research currently barred from being hired by alignment orgs? I am very curious what the situation on the ground is.
Also, there are various local legal issues when it comes to advanced research. Sharing genomics data with foreign orgs is pretty illegal, for example. There’s also the problem that it may not be possible to keep research closed: all companies above a certain size are required to hire a certain number of Party members to act as informants.
So what has stopped more alignment orgs from being founded in China? Is the bottleneck local coordination, interest, vetting, or funding? I’d very much be interested in participating in any new projects.
Value alignment here means being focused on improving humanity’s long-term future by reducing existential risk, not other specific cultural markers (identifying as EA or rationalist, for example, is not necessary). Having people working towards the same goal seems vital for organizational cohesion, and I think alignment orgs would rightly not hire people who are not focused on trying to solve alignment. Upskilling people who are happy to do capabilities jobs without pushing hard internally for capabilities orgs to become more safety-focused seems net negative.
I think it’s important for AI safety initiatives to screen for participants who are very likely to go into AI safety research because:
AI safety initiatives eat up valuable free energy in the form of AI safety researchers, engineers, and support staff who could benefit other initiatives;
Longtermist funding is ~30% depleted post-FTX, and therefore the quality and commitment of participants funded by longtermist money are more important now;
Some programs like MLAB might counterfactually improve a participant’s ability to get hired as an AI capabilities researcher, which might mean the program contributes insufficiently to the field of alignment relative to accelerating capabilities.
These concerns might be addressed by:
Requiring all participants in MLAB-style programs to engage with AGISF first;
Selecting ML talent for research programs (like MATS is trying) rather than building ML talent with engineer upskilling programs;
Encouraging participants to seek non-longtermist funding and mentorship for their projects, e.g., by supporting academic research projects that draw on mainstream (non-AI-safety) ML mentorship and funding for AI safety-relevant work;
Interviewing applicants to assess their motivations;
Offering ~30% less money (and slightly less prestige) than tech internships to filter out people who will leave safety research and work on capabilities after the program.