Glad to see this write-up & excited for more posts.
I think these are three areas that MATS feels it has handled fairly well. I'd be especially excited to hear more about areas where MATS thinks it's struggling, is uncertain, or feels it has a lot of room to grow. Potential candidates include:
How is MATS going about talent selection and advertising for the next cohort, especially given the recent wave of interest in AI/AI safety?
How does MATS intend to foster (or recruit) the kinds of qualities that strong researchers often possess?
How does MATS define “good” alignment research?
Other things I'd be curious about:
Which work from previous MATS scholars is the MATS team most excited about? What are MATS’s biggest wins? Which individuals or research outputs is MATS most proud of?
Most people's timelines have shortened a lot since MATS was established. Does this substantially reduce the value of MATS (relative to worlds with longer timelines)?
Does MATS plan to try to attract senior researchers who are becoming interested in AI Safety (e.g., professors, people with 10+ years of experience in industry)? Or will MATS continue to recruit primarily from the (largely younger and less experienced) EA/LW communities?
We broadened our advertising approach for the Summer 2023 Cohort, including a Twitter post and a shout-out on Rob Miles’ YouTube and TikTok channels. We expected some lowering of average applicant quality as a result but have yet to see a massive influx of applicants from these sources. We additionally focused more on targeted advertising to AI safety student groups, given their recent growth. We will publish updated applicant statistics after our applications close.
In addition to applicant selection and curriculum elements, our Scholar Support staff, introduced in the Winter 2022-23 Cohort, supplement the mentorship experience by providing 1-1 research strategy and unblocking support for scholars. This program feature aims to:
Supplement and augment mentorship with 1-1 debugging, planning, and unblocking;
Allow air-gapping of evaluation and support, improving scholar outcomes by resolving issues they would not take to their mentor;
Solve scholars' problems, freeing up more time for research.
Defining “good alignment research” is very complicated and merits a post of its own (or two, if you also include the theories of change that MATS endorses). We are currently developing scholar research ability through curriculum elements focused on breadth, depth, and epistemology (the “T-model of research”):
Breadth-first search (literature reviews, building a “toolbox” of knowledge, noticing gaps);
Depth-first search (forming testable hypotheses, project-specific skills, executing research, recursing appropriately, using checkpoints);
Epistemology (identifying threat models, backchaining to local search, applying builder/breaker methodology, babble and prune, “infinite-compute/time” style problem decompositions, etc.).
Our Alumni Spotlight includes an incomplete list of projects we highlight. Many more past scholar projects seem promising to us but have yet to meet our criteria for inclusion here. Watch this space.
Since Summer 2022, MATS has explicitly been trying to parallelize the field of AI safety as much as is prudent, given the available mentorship and scholarly talent. In longer-timeline worlds, more careful serial research seems prudent, as growing the field rapidly carries risks for the reasons outlined in the above article. We believe that MATS' goals have become more important as timelines have shortened (though MATS management has not updated much on timelines, since they already seemed fairly short in our estimation).
MATS would love to support senior research talent interested in transitioning into AI safety! Postdocs generally make up about 10% of our scholars, and we would like this number to rise. Currently, our advertising strategy relies on the broader AI safety community adequately reaching these populations (which does not seem to be the case), and it might change for future cohorts.