Currently, MATS somewhat supports rolling admissions for a minority of mentors via our Autumn and Spring cohorts (which are generally extensions of our Summer and Winter cohorts, respectively). Given that MATS mainly focuses on optimizing the cohort experience for scholars (because we think starting a research project in an academic cohort of peers with similar experience, plus targeted seminars and workshops, is ideal), we probably offer a worse experience for scholars or mentors who would ideally start research projects at irregular intervals. Some scholars might not benefit as much from the academic cohort experience as others. Some mentors might ideally commit to mentorship at times of the year outside MATS’ primary Winter/Summer cohorts. Also, MATS’ seminar program doesn’t necessarily run year-round, and we don’t offer as much logistical support to scholars outside of Winter/Summer. There is definitely free energy here for a complementary program, I think.
I am also scared about ML upskilling bootcamps that act as feeder grounds for AI capabilities organizations. I think vetting (including perhaps an AGISF prerequisite) is key, as is a clear understanding of where the participants will go next. I only recommend this kind of project because hundreds of people seemingly complete AGISF and want to upskill to work on AI alignment but have scant opportunities to do so. Also, MATS’ theory of change includes adding value by accelerating the development of (rare) “research leads” to increase the “carrying capacity” of the alignment research ecosystem (which, in theory, is not principally bottlenecked by “research supporter” talent, because training or buying such talent scales much more easily than training or buying “research lead” talent). I will publish my reasoning for the latter point as soon as I have time.
Probably ask Sam Bowman. At a minimum, it might consist of office space for longtermist organizations (similar to Lightcone or Constellation in Berkeley), some operations staff to keep the office running, and some AI safety outreach to NYU and other strong universities nearby, like Columbia. I think some people might already be working on this?
Quick note on 2: CBAI is pretty concerned about our winter ML bootcamp attracting bad-faith applicants and plans to use a combination of AGISF and references to filter pretty aggressively for alignment interest. Somewhat problematic in the medium term if people find out they can get free ML upskilling by successfully feigning interest in alignment, though...