I agree with you that AI is generally seen as “the big thing” now, and we are very unlikely to be counterfactual in encouraging AI hype. This was a large factor in our recent decision to advertise the Summer 2023 Cohort via a Twitter post and a shout-out on Rob Miles’ YouTube and TikTok channels.
However, because we provide a relatively simple opportunity to gain access to mentorship from scientists at scaling labs, we believe that our program might seem attractive to aspiring AI researchers who are not fundamentally directed toward reducing x-risk. We believe that accepting such individuals as scholars is bad because:
We might counterfactually accelerate their ability to contribute to AI capabilities;
They might displace an x-risk-motivated scholar.
Therefore, while we intend to expand our advertising approach to capture more out-of-network applicants, we do not currently plan to relax our selection pressure for x-risk-motivated scholars.
Another crux here is that I believe the field is in a nascent stage where new funders and the public might be swayed by fundamentally bad “AI safety” projects that make AI systems more commercialisable without reducing x-risk. Empowering founders of such projects is not a goal of MATS. After the field has grown a bit larger while maintaining its focus on reducing x-risk, there will hopefully be less “free energy” for naive AI safety projects, and we can afford to be less choosy with scholars.