Hi Arthur,
Thanks for the descriptions — it is interesting for me to hear about your experiences, and I imagine a number of others found it interesting too.
A couple clarifications from my perspective:
First: AIRCS is co-run by both CFAR and MIRI, and is not entirely a MIRI recruiting program, although it is partly that! (You might know this part already, but it seems like useful context.)
We are hoping that different people go on from AIRCS to a number of different AI safety career paths. For example:
Some people head straight from AIRCS to MIRI.
Some people attend AIRCS workshops multiple times, spaced across months or a small number of years, while they gradually get familiar with AI safety and related fields.
Some people realize after an AIRCS workshop that AI safety is not a good fit for them.
Some people, after attending one or perhaps many AIRCS workshops, go on to do AI safety research at an organization that isn’t MIRI.
All of these are good and intended outcomes from our perspective! AI safety could use more good technical researchers, and AIRCS is a long-term investment toward increasing the number of good computer scientists (and mathematicians and others) who have some background in the field. (Although it is also partly aimed at helping with MIRI’s recruiting in particular.)
Separately, I did not mean to “give people a rule” to “not speak about AI safety to people who do not express interest.” I mean, neither I nor AIRCS as an organization has an official request that people follow a “rule” of that sort. I do personally usually follow a rule of that sort, though (with exceptions). Also, when people ask me for advice about whether to try to “spread the word” about AI risk, I often share that I personally am a bit cautious about when and how I talk with people about AI risk; and I often share details about that.
I do try to have real conversations with people, conversations that respond to their curiosity and/or to their arguments, questions, etc., without worrying too much about which directions such conversations will update them toward.
I try to do this about AI safety, as about other topics. And when I do this about AI safety (or other difficult topics), I try to help people have enough “space” that they can process things bit-by-bit if they want. I think it is often easier and healthier to take in a difficult topic at one’s own pace. But all of this is tricky, and I would not claim to know the one true answer to how everyone should talk about AI safety.
Also, I appreciate hearing about the bits you found distressing; thank you. Your comments make sense to me. I wonder if we’ll be able to find a better format in time. We keep getting bits of feedback and making small adjustments, but it is a slow process. Job applications are perhaps always a bit awkward, but iterating on “how do we make it less awkward” does seem to yield slow-but-of-some-value modifications over time.
You are welcome.
I won’t repeat what I answered to Buck’s comment; some parts of that answer are certainly relevant here, in particular regarding the above-mentioned “rule”.
While I did not write about “space” above, I hope my point was clear: you and all the staff were making sure we were able to process things safely. While I would not have been able to state your goals explicitly, I was trying to emphasize that you did care about those questions.
AIRCS [...] is not entirely a MIRI recruiting program
I believe I explicitly stated that a lot of people are there for reasons other than being recruited by MIRI, and that your goal is also for people to work on AI safety at other places. I’m actually surprised that you felt you had to clarify this; I fear I was not as clear as I hoped to be.
There is one last thing I should have added to the post. I wrote it for myself; more precisely, I wrote what I would have wanted to read before applying to MIRI/attending AIRCS. I also wrote what I expect other applicants might eventually find useful. It seems safe to assume that some of those applicants read Less Wrong and so might look for similar posts and find this one. Currently, all comments have been made by LW/MIRI/CFAR staff, which means that this is not (yet) a success. Anyway, you were not the intended audience, even if I assumed that, somehow, some AIRCS staff would hear about this post. I didn’t try to write anything that you would appreciate; after all, you know AIRCS better than I do. And it’s entirely possible that you find some critiques unfair. I’m happy to read that you did appreciate, and found interesting, some parts of what I wrote.
Note that most of the things I wrote here about AIRCS were already present in my feedback form. There were some details I omitted in the form (I don’t need to tell you that there are a lot of vegan options), and some details I omit here (in particular the ones mentioning people by name). So there was already a text in which you were the intended audience.