This was all really interesting, thanks for writing it and for being so open with your thinking, I think it’s really valuable. Lots of this hiring process sounds very healthy—for instance, I’m glad to hear they pay you well for the hours you spend doing work trial projects.
As far as I understand it, plenty of people panic when they really understand what the risks from AI are. So Anna Salamon gave us a rule: we don’t speak of AI safety to people who do not express the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words “AI Safety”, but not to give any details until the other person is sure they want to hear about them. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their life, so it’s all too possible that they can’t really “be ready”...
As you can see, the staff really cared about us and wanted to be sure that we would be able to manage the thoughts related to AI risk.
Yeah, Anna talked in a bunch of detail about her thinking on this in this comment on the recent CFAR AMA, in case you’re interested in seeing more examples of ways people can get confused when thinking about AI risk.
Every time a company decides not to hire me, I would love to know why, if only to avoid making the same mistakes again. MIRI here is an exception: I can see so many reasons not to hire me that the outcome was unsurprising. The process, and the fact that they considered me in the first place, was.
My take here is a bit different from yours. I think it’s best to collaborate with interesting people who have unique ways of thinking, and it’s more important to focus on that than “I will only hire people who agree with me”. When I’m thinking about new people I’ve met and whether to hang out with them more / work with them more, I rarely am thinking about whether or not they also often use slogans like “x-risk” and “AI safety”, but primarily how they think and whether I’d get value out of working with them / hanging out with them.
The process you describe at CFAR sounds like a way to focus on finding interesting people: withholding judgement for as long as possible about whether you should work together, while giving you and them lots of space and time to build a connection. This lets you talk through the ideas that are interesting to both of you, and generally understand each other better than a 2-hour interview or coding project offers.
Ed Kmett seems like a central example here; my (perhaps mistaken) belief is that he’s primarily doing a lot of non-MIRI work he finds valuable, and is inspired by that more than the other kinds of research MIRI is doing, but he and other researchers at MIRI find ways to collaborate, and when they do he does really cool stuff. I expect there was a period where he engaged with the ideas around AI alignment in a lot of detail, and has opinions about them, and of course at some point that was important to whether he wanted to work at MIRI, but I expect Nate and others would be very excited about him being around, regardless of whether he thought their project was terribly important, given his broader expertise and ways of thinking about functional programming. I think it’s great to spend time with interesting people finding out more about their interests and how they think, and that this stuff is more valuable than taking a group of people where the main thing you know is that they passed the coding part of the interview, and primarily spending time persuading them that your research is important.
Even given that, I’m sorry you had a stressful/awkward time trying to pretend this was casual and not directly important for you for financial and employment reasons. It’s a dynamic I’ve experienced not infrequently within Bay Area rationalist gatherings (and EA gatherings globally): being at large social events and trying to ignore how much they can affect e.g. your hiring prospects (I don’t think MIRI is different in this regard from spaces involving other orgs like 80k, OpenPhil, etc.). I’ll add that, as above, I do indeed think that MIRI staff had not themselves made a judgment at the time of inviting you to the workshop. Also, I’ll mention that my sense is it was affecting you somewhat more than it does the median person at such events, who I think mostly has an overall strongly positive experience and an okay time ignoring this part of it. Still, I think it’s a widespread effect, and I’m honestly not sure what to do about it.
You’re welcome. I’m happy to read that people find it interesting.
Regarding Ed, your explanation seems different from what I recall hearing him say. But I’ll follow my rule: I won’t repeat what a person said unless it was given in a public talk. I’ll let him know we mentioned him, and if he wants, he can answer himself.
I’d just state that, honestly, I would have loved to collaborate with Ed; he had really interesting discussions. However, he already has an assistant, and I have not heard anything making me believe he would need a second one.
Congratulations on ending my long-time LW lurker status and prompting me to comment for once. =)
I think Ben’s comment hits pretty close to the state of affairs. I have been internalizing MIRI’s goals and looking for obstacles in the surrounding research space that I can knock down to make their (our?) work go more smoothly, either in the form of subgoals or by backwards-chaining from required capabilities to get a sense of how to proceed.
Why do I work around the edges? Mostly because if I take the direction I’m trying to push the world in and the direction MIRI is trying to push the world in, and project one onto the other, working around the edges currently seems to be the approach that maximizes the dot product. Some of what I want to do doesn’t seem to work behind closed doors, because I need a lot more outside feedback, or involves projects that just need to bake more before being turned in a more product-like direction.
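(To make the dot-product analogy above concrete, here is a toy sketch in Haskell; the axes and the numbers are invented purely for illustration and don’t correspond to anything anyone actually wrote down:)

```haskell
-- Toy illustration of the dot-product analogy: represent each party's
-- "direction of push" as a vector of weights over a few shared axes,
-- and measure how well two directions line up by their dot product.
dot :: [Double] -> [Double] -> Double
dot xs ys = sum (zipWith (*) xs ys)

-- Hypothetical weights on made-up axes (e.g. tooling, theory, outreach).
myDirection, miriDirection :: [Double]
myDirection   = [0.8, 0.5, 0.1]
miriDirection = [0.6, 0.7, 0.2]

main :: IO ()
main = print (dot myDirection miriDirection)  -- 0.85: the overlap to maximize
```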
Sometimes I lend a hand behind the curtain where one of their projects comes close to something I have experience to offer; other times it’s offering a reframing of the problem, or libraries or connections to folks that they can build atop.
I can say that I’ve very much enjoyed working with MIRI over the last 16 months or so, and I think the relationship has been mutually beneficial. Nate has given me a lot of latitude in choosing what to work on and how I can contribute, and I have to admit he couldn’t have come up with a more effective way to leash me to his cause had he tried.
I’ve attended so many of these AIRCS workshops in part to better understand how to talk about AI safety. (I think I’ve been to something like 8–10 of them so far?) For better or worse, I have a reputation in the outside functional programming community, and I’d hate to give folks the wrong impression of MIRI simply by dint of my having done insufficient homework, so I’ve been using this as a way to sharpen my arguments and gather a more nuanced understanding of which ways of talking about AI safety, rationality, 80,000 Hours, EA, x-risks, etc. work for my kind of audience, and which seem to fall short, especially when talking to the kind of hardcore computer scientists and mathematicians I tend to talk to.
Arthur, I greatly enjoyed interacting with you at the workshop (who knew that an expert on logic and automata theory was exactly what I needed at that moment!?), and I’m sorry that the MIRI recruitment attempt didn’t move forward.
Cool, all seems good (and happy to be corrected) :-)