(I’m unsure whether I should write this comment referring to the author of this post in second or third person; I think I’m going to go with third person, though it feels a bit awkward. Arthur reviewed this comment before I posted it.)
Here are a couple of clarifications about things in this post, which might be relevant for people who are using it to learn about the MIRI recruiting process. Note that I’m the MIRI recruiter Arthur describes working with.
General comments:
I think Arthur is a really smart, good programmer. Arthur doesn’t have as much background with AI safety stuff as many people who I consider as candidates for MIRI work, but it seemed worth spending effort on bringing Arthur to AIRCS etc because it would be really cool if it worked out.
Arthur reports a variety of people in this post as saying things that I think are somewhat misinterpreted, and I disagree with several of the things he describes them as saying.
I still don’t understand this: what’s the point of inviting me if the test fails? It would seem more cost-efficient to wait until after the test to decide whether they wanted me to come or not (I don’t think I ever asked this out loud; I was already happy to get a free trip to California).
I thought it was very likely Arthur would do well on the two-day project (he did).
I do not wish to disclose how much I was paid, but I’ll state that two hours at that rate came to more than a day at the French PhD rate. I didn’t even ask to be paid; I hadn’t even thought that being paid for a job interview was possible.
It’s considered good practice to pay people to do work for trials; we paid Arthur a rate which is lower than you’d pay a Bay Area software engineer as a contractor, and I was getting Arthur to do somewhat unusually difficult (though unusually interesting) work.
I assume that if EA cares about animal suffering in itself, then using throwaways is less of a direct suffering factor.
Yep
So Anna Salamon gave us a rule: we don’t speak of AI safety to people who have not expressed the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words “AI safety”, but not to give any details until the other person is sure they want to hear about them. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their lives, so it’s entirely possible they can’t really “be ready”.
I think this is a substantial misunderstanding of what Anna said. I don’t think she was trying to propose a rule that people should follow, and she definitely wasn’t explaining a rule of the AIRCS workshop or something; I think she was doing something a lot more like talking about something she thought about how people should relate to AI risk. I might come back and edit this comment later to say more.
That means that, during circles, I was asked to be as honest as possible about my feelings while also being considered for an internship. This is extremely awkward.
For the record, I think that “being asked to be as honest as possible” is a pretty bad description of what circling is, though I’m sad that it came across this way to Arthur (I’ve already talked to him about this)
But just because they do not think of AIRCS as a job interview does not mean AIRCS is not a job interview. Case in point: half a week after the workshop, the recruiter told me that “After discussing some more, we decided that we don’t want to move forward with you right now”. So the workshop really was what led them to decide not to hire me.
For the record, the workshop indeed made the difference about whether we wanted to make Arthur an offer right then. I think this is totally reasonable—Arthur is a smart guy, but not that involved with the AI safety community; my best guess before the AIRCS workshop was that he wouldn’t be a good fit at MIRI immediately because of his insufficient background in AI safety, and then at the AIRCS workshop I felt like it turned out that this guess was right and the gamble hadn’t paid off (though I told Arthur, truthfully, that I hoped he’d keep in contact).
During a trip to the beach, I finally had the courage to tell the recruiter that AIRCS is quite hard for me to navigate when it’s both a CFAR workshop and a job interview.
:( This is indeed awkward and I wish I knew how to do it better. My main strategy is to be as upfront and accurate with people as I can; AFAICT, my level of transparency with applicants is quite unusual. This often isn’t sufficient to make everything okay.
First: they could tell people coming to AIRCS as part of a future job interview that some things will be awkward for them, but that they attend the same workshop as everyone else, so they’ll have to deal with it.
I think I do mention this (and am somewhat surprised that it was a surprise for Arthur)
Furthermore, I do understand why it’s generally a bad idea to tell people you barely know, while they are in your building, that they won’t get the job.
I wasn’t worried about Arthur destroying the AIRCS venue; I needed to confer with my coworkers before making a decision.
I do not believe that my first piece of advice will be followed. On the last night, near the fire, the recruiter was talking with some other MIRI staff and participants, and at some point they mentioned MIRI’s recruiting process. I think they were saying that they loved recruiting because it leads them to work with extremely interesting people, but that such people are hard to find. Given that my goal was explicitly to be recruited, and that I didn’t have any answer yet, this was extremely awkward for me. I can’t state explicitly why; after all, I didn’t have to add anything to their remark. But even if I can’t explain why I think so, I still firmly believe that it’s the kind of thing a recruiter should avoid saying near a potential hire.
I don’t quite understand what Arthur’s complaint is here, though I agree that it’s awkward having people be at events with people who are considering hiring them.
MIRI is an exception here. I can see so many reasons not to hire me that the outcome was unsurprising. What surprised me was the process, and that they considered me in the first place.
Arthur is really smart and it seemed worth getting him more involved in all this stuff.
Hi,
Thank you for your long and detailed answer. I’m amazed that you were able to write it so quickly after the post’s publication, especially since you sent me your answer by email while I had just published my post on LW without showing it to anyone first.
Arthur reports a variety of people in this post as saying things that I think are somewhat misinterpreted, and I disagree with several of the things he describes them as saying.
I added a link to this comment at the top of the post. Honestly, I am not surprised to learn that I misunderstood some things that were said during the workshop. Those were 5 pretty intense days, and there was no way for me to remember everything perfectly. However, I won’t correct the post; it is a text explaining as honestly as possible how I felt about the event. Those kinds of misunderstandings are part of the event too. I really hope that people reading this kind of post understand that it’s a personal text and that they should form their own view. Given that it’s a LW blog post and not a newspaper/research article, I feel that’s okay.
It’s considered good practice to pay people to do work for trials; we paid Arthur a rate which is lower than you’d pay a Bay Area software engineer as a contractor, and I was getting Arthur to do somewhat unusually difficult (though unusually interesting) work.
I can confirm that it was interesting.
I guess I do not know what counts as good practice in California. I spent hundreds of euros on job interviews in France, when I had to pay for the train/plane/hotel to go meet a potential employer, so I kind of assumed that looking for a job is an expensive task.
I think this is a substantial misunderstanding of what Anna said. I don’t think she was trying to propose a rule that people should follow, and she definitely wasn’t explaining a rule of the AIRCS workshop or something; I think she was doing something a lot more like talking about something she thought about how people should relate to AI risk. I might come back and edit this comment later to say more.
I mostly understood it as a general rule, not as an AIRCS rule. This rule seems similar to the rule “do not show pictures of slaughterhouses to people who didn’t decide by themselves to find out what slaughterhouses are like”. On the one hand, it can be argued that if people knew how badly animals were treated, things would get better for the animals. It remains that, even if you believe that, showing slaughterhouse pictures to random people who were not prepared would be an extremely cruel thing to do to them.
AFAICT, my level of transparency with applicants is quite unusual. This often isn’t sufficient to make everything okay.
Would it be a LW post if I didn’t mention a single bias? I wonder whether there is an illusion of transparency here. There is some information you write here that would have been helpful to have beforehand, and that I don’t recall hearing. For example: “my best guess before the AIRCS workshop was that he wouldn’t be a good fit at MIRI immediately because of his insufficient background in AI safety”. On the one hand, it could be expected that I would understand I would not be a good fit, given that I don’t have an AI safety background; that would make sense at most companies, actually. On the other hand, the way I perceive MIRI is that you’re quite unusual, so I could assume that you are mainly looking for devs wanting to work with rationalists, and that it would be okay if those people needed some time to teach themselves everything they need to learn.
Given that both hypotheses are possible, I see how it can seem more transparent to you than it actually was to me. However, I must admit that on my side I was not totally transparent either, since I didn’t ask you to clarify immediately. More generally, the point I want to make here is that my goal is not to blame you, nor MIRI, nor AIRCS, nor myself. I would hate for this post or comment to be read as me wanting to complain. When I wrote the post, I thought about what I would have wanted to read before going to AIRCS, and tried to write it. While I do have some negative remarks, I hope that overall it comes across as a positive post. I stated it before and I repeat it: I did appreciate coming to AIRCS.
First: they could tell people coming to AIRCS as part of a future job interview that some things will be awkward for them, but that they attend the same workshop as everyone else, so they’ll have to deal with it.
I think I do mention this (and am somewhat surprised that it was a surprise for Arthur)
I may have forgotten it, then. I don’t claim my memory is perfect. It’s entirely possible that I did not take this warning seriously enough. If at some point someone reads this post before going to AIRCS, I hope it will help them take this into account. Though I don’t expect that what was important for me will be equally important for them, so maybe it’ll be useless in the end.
I don’t quite understand what Arthur’s complaint is here, though I agree that it’s awkward having people be at events with people who are considering hiring them.
I honestly can’t state exactly what felt wrong. This is actually a paragraph I spent a lot of time on, because I couldn’t find an exact answer. I finally decided to state what I felt, without being able to explain the reason behind it. Which, by the way, seems a lot like what I understood about circling, the way it was presented to my group on the first day.
Arthur is really smart and it seemed worth getting him more involved in all this stuff.
Thank you.
This rule seems similar to the rule “do not show pictures of slaughterhouses to people who didn’t decide by themselves to find out what slaughterhouses are like”. On the one hand, it can be argued that if people knew how badly animals were treated, things would get better for the animals. It remains that, even if you believe that, showing slaughterhouse pictures to random people who were not prepared would be an extremely cruel thing to do to them.
Huh. That’s a surprisingly interesting analogy. I will think more on it. Thx.