Thank you for your long and detailed answer. I’m amazed that you were
able to write it so quickly after the post’s publication, especially
since you sent me your answer by email, even though I had just
published my post on LW without showing it to anyone first.
Arthur reports a variety of people in this post as saying things
that I think are somewhat misinterpreted, and I disagree with
several of the things he describes them as saying.
I added a link to this comment at the top of the post. Honestly, I am
not surprised to learn that I misunderstood some things that were said
during the workshop. Those were five pretty intense days, and there
was no way for me to remember everything perfectly. However, I won’t
correct the post: it is a text explaining, as honestly as possible,
how I felt about the event, and those kinds of misunderstandings are
part of the event too. I really hope that people reading this kind of
post understand that it’s a personal account and that they should form
their own view. Given that it’s a LW blog post and not a
newspaper/research article, I feel that’s okay.
It’s considered good practice to pay people to do work for trials;
we paid Arthur a rate which is lower than you’d pay a Bay Area
software engineer as a contractor, and I was getting Arthur to do
somewhat unusually difficult (though unusually interesting) work.
I can confirm that it was interesting.
I guess I do not know what counts as good practice in California. I
spent hundreds of euros on job interviews in France, paying for the
train/plane/hotel to go meet potential employers, so I had come to
assume that looking for a job is simply an expensive task.
I think this is a substantial misunderstanding of what Anna said. I
don’t think she was trying to propose a rule that people should
follow, and she definitely wasn’t explaining a rule of the AIRCS
workshop or something; I think she was doing something a lot more
like talking about something she thought about how people should
relate to AI risk. I might come back and edit this comment later to
say more.
I mostly understood it as a general rule, not as an AIRCS-specific
one. It seems similar to the rule “do not show pictures of
slaughterhouses to people who didn’t decide by themselves to find out
what slaughterhouses are like”. On the one hand, it can be argued that
if people knew how badly animals were treated, things would get better
for the animals. Still, even if you believe that, showing
slaughterhouse pictures to random people who were not prepared for
them would be an extremely cruel thing to do.
AFAICT, my level of transparency with applicants is quite
unusual. This often isn’t sufficient to make everything okay.
Would it be a LW post if I didn’t mention a single bias? I wonder
whether there is an illusion of transparency here. There is some
information you give here that would have been helpful to have
beforehand, and that I don’t recall hearing. For example: “my best
guess before the AIRCS workshop was that he wouldn’t be a good fit at
MIRI immediately because of his insufficient background in AI
safety”. On the one hand, I could have been expected to understand
that I would not be a good fit, given that I don’t have an AI safety
background; that would make sense at most companies. On the other
hand, the way I perceive MIRI is that you’re quite unusual, so I could
assume that you were mainly looking for devs who want to work with
rationalists, and that it would be okay if those people needed some
time to teach themselves everything they need to learn.
Given that both hypotheses were possible, I see how it could have
seemed more transparent to you than it actually was for me. However, I
must admit that, on my side, I was not totally transparent either,
since I didn’t ask you to clarify immediately. More generally, the
point I want to make here is that my goal is not to blame you, MIRI,
AIRCS, or myself. I would hate for this post or comment to be read as
me wanting to complain. When I wrote the post, I thought about what I
would have wanted to read before going to AIRCS, and tried to write
it. While I do have some negative remarks, I hope the post comes
across as positive overall. I said it before and I repeat it: I did
appreciate coming to AIRCS.
First: they could mention to people coming to AIRCS for a future job
interview that some things will be awkward for them, but that they get
the same workshop as everyone else, so they’ll have to deal with it.
I think I do mention this (and am somewhat surprised that it was a
surprise for Arthur)
I may have forgotten it, then. I don’t claim my memory is perfect, and
it’s entirely possible that I did not take this warning seriously
enough. If at some point someone reads this post before going to
AIRCS, I hope it will help them take the warning into account. That
said, I do not think that what was important for me will necessarily
be important for them, so maybe it will be useless in the end.
I don’t quite understand what Arthur’s complaint is here, though I
agree that it’s awkward having people be at events with people who
are considering hiring them.
I honestly can’t state exactly what felt wrong. This is actually a
paragraph I spent a lot of time on, because I couldn’t find an exact
answer. I finally decided to state what I felt without being able to
explain the reason behind it. Which, by the way, seems a lot like what
I understood about circling, the way it was presented to my group on
the first day.
Arthur is really smart and it seemed worth getting him more involved
in all this stuff.
This rule seems similar to the rule “do not show pictures of slaughterhouses to people who didn’t decide by themselves to find out what slaughterhouses are like”. On the one hand, it can be argued that if people knew how badly animals were treated, things would get better for the animals. Still, even if you believe that, showing slaughterhouse pictures to random people who were not prepared for them would be an extremely cruel thing to do.
Huh. That’s a surprisingly interesting analogy. I will think more on it. Thx.