What are we supposed to infer from that? That adding an amateur scientist to a group of PhDs would substantially decrease their chance of making a breakthrough?
No, certainly not. I just don’t see much evidence that Eliezer is presently adding value to Friendly AI research. I think he could be doing more to reduce existential risk if he were operating under different assumptions.
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you’re generalizing from one example here.
Of course you could be right here, but the situation is symmetric: the same could be the case for you, Stuart Armstrong, and Gary Drescher. Keep in mind that there’s a strong selection effect here—if you’re spending time with Eliezer, you’re disproportionately likely to be well suited to working with Eliezer, and people who have difficulty working with Eliezer are disproportionately unlikely to be posting on Less Wrong or meeting with Eliezer.
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it’s, namely that FAI research should just be done by individuals in their free time for now?
Quite possibly it’s a good thing for Eliezer and his supporters to be building an organization to do FAI research. On the other hand, maybe cousin_it’s position is right. I have a fair amount of uncertainty on this point.
The claim that I’m making is quite narrow: that it would be good for the cause of existential risk reduction if Eliezer seriously considered the possibility that he’s greatly overestimated his chances of building a Friendly AI.
I’m not saying that it’s a bad thing to have an organization like SIAI. I’m not saying that Eliezer doesn’t have a valuable role to serve within SIAI. I’m reminded of Robin Hanson’s “Against Disclaimers,” though I don’t feel comfortable with his condescending tone and am not thinking of you in that light :-).
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
This topic seems important enough that you should try to figure out why your intuition says that. I’d be interested in hearing more details about why you think a lot of good potential FAI researchers would not feel comfortable working with Eliezer. And in what ways do you think he could improve his disposition?