So, the situation is somewhat different than the one that you describe. Some points of clarification.
•I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend whom I respect a great deal. My reaction to the first postings that I read by Eliezer was strong discomfort with his apparent grandiosity and self-absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
•I started reading Less Wrong in earnest in the beginning of 2010. This made it clear to me that Eliezer has a lot to offer and that it was unfortunate that I had been pushed away by my initial reaction.
•I never assigned a very high probability to Eliezer making a crucial contribution to an FAI research project. My thinking was that the enormous positive outcome associated with success might be sufficiently great to justify the project despite the small probability (a rough expected-value sketch follows this list).
•I didn’t get much of a chance to correspond privately with Eliezer. He responded to a couple of my messages with one-line dismissive responses and then stopped responding to my subsequent messages. Naturally this lowered the probability that I assigned to being able to collaborate with him. This also lowered my confidence in his ability to attract collaborators in general.
•If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
•I feel that the world is very complicated and that randomness plays a very large role. This leads me to assign a very small probability to the proposition that any given individual will play a crucial role in eliminating existential risk.
•I freely acknowledge that I may be influenced by emotional factors. I make an honest effort at being level-headed and sober, but as I mention elsewhere, my experience posting on Less Wrong has been emotionally draining. I find that I become substantially less rational when people assume that my motives are impure (a sort of self-fulfilling prophecy).
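(The rough expected-value sketch promised in the third bullet, with illustrative numbers that are mine alone and not to be taken seriously: if the probability of a crucial contribution is 10^(-9), and success would avert an existential catastrophe affecting on the order of 10^(10) lives or more, then the expected benefit is on the order of 10^(-9) × 10^(10) = 10 lives, before even counting future generations. The point is only that a sufficiently enormous payoff can in principle offset a tiny probability, not that these particular numbers are right.)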
You may notice that of my last four posts, the first pair was considerably more impartial than the second pair. (This is reflected in the fact that the first pair was upvoted more than the second pair.) My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend whom I respect a great deal. My reaction to the first postings that I read by Eliezer was strong discomfort with his apparent grandiosity and self-absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
(1) First impressions really do matter: even though you and I are probably very similar in many respects, we have different opinions of Eliezer simply because the first posts of his that I read made him sound more like a yoga instructor than a cult leader; whereas perhaps the first thing you read was some post where his high estimation of his abilities relative to the rest of humanity was made explicit, and you didn’t have the experience of his other writings to allow you to “forgive” him for this social transgression.
(2) We have different personalities, which cause us to interpret people’s words differently: you and I read more or less the same kind of material first, but you just interpreted it as “grandiose” whereas I didn’t.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
Right, so the first posts that I came across were Eliezer’s Coming of Age posts, which I think are unrepresentatively self-absorbed. So I think that the right interpretation is the first one you suggest.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
Since I made my top level posts, I’ve been corresponding with Carl Shulman who informed me of some good things that SIAI has been doing that have altered my perception of the institution. I think that SIAI may be worthy of funding.
Regardless of the merits of SIAI’s research and activities, I think that in general it’s valuable to promote norms of transparency and accountability. I would certainly be willing to fund SIAI if it were strongly recommended by a highly credible external charity evaluator like GiveWell. Note also a comment which I wrote in response to Jasen.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
At this point I worry that I’ve alienated the SIAI people to such an extent that they might not be happy to have me. But I’d certainly be willing if they’re favorably disposed toward me.
I’ll remark that back in December, after reading Anna Salamon’s posting on the SIAI Visiting Fellows program, I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns, without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
Done.
I’ll remark that back in December, after reading Anna Salamon’s posting on the SIAI Visiting Fellows program, I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns, without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
As it happens, the same thing happened to me; it turned out that my initial message had been caught in a spam filter. I eventually ended up visiting for two weeks, and highly recommend the experience.
If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?), given other evidence besides your private exchange with Eliezer? Such as:
•Eliezer already had a close collaborator, namely Marcello
•SIAI has successfully attracted many visiting fellows
•SIAI has successfully attracted top academics to speak at their Singularity Summit
•Eliezer is currently writing a book on rationality, so presumably he isn’t actively trying to recruit collaborators at the moment
•Other people’s reports of not finding Eliezer particularly difficult to work with
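(To spell out the arithmetic behind the 1/1000 figure above, as a rough sketch rather than anything you are committed to: reading your 10^(-9) as P(crucial contribution) and your 10^(-6) as P(crucial contribution | attracts and works well with collaborators), and ignoring the possibility of a crucial contribution without collaborators, the decomposition P(crucial contribution) ≈ P(crucial contribution | collaborators) × P(collaborators) gives P(collaborators) ≈ 10^(-9) / 10^(-6) = 10^(-3), i.e. 1 in 1000.)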
It seems to me that rationally updating on Eliezer’s private comments couldn’t have resulted in such a low probability. So I think a more likely explanation is that you were offended by the implications of Eliezer’s dismissive attitude towards your comments.
(Although, given Eliezer’s situation, it would probably be a good idea for him to make a greater effort to avoid offending potential supporters, even if he doesn’t consider them to be viable future collaborators.)
My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
Your responses to me seem pretty level-headed and sober. I hope that means you don’t find my comments too hostile.
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?)
Thinking it over, my estimate of 10^(-6) was way too high. This isn’t because of a lack of faith in Eliezer’s abilities in particular. I would return to my remark above that I think that everybody has a very small probability of succeeding in efforts to eliminate existential risk. We’re part of a complicated chaotic dynamical system, and to a large degree our cumulative impact on the world is unintelligible and unexpected (because of a complicated network of unintended consequences, side effects, side effects of the side effects, etc.).
Your responses to me seem pretty level-headed and sober. I hope that means you don’t find my comments too hostile.
Glad to hear it :-)