Who’s being evangelical about it?
Maybe the word “evangelical” isn’t strictly correct. (A quick Google search suggests that I had cached the phrase from [this discussion][d1].) I’d like to point out an example of an incident that leaves a bad taste in my mouth.
> (Before anyone asks, yes, we’re polyamorous – I am in long-term relationships with three women, all of whom are involved with more than one guy. Apologies in advance to any 19th-century old fogies who are offended by our more advanced culture. Also before anyone asks: One of those is my primary who I’ve been with for 7+ years, and the other two did know my real-life identity before reading HPMOR, but HPMOR played a role in their deciding that I was interesting enough to date.)
This comment was made by Eliezer under the name of this community in the author’s notes to one of LessWrong’s largest recruiting tools. I remember when I first read this, I kind of flipped out. Professor Quirrell wouldn’t have written this, I thought. It was needlessly antagonistic, it squandered a bunch of positive affect, there was little to be gained from this digression, it was blatant signaling—it was so obviously the wrong thing to do, and yet it was published anyway.
A few months before that was written, I had cut a fairly substantial cheque to the Singularity Institute. I want to purchase AI risk reduction, not fund a phyg. Blocks of text like the above do not make me feel comfortable that I am doing the former and not the latter. I am not alone here.
Back when I only lurked here and saw the first PUA fights, I was in favor of the PUA discussion ban: if LessWrong wants to be a movement that either raises the sanity waterline or maximizes the probability of solving the Friendly AI problem, it needs to be as inclusive as possible and have as few ugh fields as possible that immediately drive away new members. I now think an outright ban would do more harm than good, but the ugh field remains and is counterproductive.
When you decide to fund research, what are your requirements for researchers’ personal lives? Is the problem that his sex life is unusual, or that he talks about it?
My biggest problem is more that he talks about it, sometimes in semiofficial channels. This doesn’t mean that I wouldn’t be squicked out if I learned about it, but I wouldn’t see it as a political problem for the SIAI.
The SIAI isn’t some random research think tank: it presents itself as the charity with the highest utility per marginal dollar. Likewise, Eliezer Yudkowsky isn’t some random anonymous researcher: he is the public face of the SIAI. His actions and public behavior reflect on the SIAI whether or not it’s fair, and everyone involved should have already had that as a strongly held prior.
If people ignore LessWrong or don’t donate to the SIAI because they’re filtered out by squickish feelings, then that means fewer resources for the SIAI’s mission in return for inconsequential short-term gains realized mostly by SIAI insiders. Compound this with the fact that talking about the singularity already triggers some people’s absurdity bias; there need to be as few other filters as possible, so that the SIAI has the most usable resources for maximizing the chance of positive singularity outcomes.
It seems there are two problems: you trust SIAI less, and you worry that others will trust it less. I understand the reason for the second worry, but not the first. Is it that you worry your investment will become worth less because others won’t want to fund SIAI?
That talk was very strong evidence that SI is incompetent at PR and, furthermore, irrational (edit: or doesn’t actually hold its stated goals and beliefs). If you believe the donations are important for saving your life (along with everyone else’s), then you naturally try to avoid making such statements. Though I do, in some way, admire straight-up, in-your-face honesty.
My feelings on the topic are similar to iceman’s, though possibly for slightly different reasons.
What bothers me is not the fact that Eliezer’s sex life is “unusual”, or that he talks about it, but that he talks about it in his capacity as the chief figurehead and PR representative for his organization. This signals a certain lack of focus, stemming from an inability to keep one’s personal and professional lives separate.
Unless the precise number and configuration of Eliezer’s significant others is directly applicable to AI risk reduction, there’s simply no need to discuss it in his official capacity. It’s unprofessional and distracting.
(In the interests of full disclosure, I should mention that I am not planning on donating to SIAI any time soon, so my points above are more or less academic.)
On the other hand, while I’m also worried about other people’s reactions to that comment, my own reaction was positive, which suggests there might be other people with positive reactions to it.
I think I like having a community leader who doesn’t come across as though everything he says is carefully tailored to not offend people who might be useful; and occasionally offending such people is one way to signal being such a leader.
I also worry that Eliezer having to filter comments like this would make writing less fun for him; and if that made him write less, it might be worse than offending people.
[d1]: http://lesswrong.com/lw/9kf/ive_had_it_with_those_dark_rumours_about_our/5raj
I can only give you one upvote, so please take my comment as a second.