What would that change, in the grand scheme of things or in your personal life?
The very fate of the universe, potentially.
I suppose I could draw from that the inference that you have a rather inflated notion of the importance of what multi is doing here, … but, in the immortal words of Richard Milhous Nixon, “That would be wrong.”
More seriously, I think everyone here realizes that EY has some rough edges, as well as some intellectual strengths. For his own self-improvement, he ought to be working on those rough edges. I suspect he is. However, in the meantime, it would be best if his responsibilities were in areas where his strengths are exploited and his rough edges don’t really matter. So, just what are his current responsibilities?
1. Convincing people that UFAI constitutes a serious existential risk while not giving the whole field of futurism and existential risk reduction a bad rep.
2. Setting direction for and managing FAI and UFAI-avoidance research at SIAI.
3. Conducting FAI and UFAI-avoidance research.
4. Reviewing and doing conceptual QC on the research work product.
To be honest, I don’t see EY’s “rough edges” as producing any problems at all with his performance on tasks #3 and #4. Only SIAI insiders know whether there has been a problem on task #2. Based on multi’s arguments, I suspect he may not be doing so well on #1. So, to me, the indicated response ought to be one of the following:
A. Hire someone articulate (and if possible, even charismatic) to take over task #1 and make whatever minor adjustments are needed regarding task #2.
B. Do nothing. There is no problem!
C. Get some academic papers published so that FAI/anti-UFAI research becomes interesting to the same funding sources that currently support CS, AI, and decision theory research. Then reconstitute SIAI as just one additional research institution which is fighting for that research funding.
I would be interested in what EY thinks of these three possibilities. Perhaps for different reasons, I suspect, so would multi.
[Edited to correct my hallucination of confusing multifoliaterose with wedrifid. As a result of this edit, various comments below may seem confused. Sorry about that, but I judge that making this comment clear is the higher priority.]
Was the first (unedited) ‘you’ intended? If so I’ll note that I was merely answering a question within a counterfactual framework suggested by the context. I haven’t even evaluated what potential importance multi’s post may have—but the prior probability I have for ‘a given post on LW mattering significantly’ is not particularly high.
I like your general analysis, by the way, and am always interested to know what the SIAI guys are doing along the lines of either your 1, 2, 3 or your A, B, C. I would seriously like to see C happen. Being able and willing to make that sort of move would be a huge step forward (and something that would make any hints of 'arrogance' seem trivial).
I think you are right. I’m just playing the disclaimer game. Since this is a political thread there is always the risk of being condemned for supporting various positions. In this case I gave a literal answer to a rhetorical question directed at multi. Following purely social reasoning that would mean that I:
Am challenging cousin_it
Condemning Eliezer
Agreeing with anything and everything said by multi and probably also with everything said by anyone else who agrees with multi.
Almost certainly saying something about the credibility of uFAI risks.
In some way think any of this is particularly important to the universe outside the time/abstract-space bubble that is LessWrong this week.
Of course that comment actually lent credence to Eliezer (hence the humor) and was rather orthogonal to multi’s position with respect to arrogance.
It’s not that I mind too much sticking my neck out risking a social thrashing here or there. It’s just that I have sufficient capability for sticking my neck out for things that I actually do mean and for some reason prefer any potential criticism to be correctly targeted. It says something about many nerds that they value being comprehended more highly than approval.
I think that originally Perplexed didn’t look at your comment carefully and thought that multi had written it.
Close. Actually, I had looked at the first part of the comment and then written my response under the delusion that wedrifid had been the OP.
I am now going to edit my comment to cleanly replace the mistaken "you" with "multi".
Approval based on incomprehension is fragile and unsatisfying.