In your recent posts you’ve been slowly, carefully, thoroughly deconstructing one person. Part of me wants to break into applause at the techniques used, and learn from them, because in my whole life of manipulation I’ve never mounted an attack of such scale. (The paragraph saying “something has gone very wrong” was absolutely epic, to the point of evoking musical cues somewhere at the edge of my hearing. Just like the “greatly misguided” bit in your previous post. Bravo!) But another part of me feels horror and disgust because after traumatic events in my own life I’d resolved to never do this thing again.
It comes down to this: I enjoy LW for now. If Eliezer insists on creating a sealed reality around himself, what’s that to me? You don’t have to slay every dragon you see. Saving one person from megalomania (real or imagined) is way less important than your own research. Imagine the worst possible world: Eliezer turns into a kook. What would that change, in the grand scheme of things or in your personal life? Are there not enough kooks in AI already?
And lastly, a note about saving people. I think many of us here have had the unpleasant experience (to put it mildly) of trying to save someone from suicide. Looking back at such episodes in my own life, I’m sure that everyone involved would’ve been better off if I’d just hit “ignore” at the first sign of trouble. Cut and run: in serious cases it always comes to that, no exceptions. People are very stubborn, both consciously and subconsciously—they stay on their track. They will waste their life (or spend it wisely, it’s a matter of perspective), but if you join the tug-of-war, you’ll waste a big chunk of yours as well.
I upvoted this, but I’m torn about this.
How’s that for other-optimizing?
I saved someone from suicide once. While the experience was certainly quite unpleasant at the time, if I had hit “ignore,” as you suggest, she would have died. I don’t think that I would be better off today if I had let her die, to say nothing of her. The fact that saving people is hard doesn’t mean that you shouldn’t do it!
“Imagine the worst possible world: Eliezer turns into a kook. What would that change, in the grand scheme of things or in your personal life?”
The very fate of the universe, potentially. Purely hypothetically, and for the sake of discussion:
If Eliezer did have the potential to provide a strong positive influence on grand-scale future outcomes, but was crippled by the (still hypothetical) lack of self-doubt, then that is a loss of real value.
A bad ‘Frodo’ can be worse than no Frodo at all. If we were to give the ring to a Frodo who thought he could take on Nazgul in hand-to-hand combat, then we would lose the ring, and so lose the chance to give said ring to someone who could pull it off. Multi (and those for whom he asks such questions) has limited resources (and attention), so it may be worth deliberately investigating potential recipients of trust.
Worse yet than a counterproductive Frodo would be a Frodo whose arrogance pisses off Aragorn, Gandalf, Legolas, Gimli, Merry, Pippin, and even Sam so much that they get disgusted with the whole ‘save the world’ thing and go hang out in the forest flirting with Elven maidens. Further cause to investigate just whose bid for notoriety and influence you wish to support.
I cannot emphasise enough that this is only a reply to the literal question cousin_it asked, and is neither an endorsement nor a denial of any of the above claims as they relate to persons real or imagined. For example, it may have been good if Frodo was arrogant enough to piss off Aragorn. He may have cracked it, taken the ring from Frodo, and given it to Arwen. Arwen was crazy enough to give up the immortality she already had, and so would be as good a candidate as any for being able to ditch a ring without being completely useless for basically all purposes.
Er… I can’t help but notice a certain humor in the idea that it’s terrible if I’m self-deluded about my own importance because that means I might destroy the world.
It’s some sort of mutant version of “just because you’re paranoid doesn’t mean they’re not out to get you”.
Yes, there is a certain humor. But I hope you did read the dot points and followed the reasoning. Among other things, it suggests a potential benefit of criticism such as multi’s, aside from the hypothetical benefit of discrediting you should it have been the case that you were not, in fact, competent.
I suppose I could draw from that the inference that you have a rather inflated notion of the importance of what multi is doing here, … but, in the immortal words of Richard Milhous Nixon, “That would be wrong.”
More seriously, I think everyone here realizes that EY has some rough edges, as well as some intellectual strengths. For his own self-improvement, he ought to be working on those rough edges. I suspect he is. However, in the meantime, it would be best if his responsibilities were in areas where his strengths are exploited and his rough edges don’t really matter. So, just what are his current responsibilities?
1. Convincing people that UFAI constitutes a serious existential risk while not giving the whole field of futurism and existential risk reduction a bad rep.
2. Setting direction for and managing FAI and UFAI-avoidance research at SIAI.
3. Conducting FAI and UFAI-avoidance research.
4. Reviewing and doing conceptual QC on the research work product.
To be honest, I don’t see EY’s “rough edges” as producing any problems at all with his performance on tasks #3 and #4. Only SIAI insiders know whether there has been a problem on task #2. Based on multi’s arguments, I suspect he may not be doing so well on #1. So, to me, the indicated response ought to be one of the following:
A. Hire someone articulate (and if possible, even charismatic) to take over task #1 and make whatever minor adjustments are needed regarding task #2.
B. Do nothing. There is no problem!
C. Get some academic papers published so that FAI/anti-UFAI research becomes interesting to the same funding sources that currently support CS, AI, and decision theory research. Then reconstitute SIAI as just one more research institution competing for that funding.
I would be interested in what EY thinks of these three possibilities. Perhaps for different reasons, I suspect, so would multi.
[Edited to correct my hallucination of confusing multifoliaterose with wedrifid. As a result of this edit, various comments below may seem confused. Sorry about that, but I judge that making this comment clear is the higher priority.]
Was the first (unedited) ‘you’ intended? If so I’ll note that I was merely answering a question within a counterfactual framework suggested by the context. I haven’t even evaluated what potential importance multi’s post may have—but the prior probability I have for ‘a given post on LW mattering significantly’ is not particularly high.
I like your general analysis, by the way, and am always interested to know what the SIAI guys are doing along the lines of either your 1, 2, 3 or your A, B, C. I would seriously like to see C happen. Being able and willing to make that sort of move would be a huge step forward (and something that makes any hints of ‘arrogance’ seem trivial).
I think that originally Perplexed didn’t look at your comment carefully and thought that multi had written it.
Close. Actually, I had looked at the first part of the comment and then written my response under the delusion that wedrifid had been the OP.
I am now going to edit my comment to cleanly replace the mistaken “you” with “multi”.
I think you are right. I’m just playing the disclaimer game. Since this is a political thread, there is always the risk of being condemned for supporting various positions. In this case I gave a literal answer to a rhetorical question directed at multi. Following purely social reasoning, that would mean that I:
Am challenging cousin_it.
Am condemning Eliezer.
Am agreeing with anything and everything said by multi, and probably also with everything said by anyone else who agrees with multi.
Am almost certainly saying something about the credibility of uFAI risks.
In some way think that any of this is particularly important to the universe outside the time/abstract-space bubble that is LessWrong this week.
Of course that comment actually lent credence to Eliezer (hence the humor) and was rather orthogonal to multi’s position with respect to arrogance.
It’s not that I mind too much sticking my neck out and risking a social thrashing here or there. It’s just that I’m quite capable of sticking my neck out for things that I actually do mean, and for some reason I prefer any potential criticism to be correctly targeted. It says something about many nerds that they value being comprehended more highly than approval.
Approval based on incomprehension is fragile and unsatisfying.
Veering wildly off-topic:
“Arwen was crazy enough to give up the immortality she already had”
Come on now. Humans are immortal in Tolkien; they just sit in a different waiting room. (And technically can’t come back until the End of Days™, but who cares about that.)
Alright, then, call it her permanent resident status. If real death is off the table for everyone sapient, she’s still taking as big a risk as any member of the Fellowship proper.
To be sure. I was only pointing out that her “giving up immortality” was not nearly as crazy as the words “giving up immortality” might suggest in other contexts.
What Eliezer said. I was arguing from the assumption that he is wrong about FAI and stuff. If he’s right about the object level, then he’s not deluded in considering himself important.
But if he is wrong about FAI and stuff, then he is still deluded: not specifically about considering himself important (that implication is correct), but about FAI and stuff.
Agreed.
Which, of course, would still leave the second two dot points as answers to your question.
How so? Eliezer’s thesis is “AGI is dangerous and FAI is possible”. If he’s wrong—if AGI poses no danger or FAI is impossible—then what do you need a Frodo for?
Edited the grandparent to disambiguate the context.
(I haven’t discussed that particular thesis of Eliezer’s, nor does doubting that particular belief seem to be a take-home message from multi’s post. The great-grandparent is just a straightforward answer to the paragraph it quotes.)
The previous post was fine, but this one is sloppy, and I don’t think it’s some kind of Machiavellian plot.
“But another part of me feels horror and disgust because after traumatic events in my own life I’d resolved to never do this thing again.”
Because you were on the giving or on the receiving end of it?
“What would that change, in the grand scheme of things or in your personal life? Are there not enough kooks in AI already?”
Agreed; personally I de-converted myself from Orthodox Judaism, but I still find it crazy when people write big scholarly books debunking the Bible; it’s just a useless waste of energy (part of it is academic incentives).
“They will waste their life (or spend it wisely, it’s a matter of perspective), but if you join the tug-of-war, you’ll waste a big chunk of yours as well.”
I haven’t been involved in these situations, but to take a cue from drug addicts (who incidentally have a high suicide rate): most of them do not recover, but maybe 10% do. So most of the time you’ll find frustration, but one time in ten you’d save a life; I’m not sure that’s worthless.