Normally I refrain from commenting about the tone of a comment or post, but the discussion here revolves partly around the public appearance of SIAI, so I’ll say:
this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I’ve read or seen.
This isn’t a comment about the content of your response, which I think has valid points (and which multifoliaterose has at least partly responded to).
this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I’ve read or seen.
It is certainly Eliezer’s responses, and not multi’s challenges, which are the powerful influence here. Multi has effectively given Eliezer a platform from which to advertise the merits of SIAI, as well as to demonstrate that, contrary to suspicions, Eliezer is in fact able to handle situations in accordance with his own high standards of rationality despite the influences of his ego. That is not what I’ve seen recently. He has focused on retaliating against multi at whatever weak points he can find and largely neglected to do what will win. Winning in this case would be demonstrating exactly why people ought to trust him to be able to achieve what he hopes to achieve (by which I mean ‘influence’, not ‘guarantee’, FAI protection of humanity).
I want to see more of this:
With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent
and less of this:
I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like “And this is why if you’re trying to minimize existential risk, you should support a charity that tries to stop tuberculosis” or “And this is where we’re going to assume the worst possible case instead of the expected case and actually act that way”, he’ll just blithely keep going.
I’ll leave ad hominem aside and note that tu quoque isn’t always fallacious. Unfortunately, in this case it is, in fact, important that Eliezer not fall into the trap that he accuses multi of: deploying arguments as mere soldiers.
This sort of conversation just makes me feel tired. I’ve had debates before about my personal psychology and feel like I’ve talked myself out about all of them. They never produced anything positive, and I feel that they were a bad sign for the whole mailing list they appeared on—I would be horrified to see LW go the way of SL4. The war is lost as soon as it starts—there is no winning move. I feel like I’m being held to an absurdly high standard, being judged as though I were trying to be the sort of person that people accuse me of thinking I am, that I’m somehow supposed to produce exactly the right mix of charming modesty while still arguing my way into enough funding for SIAI… it just makes me feel tired, and like I’m being held to a ridiculously high standard, and that it’s impossible to satisfy people because the standard will keep going up, and like I’m being asked to solve PR problems that I never signed up for. I’ll solve your math problems if I can, I’ll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one, or better yet, why don’t you try being perfect and see whether it’s as easy as it sounds while you’re handing out advice?
I have looked, and I have seen under the Sun, that to those who try to defend themselves, more and more attacks will be given. Like, if you try to defend yourself, people sense that as a vulnerability, and they know they can demand even more concessions from you. I tried to avoid that failure mode in my responses, and apparently failed. So let me state it plainly for you. I’ll build a Friendly AI for you if I can. Anything else I can do is a bonus. If I say I can’t do it, asking me again isn’t likely to produce a different answer.
It was very clearly a mistake to have participated in this thread in the first place. It always is. Every single time. Other SIAI supporters who are better at that sort of thing can respond. I have to remember, now, that there are other people who can respond, and that there is no necessity for me to do it. In fact, someone really should have reminded me to shut up, and if it happens again, I hope someone will. I wish I could pull a Roko and just delete all my comments in all these threads, but that would be impolite.
I really appreciate this response. In fact, to mirror Jordan’s pattern I’ll say that this comment has done more to raise my confidence in SIAI than anything else in the recent context.
I’ll solve your math problems if I can, I’ll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one
I’m working on it, to within the limits of my own entrepreneurial ability and the costs of serving my own personal mission. Not that I would allocate such funds to a PR person; I would prefer to allocate them to research of the ‘publish in traditional journals’ kind. If I were in the business of giving advice, I would give the same advice you have no doubt heard 1,000 times: the best thing that you personally could do for PR isn’t to talk about SIAI but to get peer-reviewed papers published. Even though academia is far from perfect, riddled with biases, and perhaps inclined to a certain resistance to your impingement, it is still important.
or better yet, why don’t you try being perfect and see whether it’s as easy as it sounds while you’re handing out advice?
Now, now, I think the ‘give us the cash’ helps you out rather a lot more than me being perfect would. Mind you, my following tsuyoku naritai does rather overlap with the ‘giving you cash’.
I have looked, and I have seen under the Sun, that to those who try to defend themselves, more and more attacks will be given. Like, if you try to defend yourself, people sense that as a vulnerability, and they know they can demand even more concessions from you. I tried to avoid that failure mode in my responses, and apparently failed.
You are right on all counts. I’ll note that it perhaps didn’t help that you felt it was a time to defend rather than a time to assert and convey. It certainly isn’t necessary to respond to criticism directly. Sometimes it is better just to take the feedback into consideration and use anything of merit when working out your own strategy (as well as things that aren’t of merit but are still important, because you have to win over even stupid people).
I’ll build a Friendly AI for you if I can.
Thank you. I don’t necessarily expect you to succeed: the task is damn hard, it takes longer to do right than it takes someone else to fail, and we probably only have one shot to get it right. But you’re shutting up to do the impossible. Even if the odds are against us, targeting focus at the one alternative that doesn’t suck is the sane thing to do.
It was very clearly a mistake to have participated in this thread in the first place. It always is. Every single time. Other SIAI supporters who are better at that sort of thing can respond. I have to remember, now, that there are other people who can respond, and that there is no necessity for me to do it.
Yes.
In fact, someone really should have reminded me to shut up, and if it happens again, I hope someone will.
I will do so, since you have expressed willingness to hear it. That is an option I would much prefer to criticising any responses you make that I don’t find satisfactory. You’re trying to contribute to saving the goddamn world, and you have years of preparation behind you in some areas that nobody else has. You can free yourself up to get on with that while someone else explains how you can be useful.
I wish I could pull a Roko and just delete all my comments in all these threads, but that would be impolite.
The sentiment is good, but perhaps you could have left off the reminder of the Roko incident. The chips may have fallen somewhat differently in these threads if the ghost of Nearly-Headless Roko hadn’t been looming in the background.
Once again, this was an encouraging reply. Thank you.
Yes, the standards will keep going up.
And, if you draw closer to your goal, the standards you’re held to will dwarf what you see here. You’re trying to build God, for Christ’s sake. As the endgame approaches, more and more people are going to take that prospect more and more seriously, and there will be a shitstorm. You’d better believe that every last word you’ve written is going to be analysed and misconstrued, often by people with much less sympathy than you find here.
It’s a lot of pressure. I don’t envy you. All I can offer you is: =(
Much of what you say here sounds quite reasonable. Since many great scientists have lacked social grace, it seems to me that your PR difficulties have no bearing on your ability to do valuable research.
I think that the trouble arises from the fact that people (including myself) have taken you to be an official representative of SIAI. As long as SIAI makes it clear that your remarks do not reflect SIAI’s positions, there should be no problem.
Much of what you say here sounds quite reasonable. Since many great scientists have lacked social grace, it seems to me that your PR difficulties have no bearing on your ability to do valuable research.
There is an interesting question here. Does it make a difference if one of the core subjects of research (rationality) strongly suggests that different actions be taken, and the core research goal (creating, or influencing the creation of, an FAI) requires particular standards of ethics and rationality? For FAI research, behaviours that reflect ethically relevant decision-making and rational thinking under pressure do matter.
If you do research into ‘applied godhood’, then you can expect to be held to particularly high standards.
Yes, these are good points.
What I was saying above is that if Eliezer wants to defer to other SIAI staff then we should seek justification from them rather than from him. Maybe they have good reasons for thinking that it’s a good idea for him to do FAI research despite the issue that you mention.
I understand and I did vote your comment up. The point is relevant even if not absolutely so in this instance.
Is this now official LW slang?
Only if we need official LW slang for “Have rigid boundaries and take decisive action in response to an irredeemable defection in a game of community involvement”. I just mean to say that it may be better to use a different slang term for “exit without leaving a trace”, since for some “pull a Roko” would always prompt a feeling of regret. I wouldn’t bring up Roko at all (particularly any reference to the exit) because I want to leave the past in the past. I’m only commenting now because I don’t want it to, as you say, become official LW slang.
I’m probably alone on this one, but “charming modesty” usually puts me off. It gives me the impression of someone slimy and manipulative, or of someone without the rigor of thought necessary to accomplish anything interesting.
Well said.
In the past I’ve seen Eliezer respond to criticism very well. His responses seemed to be in good faith, even when abrasive. I use this signal as a heuristic for evaluating experts in fields I know little about. I’m versed in the area of existential risk reduction well enough not to need this heuristic, but I’m not versed in the area of the effectiveness of SIAI.
Eliezer’s recent responses have reduced my faith in SIAI, which, after all, is rooted almost solely in my impression of its members. This is a double stroke: my faith in Eliezer himself is reduced, and the cause of that reduction is a public display that will likely deter others from supporting SIAI, which is further evidence to me that SIAI won’t succeed.
SIAI (and Eliezer) still has a lot of credibility in my eyes, but I will be moving away from heuristics and looking for more concrete evidence as I debate whether to continue to be a supporter.
Tone matters, but would you really destroy the Earth because Eliezer was mean?
The issue seems to be not whether Eliezer personally behaves in an unpleasant manner, but rather how his responses influence predictions of just how much difference he will be able to make on the problem at hand. The reasoning Jordan implies is different from the one you suggest.
BTW, when people start saying not “You offended me, personally” but “I’m worried about how other people will react”, I usually take that as a cue to give up.
This too is not the key point in question. Your reaction here can be expected to correlate with other reactions you would make in situations not necessarily related to PR. In particular, it says something about potential responses to ego-damaging information. I am sure you can see how that would be relevant to estimates of the possible value (positive or negative) that you will contribute. I again disclaim that ‘influences’ does not mean ‘drastically influences’.
Note that the reply in the parent does relate to other contexts somewhat more than this one.
Maybe because most people who talk to you know that they are not typical and also know that others wouldn’t be as polite given your style of argumentation?
People who know they aren’t typical should probably realize that their simulation of typical minds will often be wrong. A corollary to Vinge’s Law, perhaps?
That’s still hard to see: that you could go from thinking that SIAI represents the most efficient form of altruistic giving to thinking that it no longer does, because Eliezer was rude to someone on LW. He has manifestly managed to persuade quite a few people of the importance of AI risk despite these disadvantages. I think you’d have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.
Rudeness isn’t the issue* so much as what the rudeness is used instead of. A terrible response in a place where a good response should be possible is information that should influence evaluations.
* Except, I suppose, PR implications of behaviour in public having some effect, but that isn’t something that Jordan’s statement needs to rely on.
I think you’d have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.
Obviously. But not every update needs to be an overwhelming one. Again, the argument you refute here is not one that Jordan made.
EDIT: I just saw Jordan’s reply; he did mean both of these points, but he also considered the part that I had included only as a footnote.
Yes, thank you for clarifying that.
Can you please be more precise about what you saw as the problem?
Yes. Your first line
I have difficulty taking this seriously. Someone else can respond to it.
was devoid of explicit information. It was purely negative.
Implicitly, I assume you meant that existential risk reduction is so important that no other ‘normal’ charity can compare in cost-effectiveness (utility bought per dollar). While I agree that existential risk reduction is insanely important, it doesn’t follow that SIAI is a good charity to donate to. SIAI may actually be hurting the cause (in one way, by hurting public opinion), and this is one of multi’s points. Your implicit statement seems to me to be a rebuke of this point sans evidence, amounting to simply saying nuh-uh.
You say
you don’t make clear what constitutes a sufficient level of transparency and accountability
which is a good point. But then you go on to say
though of course you will now carefully look over all of SIAI’s activities directed at transparency and accountability, and decide that the needed level is somewhere above that.
essentially accusing multi of a crime in rationality before he commits it. On a site devoted to rationality, that is a serious accusation. It’s understood on this site that there are a million ways to fail at rationality, and that we all will fail at one point or another; hence we rely on each other to point out failures, and even potential future failures. Your accusation goes beyond giving a friendly warning to prevent a bias. It’s an attack.
Your comment about taking bets and putting money on the line is a common theme around here, on OB, and on other similar forums/blogs. It’s neutral in tone to me (although I suspect negative in tone to some outside the community), but I find it distracting in the midst of a serious reply to serious objections. I want a debate, not a parlor trick to see if someone is really committed to an idea they are proposing. This is minor; it mostly extends the tone of the rest of the reply. In a different, friendlier context, I wouldn’t take note of it.
Finally,
I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like “And this is why if you’re trying to minimize existential risk, you should support a charity that tries to stop tuberculosis” or “And this is where we’re going to assume the worst possible case instead of the expected case and actually act that way”, he’ll just blithely keep going.
wedrifid has already commented on this point in this thread. What really jumps out at me here is the first part
my overall impression here is of someone who manages to talk mostly LW language most of the time
This just seems so utterly cliquey. “Hey, you almost fit in around here, you almost talk like we talk, but not quite. I can see the difference, and it’s exactly that little difference that sets you apart from us and makes you wrong.” The word “manages” seems especially negative in this context.
Thank you for your specifications, and I’ll try to keep them in mind!