SIAI does not presently exhibit high levels of transparency and accountability… For this reason together with the concerns which I express about Existential Risk and Public Relations, I believe that at present GiveWell’s top ranked charities VillageReach and StopTB are better choices than SIAI
I have difficulty taking this seriously. Someone else can respond to it.
agree with what I interpret to be Dario’s point above: that in evaluating charities which are not transparent and accountable, we should assume the worst.
Assuming that much of the worst isn’t rational. It would be a convenient soldier for your argument, but it’s not the odds to bet at. Also, you don’t make clear what constitutes a sufficient level of transparency and accountability, though of course you will now carefully look over all of SIAI’s activities directed at transparency and accountability, and decide that the needed level is somewhere above that.
You say you assume the worst, and that other people should act accordingly. Would you care to state “the worst”, your betting odds on it, how much you’re willing to bet, and what neutral third party you would accept as providing the verdict if they looked over SIAI’s finances and told you it wasn’t true? If you offer us enough free money, I’ll vote for taking it.
I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like “And this is why if you’re trying to minimize existential risk, you should support a charity that tries to stop tuberculosis” or “And this is where we’re going to assume the worst possible case instead of the expected case and actually act that way”, he’ll just blithely keep going.
With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent, and I have every confidence that whatever it is we do, it will never be enough for someone who is, at that particular time, motivated to argue against SIAI.
Normally I refrain from commenting about the tone of a comment or post, but the discussion here revolves partly around the public appearance of SIAI, so I’ll say:
this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I’ve read or seen.
This isn’t a comment about the content of your response, which I think has valid points (and which multifoliaterose has at least partly responded to).
this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I’ve read or seen.
It is certainly Eliezer’s responses and not multi’s challenges which are the powerful influence here. Multi has effectively given Eliezer a platform from which to advertise the merits of SIAI, as well as demonstrate that, contrary to suspicions, Eliezer is, in fact, able to handle situations in accordance with his own high standards of rationality despite the influences of his ego. This is not what I’ve seen recently. He has focussed on retaliation against multi at whatever weak points he can find and largely neglected to do what will win. Winning in this case would be demonstrating exactly why people ought to trust him to be able to achieve what he hopes to achieve (by which I mean ‘influence’, not ‘guarantee’, FAI protection of humanity).
I want to see more of this:
With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent
and less of this:
I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like “And this is why if you’re trying to minimize existential risk, you should support a charity that tries to stop tuberculosis” or “And this is where we’re going to assume the worst possible case instead of the expected case and actually act that way”, he’ll just blithely keep going.
I’ll leave aside ad hominem and note that tu quoque isn’t always fallacious. Unfortunately, in this case it is, in fact, important that Eliezer doesn’t fall into the trap that he accuses multi of—deploying arguments as mere soldiers.
This sort of conversation just makes me feel tired. I’ve had debates before about my personal psychology and feel like I’ve talked myself out about all of them. They never produced anything positive, and I feel that they were a bad sign for the whole mailing list they appeared on—I would be horrified to see LW go the way of SL4. The war is lost as soon as it starts—there is no winning move. I feel like I’m being held to an absurdly high standard, being judged as though I were trying to be the sort of person that people accuse me of thinking I am, that I’m somehow supposed to produce exactly the right mix of charming modesty while still arguing my way into enough funding for SIAI… it just makes me feel tired, and like I’m being held to a ridiculously high standard, and that it’s impossible to satisfy people because the standard will keep going up, and like I’m being asked to solve PR problems that I never signed up for. I’ll solve your math problems if I can, I’ll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one, or better yet, why don’t you try being perfect and see whether it’s as easy as it sounds while you’re handing out advice?
I have looked, and I have seen under the Sun, that to those who try to defend themselves, more and more attacks will be given. Like, if you try to defend yourself, people sense that as a vulnerability, and they know they can demand even more concessions from you. I tried to avoid that failure mode in my responses, and apparently failed. So let me state it plainly for you. I’ll build a Friendly AI for you if I can. Anything else I can do is a bonus. If I say I can’t do it, asking me again isn’t likely to produce a different answer.
It was very clearly a mistake to have participated in this thread in the first place. It always is. Every single time. Other SIAI supporters who are better at that sort of thing can respond. I have to remember, now, that there are other people who can respond, and that there is no necessity for me to do it. In fact, someone really should have reminded me to shut up, and if it happens again, I hope someone will. I wish I could pull a Roko and just delete all my comments in all these threads, but that would be impolite.
I really appreciate this response. In fact, to mirror Jordan’s pattern I’ll say that this comment has done more to raise my confidence in SIAI than anything else in the recent context.
I’ll solve your math problems if I can, I’ll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one
I’m working on it, to within the limits of my own entrepreneurial ability and the costs of serving my own personal mission. Not that I would allocate such funds to a PR person. I would prefer to allocate them to research of the ‘publish in traditional journals’ kind. If I were in the business of giving advice, I would give the same advice you have no doubt heard 1,000 times: the best thing that you personally could do for PR isn’t to talk about SIAI but to get peer-reviewed papers published. Even though academia is far from perfect, riddled with biases and perhaps inclined to have a certain resistance to your impingement, it is still important.
or better yet, why don’t you try being perfect and see whether it’s as easy as it sounds while you’re handing out advice?
Now, now, I think the ‘give us the cash’ helps you out rather a lot more than me being perfect. Mind you, me following tsuyoku naritai does rather overlap with the ‘giving you cash’.
I have looked, and I have seen under the Sun, that to those who try to defend themselves, more and more attacks will be given. Like, if you try to defend yourself, people sense that as a vulnerability, and they know they can demand even more concessions from you. I tried to avoid that failure mode in my responses, and apparently failed.
You are right on all counts. I’ll note that it perhaps didn’t help that you felt it was a time to defend rather than a time to assert and convey. It certainly isn’t necessary to respond to criticism directly. Sometimes it is better to just take the feedback into consideration and use anything of merit when working out your own strategy. (As well as things that aren’t of merit but are still important because you have to win over even stupid people).
I’ll build a Friendly AI for you if I can.
Thank you. I don’t necessarily expect you to succeed because the task is damn hard, takes longer to do right than for someone else to fail, and we probably only have one shot to get it right. But you’re shutting up to do the impossible. Even if the odds are against us, targeting focus at the one alternative that doesn’t suck is the sane thing to do.
It was very clearly a mistake to have participated in this thread in the first place. It always is. Every single time. Other SIAI supporters who are better at that sort of thing can respond. I have to remember, now, that there are other people who can respond, and that there is no necessity for me to do it.
Yes.
In fact, someone really should have reminded me to shut up, and if it happens again, I hope someone will.
I will do so, since you have expressed willingness to hear it. That is an option I would much prefer to criticising any responses you make that I don’t find satisfactory. You’re trying to contribute to saving the goddam world and have years of preparation behind you in some areas that nobody else has. You can free yourself up to get on with that while someone else explains how you can be useful.
I wish I could pull a Roko and just delete all my comments in all these threads, but that would be impolite.
The sentiment is good but perhaps you could have left off the reminder of the Roko incident. The chips may have fallen somewhat differently in these threads if the ghost of Nearly-Headless Roko wasn’t looming in the background.
Once again, this was an encouraging reply. Thank you.
Yes, the standards will keep going up.

And, if you draw closer to your goal, the standards you’re held to will dwarf what you see here. You’re trying to build god, for Christ’s sake. As the end game approaches, more and more people are going to be taking that prospect more and more seriously, and there will be a shit storm. You’d better believe that every last word you’ve written is going to be analysed and misconstrued, often by people with much less sympathy than us here.
It’s a lot of pressure. I don’t envy you. All I can offer you is: =(
Much of what you say here sounds quite reasonable. Since many great scientists have lacked social grace, it seems to me that your PR difficulties have no bearing on your ability to do valuable research.
I think that the trouble arises from the fact that people (including myself) have taken you to be an official representative of SIAI. As long as SIAI makes it clear that your remarks do not reflect SIAI’s positions, there should be no problem.
Much of what you say here sounds quite reasonable. Since many great scientists have lacked social grace, it seems to me that your PR difficulties have no bearing on your ability to do valuable research.
There is an interesting question here. Does it make a difference if one of the core subjects of research (rationality) strongly suggests different actions be taken, and the core research goal (creating or influencing the creation of an FAI) requires particular standards in ethics and rationality? For FAI research, behaviours that reflect ethically relevant decision making and rational thinking under pressure matter.
If you do research into ‘applied godhood’ then you can be expected to be held to particularly high standards.
Yes, these are good points.

What I was saying above is that if Eliezer wants to defer to other SIAI staff then we should seek justification from them rather than from him. Maybe they have good reasons for thinking that it’s a good idea for him to do FAI research despite the issue that you mention.

I understand and I did vote your comment up. The point is relevant even if not absolutely so in this instance.

Is this now official LW slang?
Only if we need official LW slang for “Have rigid boundaries and take decisive action in response to an irredeemable defection in a game of community involvement”. I just mean to say it may be better to use a different slang term for “exit without leaving a trace” since for some “pull a Roko” would always prompt a feeling of regret. I wouldn’t bring up Roko at all (particularly reference to the exit) because I want to leave the past in the past. I’m only commenting now because I don’t want it to, as you say, become official LW slang.
I’m probably alone on this one, but “charming modesty” usually puts me off. It gives me the impression of someone slimy and manipulative, or of someone without the rigor of thought necessary to accomplish anything interesting.

Well said.
In the past I’ve seen Eliezer respond to criticism very well. His responses seemed to be in good faith, even when abrasive. I use this signal as a heuristic for evaluating experts in fields I know little about. I’m versed in the area of existential risk reduction well enough not to need this heuristic, but I’m not versed in the area of the effectiveness of SIAI.
Eliezer’s recent responses have reduced my faith in SIAI, which, after all, is rooted almost solely in my impression of its members. This is a double stroke: my faith in Eliezer himself is reduced, and the reason for this is a public appearance which will likely prevent others from supporting SIAI, which is more evidence to me that SIAI won’t succeed.
SIAI (and Eliezer) still has a lot of credibility in my eyes, but I will be moving away from heuristics and looking for more concrete evidence as I debate whether to continue to be a supporter.
Tone matters, but would you really destroy the Earth because Eliezer was mean?
The issue seems to be not whether Eliezer personally behaves in an unpleasant manner but rather how Eliezer’s responses influence predictions of just how much difference Eliezer will be able to make on the problem at hand. The implied reasoning Jordan makes is different to the one you suggest.
BTW, when people start saying, not, “You offended me, personally” but “I’m worried about how other people will react”, I usually take that as a cue to give up.
This too is not the key point in question. Your reaction here can be expected to correlate with other reactions you would make in situations not necessarily related to PR. In particular, it says something about potential responses to ego-damaging information. I am sure you can see how that would be relevant to estimates of the possible value (positive or negative) that you will contribute. I again disclaim that ‘influences’ does not mean ‘drastically influences’.
Note that the reply in the parent does relate to other contexts somewhat more than this one.
Maybe because most people who talk to you know that they are not typical and also know that others wouldn’t be as polite given your style of argumentation?
People who know they aren’t typical should probably realize that their simulation of typical minds will often be wrong. A corollary to Vinge’s Law, perhaps?
That’s still hard to see—that you could go from thinking that SIAI represents the most efficient form of altruistic giving to thinking that it no longer does, because Eliezer was rude to someone on LW. He’s manifestly managed to persuade quite a few people of the importance of AI risk despite these disadvantages—I think you’d have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.
Rudeness isn’t the issue* so much as what the rudeness is used instead of. A terrible response in a place where a good response should be possible is information that should have influence on evaluations.
* Except, I suppose, PR implications of behaviour in public having some effect, but that isn’t something that Jordan’s statement needs to rely on.
I think you’d have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.
Obviously. But not every update needs to be an overwhelming one. Again, the argument you refute here is not one that Jordan made.
EDIT: I just saw Jordan’s reply; he did mean both these points, but he also considered the part that I had included only as a footnote.

Yes, thank you for clarifying that.

Can you please be more precise about what you saw as the problem?

Yes. Your first line
I have difficulty taking this seriously. Someone else can respond to it.
was devoid of explicit information. It was purely negative.
Implicitly, I assume you meant that existential risk reduction is so important that no other ‘normal’ charity can compare in cost effectiveness (utility bought per dollar). While I agree that existential risk reduction is insanely important, it doesn’t follow that SIAI is a good charity to donate to. SIAI may actually be hurting the cause (in one way, by hurting public opinion), and this is one of multi’s points. Your implicit statement seems to me to be a rebuke of this point sans evidence, amounting to simply saying nuh-uh.
You say
you don’t make clear what constitutes a sufficient level of transparency and accountability
which is a good point. But then go on to say
though of course you will now carefully look over all of SIAI’s activities directed at transparency and accountability, and decide that the needed level is somewhere above that.
essentially accusing multi of a crime in rationality before he commits it. On a site devoted to rationality that is a serious accusation. It’s understood on this site that there are a million ways to fail in rationality, and that we all will fail, at one point or another, hence we rely on each other to point out failures and even potential future failures. Your accusation goes beyond giving a friendly warning to prevent a bias. It’s an attack.
Your comment about taking bets and putting money on the line is a common theme around here and OB and other similar forums/blogs. It’s neutral in tone to me (although I suspect negative in tone to some outside the community), but I find it distracting in the midst of a serious reply to serious objections. I want a debate, not a parlor trick to see if someone is really committed to an idea they are proposing. This is minor, it mostly extends the tone of the rest of the reply. In a different, friendlier context, I wouldn’t take note of it.
Finally,
I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like “And this is why if you’re trying to minimize existential risk, you should support a charity that tries to stop tuberculosis” or “And this is where we’re going to assume the worst possible case instead of the expected case and actually act that way”, he’ll just blithely keep going.
wedrifid has already commented on this point in this thread. What really jumps out at me here is the first part
my overall impression here is of someone who manages to talk mostly LW language most of the time
This just seems so utterly cliquey. “Hey, you almost fit in around here, you almost talk like we talk, but not quite. I can see the difference, and it’s exactly that little difference that sets you apart from us and makes you wrong.” The word “manages” seems especially negative in this context.

Thank you for your specifications, and I’ll try to keep them in mind!
I’ll reply in more detail later, but for now I’ll just say
With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent
This sounds great.
I have every confidence that whatever it is we do, it will never be enough for someone who is, at that particular time, motivated to argue against SIAI.
I’m not one of those people. I would be happy to donate to SIAI and encourage others to do so if I perceive a significant change from the status quo and no new superior alternative emerges. I think that if SIAI were well constituted, donating to it would be much more cost effective than VillageReach.
I would be thrilled to see the changes that I would like to see take place. More on precisely what I’m looking for to follow.
I think that if SIAI were well constituted, donating to it would be much more cost effective than VillageReach.
For most realistic interest rates, this statement would have made it more rational to put your previous traditional aid donation into a banking account for a year to see if your bet had come out—and then donate to SIAI.

I donate now using GiveWell to signal that I care about transparency and accountability to incentivize charities to improve.
You would have done that by putting the money in a DAF and announcing your policy, rather than providing incentives in what you say is the wrong field. You’re signaling that you will use money wastefully (in its direct effects) by your own standards rather than withhold it until a good enough recipient emerges by your standards.
You’re signaling that you will use money wastefully (in its direct effects) by your own standards
This is an effective signal. It sounds like multi considers changing the current standard of existential risk charities from ineffective to effective far more important than boosting the income of said class of charities. This being the case, showing that there is a potential supply of free money for any existential-risk-focussed charity that meets the standards multi considers important is itself worthwhile.
As well as influencing the world in the short term and satisfying his desire to signal and feel generous, donating to GiveWell effectively shows that multi is willing to put his money where his mouth is. He knows that if he wasn’t currently donating, Eliezer would use that as another excuse to avoid the issue, since Eliezer has previously declared that to be his policy.

DAF?
Donor-Advised Fund, a vehicle which allows one to place money into an investment account where it can only be used to make charitable donations. It allows you to commit yourself to giving (you can’t take the money back to spend on beer) even if you don’t know what’s best now, or it can allow you to accumulate donations over time so that you can get leverage with the simultaneous disbursal of a big chunk. Here’s the Fidelity DAF site.

I like the site!

The limits may be a little frustrating for some potential users… you need to start with US$5,000. (And pay a minimum of $60 per year in fees).

What exactly do you mean by leverage?
When you’re giving a large donation all at once, the transaction costs of meeting your demands are smaller relative to the gains, and the transaction costs of doing nonstandard donations (e.g. teaming up with others to create a whole new program or organization) are more manageable.
I don’t get it, why would you want to make demands? Isn’t the point of donating that you think others are better positioned to accomplish goal X than you are, so they’re able to make more efficient use of the money?
E.g. demands for work to be done providing information to you, or to favor a specific project (although the latter is more murky with fungibility issues).
Charities sometimes favor the work they believe to be popular with donors over the work they believe would be more useful. Specifically, I’m thinking of monitoring and evaluation. By designating money for unpopular but useful tasks, you encourage them to fund those tasks better. Before doing this, I would talk to the organizations you’re considering funding and find out what unsexy projects they would like to fund more. Then decide if you think they’re worth funding.

This doesn’t sound like a bad idea. Could someone give reasons to think that donations to SIAI now would be better than this?
In your specific case, given what you have said about your epistemic state, I would think that you subjectively-ought to do something like this (a commitment mechanism, but not necessarily with a commitment to reducing existential risk given your normative uncertainty). I’ll have more to say about the general analysis in 48 hours or more, following a long flight from Australia.

Does “this” mean DAF, or signalling through waste?
You would have done that by putting the money in a DAF and announcing your policy, rather than providing incentives in what you say is the wrong field. You’re signaling that you will use money wastefully (in its direct effects) by your own standards rather than withhold it until a good enough recipient emerges by your standards.
One problem with this is that skeptics like Eliezer would assume that multifoliaterose will just move the goal posts when it comes time to pay.
Fortunately, a specific public commitment does at least help keep things grounded in more than rhetoric. Even assuming no money eventuates, there is value in clearly meeting the challenge to the satisfaction of observers.
I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like “And this is why if you’re trying to minimize existential risk, you should support a charity that tries to stop tuberculosis” or “And this is where we’re going to assume the worst possible case instead of the expected case and actually act that way”, he’ll just blithely keep going.
Are you reading multifoliaterose carefully? He has made neither of these claims.
He said that supporting a tuberculosis charity is better than donating to SIAI, not that supporting a tuberculosis charity is the best way to fight existential risk.
And he hasn’t advocated using something other than the expected case when evaluating a non-transparent charity. What you may infer is that he believes that the worst case does not significantly differ from the expected case in the context of the amount of money that he would donate. That belief may not be realistic, but it’s not the belief that you impute to him.
He said that supporting a tuberculosis charity is better than donating to SIAI, not that supporting a tuberculosis charity is the best way to fight existential risk.
I hesitate to point to language from an earlier version of the post, since multifoliaterose has taken out this language, but given that EY was responding to the earlier version, it seems fair. The original post included the following language:
I believe that at present GiveWell’s top ranked charities VillageReach and StopTB are better choices than SIAI, even for donors like utilitymonster who take astronomical waste seriously and believe in the ideas expressed in the cluster of blog posts linked under Shut Up and multiply.
(emphasis added)
I believe there originally may have been some links there, but I don’t have them anymore. Nonetheless, if I correctly understand the references to utilitymonster, astronomical waste, and Shut Up and multiply, I do think that multifoliaterose was arguing that even the sorts of donors most interested in minimizing existential risk should still give to those other charities. Does that reading seem wrong?
Here is my reading: Even in the case of utilitymonster,
his/her concern about tuberculosis (say) in the near term is high enough, and
SIAI’s chances of lowering existential risk by a sufficient amount are low enough,
to imply that utilitymonster would get more expected utility from donating to StopTB than from donating to SIAI.
Also, multi isn’t denying that utilitymonster’s money would be better spent in some third way that directly pertains to existential risk. (However, such a denial may be implicit in multi’s own decision to give to GiveWell’s charities, depending on why he does it.)
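To make the comparison implicit in that reading explicit, here is a minimal expected-value sketch; the symbols below are hypothetical placeholders of mine, not quantities anyone in the thread has stated:

```latex
% Hypothetical placeholders (nothing in the thread fixes these values):
%   u_TB : expected good done per dollar given to StopTB (near-term benefit)
%   p    : probability that a marginal dollar to SIAI yields a sufficient
%          reduction in existential risk
%   U_x  : value of that reduction in existential risk
% The reading above amounts to the claim that, for utilitymonster,
\[
  \underbrace{u_{\mathrm{TB}}}_{\text{expected value of StopTB, per dollar}}
  \;>\;
  \underbrace{p \cdot U_{x}}_{\text{expected value of SIAI, per dollar}}
\]
% which can hold even when U_x is astronomically large, provided p is judged
% small enough and u_TB large enough.
```

This is only a restatement of the verbal reading above, not a formulation that either commenter has endorsed in these terms.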
I don’t know that we disagree very much, but I don’t want to lose sight of the original issue as to whether EY’s characterization accurately reflected what multifoliaterose was saying. I think we may agree that it takes an extra step in interpreting multifoliaterose’s post to get to EY’s characterization, and that there may be sufficient ambiguity in the original post such that not everyone would take that step:
Also, multi isn’t denying that utilitymonster’s money would be better spent in some third way that directly pertains to existential risk. (However, such a denial may be implicit in multi’s own decision to give to GiveWell’s charities, depending on why he does it.)
I did implicitly read such a denial into the original post. As Carl noted:
The invocation of VillageReach in addressing those aggregative utilitarians concerned about astronomical waste here seems baffling to me.
For me, the references to the Givewell-approved charities and the lack of references to alternate existential risk reducing charities like FHI seemed to suggest that multifoliaterose was implicitly denying the existence of a third alternative. Perhaps EY read the post similarly.
For me, the references to the Givewell-approved charities and the lack of references to alternate existential risk reducing charities like FHI seemed to suggest that multifoliaterose was implicitly denying the existence of a third alternative. Perhaps EY read the post similarly.
I agree that this is the most probable meaning. The only other relevant consideration I know of is multi’s statement upstream that he uses GiveWell in part to encourage transparency in other charities. Maybe he sees this as a way to encourage existential-risk charities to do better, making them more likely to succeed.

Well, since multifoliaterose himself has been giving all of his charitable contributions to VillageReach, it’s a sensible reading.
And he hasn’t advocated using something other than the expected case when evaluating a non-transparent charity. What you may infer is that he believes that the worst case does not significantly differ from the expected case
That’s not what Multi said. He said we should assume the worst. You only need to assume something when you know that belief would be useful even though you don’t believe it. So he clearly doesn’t believe the worst (or if he does, he hasn’t said so).
He said that supporting a tuberculosis charity is better than donating to SIAI, not that supporting a tuberculosis charity is the best way to fight existential risk.
He also said that he “believe[s] that reducing existential risk is ultimately more important than developing world aid.” How do you go from there to supporting StopTB over SIAI, unless you believe the worst?
And he hasn’t advocated using something other than the expected case when evaluating a non-transparent charity. What you may infer is that he believes that the worst case does not significantly differ from the expected case
That’s not what Multi said. He said we should assume the worst. You only need to assume something when you know that belief would be useful even though you don’t believe it. So he clearly doesn’t believe the worst (or if he does, he hasn’t said so).
I don’t use the word “assume” in the way that you describe, and I would be surprised if multi were.
He also said that he “believe[s] that reducing existential risk is ultimately more important than developing world aid.” How do you go from there to supporting StopTB over SIAI, unless you believe the worst?
Here I think we more-or-less agree. On my reading, multi is saying that, right now, the probability that SIAI is a money pit is high enough to outweigh the good that SIAI would do if it weren’t a money pit, relative to a tuberculosis charity. But multi is also saying that this probability assignment is unstable, so that some reasonable amount of evidence would lead him to radically reassign his probabilities.
I have difficulty taking this seriously. Someone else can respond to it.
Transparency requires reevaluation if we are talking about a small group of people who are responsible for giving the thing that will control the fate of the universe, and all entities in it, its plan of action. That is, if you were able to evaluate that this is (1) necessary, (2) possible, and (3) that you are the right people for the job.
This lack of transparency could make people think that (1) this is bullshit, (2) it’s impossible to do anyway, (3) you might apply a CEV of the SIAI rather than of humanity, and (4) given (3), the utility payoff is higher from donating to GiveWell’s top-ranked charities.
I think that would be best, actually.