Your posts on SIAI have had a veneer of evenhandedness and fairness, and that continues here. But given what you don’t say in your posts, I cannot avoid the impression that you started out with the belief that SIAI was not a credible charity and rather than investigating the evidence both for and against that belief, you have marshaled the strongest arguments against donating to SIAI and ignored any evidence in favor of donating to SIAI. I almost hesitate to link to EY lest you dismiss me as one of his acolytes, but see, for example, A Rational Argument.
In your top-level posts you have eschewed references to any of the publicly visible work that SIAI does such as the Summit and the presentation and publication of academic papers. Some of this work is described at this link to SIAI’s description of its 2009 achievements. The 2010 Summit is described here. As for Eliezer’s current project, at the 2009 achievements link, SIAI has publicized the fact that he is working on a book on rationality:
Yudkowsky is now converting his blog sequences into the planned rationality book, which he hopes will significantly assist in attracting and inspiring talented individuals to effectively work towards the aims of a beneficial Singularity and reduced existential risk.
You could have chosen to make part of your evaluation of SIAI an analysis of whether or not EY’s book will ultimately be successful in this goal or whether it’s the most valuable work that EY should be doing to reduce existential risk, but I’m not sure how his work on transforming the fully public LW sequences into a book is insufficiently transparent or not something for which he and SIAI can be held accountable when it is published.
Moreover, despite your professed interest in existential risk reduction and the references to the Future of Humanity Institute at Oxford in others’ comments on your posts, you suggest donating to GiveWell-endorsed charities as an alternative to SIAI donations without even a mention of FHI as a possible alternative in the field of existential risk reduction. Perhaps you find FHI equally non-credible/non-accountable as a charity, but whatever FHI’s failings, it’s hard to see how they are exactly the same ones which you have ascribed to SIAI. Perhaps you believe that if a charity has not been evaluated and endorsed by GiveWell, it can’t possibly be worthwhile. I can’t avoid the thought that if you were really interested in existential risk reduction, you would spend at least some tiny percentage of the time you’ve spent writing up these posts against SIAI on investigating FHI as an alternative.
I would be happy to engage with you or others on the site in a fair and unbiased examination of the case for and against SIAI (and/or FHI, the Foresight Institute, the Lifeboat Foundation, etc.). Although I may come across as strongly biased in favor of SIAI in this comment, I have my own concerns about SIAI’s accountability and public relations, and have had numerous conversations with those within the organization about those concerns. But with limited time on my hands and faced with such a one-sided and at times even polemical presentation from you, I find myself almost forced into the role of SIAI defender, so that I can at least provide some of the positive information about SIAI that you leave out.
I cannot avoid the impression that you started out with the belief that SIAI was not a credible charity and rather than investigating the evidence both for and against that belief, you have marshaled the strongest arguments against donating to SIAI and ignored any evidence in favor of donating to SIAI.
“If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.” -- Black Belt Bayesian
If multifoliaterose took the position of an advocatus diaboli, what would be wrong with that?
Although I always love a good quote from Black Belt Bayesian (a/k/a steven0461 a/k/a my husband), I think he’s on board with my interpretation of multifoliaterose’s posts. (At least, he’d better be!)
Going on to the substance, it doesn’t seem that multifoliaterose is just playing devil’s advocate here rather than arguing his actual beliefs – indeed everything he’s written suggests that he’s doing the latter. Beyond that, there may be a place at LW for devil’s advocacy (so long as it doesn’t cross the line into mere trolling, which multifoliaterose’s posts certainly do not). But I think that most aspiring rationalists (myself included) should still try to evaluate evidence for and against some position, and only tread into devil’s advocacy with extreme caution, since it is a form of argument where it is all too easy to lose sight of the ultimate goal of weighing the available evidence accurately.
Although I always love a good quote from Black Belt Bayesian (a/k/a steven0461 a/k/a my husband)
Wow, I managed to walk into the lion’s den there!
Going on to the substance, it doesn’t seem that multifoliaterose is just playing devil’s advocate here...
Yeah, I wasn’t actually thinking that to be the case either. But since nobody else seems to be following your husband’s advice... at least someone is trying to argue against the SIAI. Good criticism can be a good thing.
...and only tread into devil’s advocacy with extreme caution...
I see; I’ll take your word for it. I haven’t thought about it too much. Until now I thought your husband’s quote was universally applicable.
If multifoliaterose took the position of an advocatus diaboli, what would be wrong with that?
Multi has already refuted the opponent’s arguments; or at least, Eliezer more or less refuted them for him. Now it is time to do just what Black Belt Bayesian suggested and try to fix the SIAI’s arguments for them. Because advocacy—including devil’s advocacy—is mostly bullshit.
Remind SIAI of what they are clearly doing right, and also of just what a good presentation of their strengths would look like—who knows, maybe it’ll spur them on to achieve, in some measure, just the kind of changes you desire!
Interesting! Levels of epistemic accuracy:
(1) Truth
(2) Concealment
(3) Falsehood
(4) Bullshit (Not even wrong)
So while telling the truth is maximally accurate relative to your epistemic state, concealment is deception by misdirection, which is worse than the purest form of deception, lying (falsehood). Bullshit, however, is not even wrong.
I don’t see how devil’s advocacy fits into this, as I perceive it to be a temporary adjustment of someone’s mental angle in order to look back at one’s own position from a different point of view.

See my response to Jordan’s comment.
Perhaps you find FHI equally non-credible/non-accountable as a charity, but whatever FHI’s failings, it’s hard to see how they are exactly the same ones which you have ascribed to SIAI. Perhaps you believe that if a charity has not been evaluated and endorsed by GiveWell, it can’t possibly be worthwhile. I can’t avoid the thought that if you were really interested in existential risk reduction, you would spend at least some tiny percentage of the time you’ve spent writing up these posts against SIAI on investigating FHI as an alternative.
Hi Airedale,
Thanks for your thoughtful comments. I’m missing a good keyboard right now, so I can’t respond in detail, but I’ll make a few remarks.
I’m well aware that SIAI has done some good things. The reason why I’ve been focusing on the apparent shortcomings of SIAI is to encourage SIAI to improve its practices. I do believe that at the margin the issue worthy of greatest consideration is transparency and accountability, and I believe that this justifies giving to VillageReach over SIAI.
But I’m definitely open to donating to and advocating that others donate to SIAI and FHI in the future provided that such organizations clear certain standards for transparency and accountability and provide a clear and compelling case for room for more funding.
Again, I would encourage you (and others) who are interested in existential risk to write to the GiveWell staff requesting that GiveWell evaluate existential risk organizations including SIAI and FHI. I would like to see GiveWell do such work soon.
I do believe that at the margin the issue worthy of greatest consideration is transparency and accountability, and I believe that this justifies giving to VillageReach over SIAI.
What about everything else that isn’t the margin? What is your expected value of SIAI’s public accomplishments, to date, in human lives saved? What is that figure for VillageReach? Use pessimistic figures for SIAI and optimistic ones for VillageReach if you must, but come up with numbers and then multiply them. Your arguments are not consistent with expected utility maximization.
You would be much better off if you were directly offering SIAI financial incentives to improve the expected value of its work. Donating to VillageReach is not the optimal use of money for maximizing what you value.
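A minimal sketch of the multiplication being asked for here, using purely hypothetical placeholder figures (they are not anyone’s actual estimates for SIAI or VillageReach); the point is only the shape of the comparison, not the numbers:

```python
# Toy expected-value comparison. All figures are hypothetical placeholders,
# not actual estimates for either organization.

def expected_lives_saved(p_success: float, lives_if_successful: float) -> float:
    """Expected lives saved = probability the work pays off times lives saved if it does."""
    return p_success * lives_if_successful

# Deliberately pessimistic placeholder for SIAI: a one-in-a-billion chance of
# averting a catastrophe that would otherwise cost billions of lives.
siai_ev = expected_lives_saved(p_success=1e-9, lives_if_successful=7e9)

# Deliberately optimistic placeholder for VillageReach: near-certain impact,
# a handful of lives saved per comparable donation.
villagereach_ev = expected_lives_saved(p_success=0.9, lives_if_successful=5.0)

print(f"SIAI (toy): {siai_ev:.1f} expected lives")
print(f"VillageReach (toy): {villagereach_ev:.1f} expected lives")
```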
You would be much better off if you were directly offering SIAI financial incentives to improve the expected value of its work. Donating to VillageReach is not the optimal use of money for maximizing what you value.
You may well be right about this; I’ll have to think some more :-). Thanks for raising this issue.
You’ve provided reasons for why you are skeptical of the ability of SIAI to reduce existential risk. It’s clear you’ve dedicated a good amount of effort to your investigation.
Why are you content to leave the burden of investigating FHI’s abilities to GiveWell, rather than investigate yourself, as you have with SIAI?
Why are you content to leave the burden of investigating FHI’s abilities to GiveWell, rather than investigate yourself, as you have with SIAI?
The reason that I have not investigated FHI is simply that I have not gotten around to doing so. I do plan to change this soon. I investigated SIAI first because I came into contact with SIAI before I came into contact with FHI.
My initial reaction to FHI is that it looks highly credible to me, but that I doubt that it has room for more funding. However, I look forward to looking more closely into this matter in the hopes of finding a good opportunity for donors to lower existential risk.
You should definitely do research to confirm this on your own, but the last I heard (somewhat informally, through the grapevine) was that FHI does indeed have room for more funding, for example, in the form of funding for an additional researcher or post-doc to join their team. You can then evaluate whether an additional academic trying to research and publish in these areas would be useful, but given how small the field currently is, my impression would be that an additional such academic would probably be helpful.
Thanks for the info.

I’m well aware that SIAI has done some good things. The reason why I’ve been focusing on the apparent shortcomings of SIAI is to encourage SIAI to improve its practices. I do believe that at the margin the issue worthy of greatest consideration is transparency and accountability
I would very much like to see what positive observations you have made during your research into the SIAI. I know that you believe there is plenty of potential—there would be no reason to campaign for improvements if you didn’t see the chance that it would make a difference. That’d be pointless or even counter-productive for your interests, given that it certainly doesn’t win you any high-status friends!
How about you write a post on a different issue regarding SIAI or FAI in general using the same standard of eloquence that you have displayed?
wedrifid—Thanks for your kind remarks. As I said in my top level post, I’ll be taking a long break from LW. As a brief answer to your question:
(a) I think that Eliezer has inspired people (including myself) to think more about existential risk and that this will lower existential risk. I thank Eliezer for this.
(b) I think that Less Wrong has provided a useful venue for smart people (of a certain kind) to network and find friends and that this too will lower existential risk.
(c) Most of what I know about the good things that SIAI has done on an institutional level is from Carl Shulman. You might like to ask him for more information.