Where I’ve Changed My Mind on My Approach to Speculative Causes
Follow-up to Why I’m Skeptical About Unproven Causes (And You Should Be Too)
Previously, I wrote “Why I’m Skeptical About Unproven Causes (And You Should Be Too)” and a follow-up essay, “What Would It Take to Prove a Speculative Cause?”. Both sparked a lot of discussion on LessWrong, on the Effective Altruist blog, and on my own blog, as well as many hours of in-person conversation.
After all this extended conversation with people, I’ve changed my mind on a few things that I will elaborate here. I hope in doing so I can (1) clarify my original position and (2) explain where I now stand in light of all the debate so people can engage with my current ideas as opposed to the ideas I no longer hold. My opinions on things tend to change quickly, so I think updates like this will help.
My Argument, As It Currently Stands
If I were to communicate one main point of my essay, based on what I believe now, it would be this: when you’re in a position of high uncertainty, the best response is a strategy of exploration rather than a strategy of exploitation.
What I mean by this is that given the high uncertainty of impact we see now, especially with regard to the far future, we’re better off trying to find more information about impact and reduce our uncertainty (exploration) rather than pursuing whatever we think is best (exploitation).
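To make the explore/exploit framing concrete, here is a toy simulation (entirely made-up numbers, not a model of any real cause or a recommendation about how anyone should donate): two causes have some unknown impact per dollar, and we only ever see noisy estimates of it. A donor who forms a best guess and commits to it can lock in on the worse cause; a donor who keeps spending a small fraction of resources gathering more evidence tends to do more good in total.

```python
# Toy model of two donors facing two causes with unknown impact per dollar
# (all numbers are invented; this only illustrates the explore/exploit tradeoff).
import random

TRUE_IMPACTS = (1.0, 3.0)   # actual good done per dollar, never observed directly
NOISE = 2.0                 # how noisy our impact estimates are
ROUNDS = 1000               # unit donations made over time

def committed_donor(rng):
    """Form one noisy estimate per cause, then back the apparent best forever."""
    estimates = [rng.gauss(mu, NOISE) for mu in TRUE_IMPACTS]
    best_guess = estimates.index(max(estimates))
    return ROUNDS * TRUE_IMPACTS[best_guess]

def exploring_donor(rng, explore_fraction=0.1):
    """Same start, but keep spending a fraction of donations gathering evidence."""
    estimates = [rng.gauss(mu, NOISE) for mu in TRUE_IMPACTS]
    counts = [1, 1]
    total = 0.0
    for _ in range(ROUNDS):
        if rng.random() < explore_fraction:
            choice = rng.randrange(len(TRUE_IMPACTS))      # explore a random cause
        else:
            choice = estimates.index(max(estimates))       # exploit the current best guess
        observed = rng.gauss(TRUE_IMPACTS[choice], NOISE)  # noisy new evidence
        counts[choice] += 1
        estimates[choice] += (observed - estimates[choice]) / counts[choice]
        total += TRUE_IMPACTS[choice]
    return total

trials = 500
print("committed:", sum(committed_donor(random.Random(s)) for s in range(trials)) / trials)
print("exploring:", sum(exploring_donor(random.Random(s)) for s in range(trials)) / trials)
```

The specific numbers don’t matter; with the ones above, the committed donor backs the wrong cause roughly a quarter of the time, and that is where the gap in total impact comes from.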
The implications of this are that:
We should develop more of an attitude that our case for impact is neither clear nor proven.
We should apply more skepticism to our causes and more self-skepticism to our personal beliefs about impact.
We should use the language of “information” and “exploration” more often than the language of “impact” and “exploitation”.
We should focus more on specific and concrete ways to ensure we’re making progress and to figure out our impact (whether through surveys, experiments, soliciting external review from relevant experts, etc.).
We should focus more on transparency about what we’re doing and thinking and why, when relevant and not exceedingly costly.
And to be clear, here are specific statements that address misconceptions about what I have argued:
I do think it is wrong to ignore unproven causes completely and stop pursuing them.
I don’t think we should be donating everything to the Against Malaria Foundation instead of speculative causes.
I don’t think the Against Malaria Foundation has the highest impact of all current opportunities to donate.
I do think we can say useful things about the far future.
I don’t think the correct way to respond to high uncertainty and low evidence is to “suspend judgement”. Rather, I think we should make the judgement that the true value is likely to be much lower than initially claimed, in light of what I’ve said earlier about the track record of past cost-effectiveness estimates (the sketch below shows the kind of adjustment I have in mind).
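Here is that sketch (a toy normal-normal model with invented numbers, not a claim about any particular cause): if our prior, informed by how past cost-effectiveness estimates have fared, says most interventions deliver modest impact, and the headline estimate is very noisy, the judgement we end up with sits much closer to the prior than to the headline number.

```python
# Toy Bayesian adjustment of a noisy cost-effectiveness estimate
# (normal-normal model; all numbers are invented for illustration).
def adjusted_estimate(claimed, claimed_sd, prior_mean, prior_sd):
    """Precision-weighted average of a skeptical prior and a noisy claim."""
    prior_precision = 1.0 / prior_sd ** 2
    claim_precision = 1.0 / claimed_sd ** 2
    return ((prior_mean * prior_precision + claimed * claim_precision)
            / (prior_precision + claim_precision))

# A cause claims 100 units of good per dollar, but the estimate is very noisy,
# while past estimates suggest most causes deliver something nearer 1-10.
print(adjusted_estimate(claimed=100, claimed_sd=300, prior_mean=5, prior_sd=10))
# -> about 5.1: far below the headline claim, but not a suspension of judgement.
```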
And, lastly, if I were to make a second important point, it would be that it’s difficult to find good opportunities to buy information. It’s easy to think that any donation to an organization will generate good information, or that we’ll automatically make progress just by working. I think some element of random pursuit is important (see below), but all things considered I think we’re doing too much random pursuit right now.
Specific Things I Changed My Mind About
Here are the specific places I changed my mind on:
I used to think that donating at least part of my money to AMF was important. Now I don’t.
I underestimated the power of exploration and the opportunities for it that already exist, so I now think 100% of my donations should go toward trying to assess impact. I’ve been persuaded that quite a lot of money is already going to AMF and that more money may not be needed there as quickly as thought, so for the time being it’s probably better to save and then donate to opportunities to buy information as they come up.
I now agree that there are relevant economies of scale in pursuing information that I hadn’t taken into account.
What I mean is that it might not make sense for individuals to purchase information themselves; that could end up needlessly splitting the time of organizations as they provide information to many different people. Also, many people don’t have the time to do this themselves.
I think this has two implications:
We should put more trust in larger-scale organizations that are already doing this exploration, like GiveWell, and pool our resources.
Individuals should work harder to put the relevant information they do gather online.
I was partly mistaken in how I thought about “proving” speculative causes.
I think there was some value in my essay “What Would It Take to Prove a Speculative Cause?” because it talked concretely about strategies some organizations could take to get more information about their impact.
But the overall concept was mistaken: there is no threshold of evidence that a speculative cause needs to cross, and I was wasting my time trying to come up with one. Instead, I think it’s appropriate to continue doing expected value calculations as long as we maintain a self-skeptical, pro-measurement mindset.
I had previously not fully taken into account the cost of acquiring further information.
The important question about the value of information is not “what does this information get me in terms of changing my beliefs and actions?” but “how valuable is this information?”, that is, do the benefits of gathering it outweigh all the costs? In some cases, I think the benefits of further proving a cause probably don’t outweigh the costs (the toy calculation below makes this concrete).
For one possibly extreme example: while I don’t know the rationale for running a 23rd randomized controlled trial on anti-malaria bednets after the previous 22, that trial would likely need to test something more specific than the general effectiveness of bednets to justify the high cost of an RCT.
Likewise, there are costs to organizations in devoting resources to measuring themselves and being more transparent. I don’t think these costs are particularly high or that they defeat the idea of devoting more resources to this area, but I hadn’t really taken them into account before.
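Here is the toy calculation I had in mind (all numbers invented; it only shows the shape of the question: the value of the better decisions the information buys, minus what it costs to get). Suppose a study would tell us for certain whether a speculative cause works, and if it doesn’t, we would redirect the remaining money to a solid baseline charity instead.

```python
# Toy value-of-information calculation (invented numbers, purely illustrative).
def net_value_of_study(p_works, impact_if_works, baseline_impact, budget, study_cost):
    """Expected gain from buying a (perfectly informative) study before donating,
    versus giving the whole budget to the speculative cause right away."""
    donate_now = p_works * impact_if_works * budget      # no information, no fallback
    remaining = budget - study_cost
    donate_after_study = (p_works * impact_if_works * remaining          # it works: fund it
                          + (1 - p_works) * baseline_impact * remaining) # it doesn't: fall back
    return donate_after_study - donate_now

# A cheap study is worth buying; an expensive one on the same question is not.
print(net_value_of_study(0.3, 10, 1, budget=100_000, study_cost=10_000))   # +33000.0
print(net_value_of_study(0.3, 10, 1, budget=100_000, study_cost=50_000))   # -115000.0
```

The same structure applies to the bednet case: a 23rd RCT only pays for itself if the decisions it could change are worth more than the trial costs.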
I’m slightly more in favor of acting randomly (trial and error).
I still think it’s difficult to find good value of information, and it’s very easy to get caught “spinning our wheels” in research, especially when that research has no clear feedback loops. One example, perhaps somewhat controversial: the multi-century lack of progress on some problems in philosophy (think meta-ethics) shows what can happen to a field when there aren’t good feedback loops to ground the work.
However, I underestimated the amount of information that comes out of one’s normal activities. The implication is that it’s more worthwhile than I initially thought to fund speculative causes just so they can continue to scale and operate.
-
(This was also cross-posted on my blog.)