Even if there are many false claims of evidence, there could still be some real evidence. If you think the chance that you could find evidence (the conjunction of the evidence actually existing and its being findable) isn’t too low, then you could try to search for it. However, from what you said, it seems that this improbability lowers the expected utility enough that you find it preferable to contribute to other causes. Is that your reasoning? Also, do you think that all this applies to the SIAI?
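To make the conjunction concrete, here is a rough sketch of the calculation I have in mind; every number below is a hypothetical placeholder, not an estimate of anything real:

    # Rough sketch of the expected-utility comparison.
    # All numbers are made-up placeholders.
    p_evidence_exists = 0.5    # chance that real evidence exists at all
    p_evidence_findable = 0.1  # chance we could find it, given it exists
    value_if_found = 100.0     # utility of acting on real evidence
    search_cost = 20.0         # utility cost of conducting the search

    expected_gain = p_evidence_exists * p_evidence_findable * value_if_found
    print(expected_gain - search_cost)  # 5.0 - 20.0 = -15.0: not worth searching

The point is just that the two probabilities multiply, so even moderately unlikely conjuncts can push the expected utility below the cost of looking.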
There is almost certainly real evidence at some level; human beings (and thus human society) are fundamentally deterministic physical systems. I don’t know any method to distinguish the evidence from the noise in the case of, for example, the Nuclear Threat Initiative . . . except handing the problem to a friendly superhuman intelligence. (Which would probably use some method other than the NTI’s to end the existential threat of global thermonuclear war anyway, rendering such a search for evidence moot.)
It doesn’t apply to the SIAI, because I can’t think of a high-negative SIAI failure mode that isn’t more likely to happen in the absence of the SIAI. The SIAI might make a paperclip maximizer or a sadist . . . but I expect anybody trying to make AIs without the explicit care the SIAI is taking to be at least as likely to do so by accident, and I think the eventual development of AI is near-certain in the short term (the next thousand years, which against billions of years of existence is certainly the short term). Donations to the SIAI accordingly come with an increase in existential-threat avoidance (however small and hard to estimate the probability), but not an increase in existential-threat creation (AI is coming anyway).
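To put that asymmetry in rough numbers (the probabilities below are placeholders I made up for illustration, not actual estimates):

    # Hedged sketch: donation shifts the avoidance term, not the creation term.
    # All probabilities are hypothetical placeholders.
    p_bad_ai_without_siai = 0.10   # unfriendly AI happens anyway
    p_bad_ai_with_siai = 0.10      # donating adds no new creation risk
    p_good_ai_without_siai = 0.01
    p_good_ai_with_siai = 0.02     # small, hard-to-estimate boost

    delta_creation = p_bad_ai_with_siai - p_bad_ai_without_siai      # 0.0
    delta_avoidance = p_good_ai_with_siai - p_good_ai_without_siai   # ~0.01
    # delta_creation is zero while delta_avoidance is positive, so the
    # marginal expected value of donating is non-negative.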
(So why haven’t I donated to SIAI? Akrasia. Which isn’t a good thing, but being able to identify it as such in the SIAI donation case at least increases my confidence that my anti-NTI argument isn’t just a rationalization of akrasia in that case.)
I was thinking more of human-comprehensible evidence when I said ‘evidence’, but you seem to have found that none of that is available.
I agree with your reasoning about the SIAI.
http://lesswrong.com/lw/3kl/optimizing_fuzzies_and_utilons_the_altruism_chip/ suggests a method for motivating oneself to donate. I haven’t tried this, but the poster found it quite effective.