Timescales Matter

In an interview with John Baez, Eliezer responds:

“I’ll try to answer the question about timescales, but first let me explain in some detail why I don’t think the decision should be dominated by that question.”
He was in part addressing the tradeoff between environmental work and work on technology related to AGI or other existential risks. In this context I agree with his position.
But more broadly, as a person setting out into the world and deciding what I should do with each moment, the question about timescales is one of the most important issues bearing on my decision, and my uncertainty about it (coupled with the difficulty of acquiring evidence) is almost physically painful.
If AGI is likely in the next couple of decades (I am rather skeptical), then long-term activism or outreach is probably pointless. If AGI is not likely within this century (which also seems unlikely), then working on AGI is probably pointless.
I believe it is quite possible that I am smart enough to have a significant effect on the course of whatever field I participate in. I also believe I could have a significant impact on the number of altruistic rationalists in the world. It seems likely that one of these options is far better than the other, and spending some time figuring out which one (and answering related, more specific questions) seems important. One of the most important ingredients in that calculation is the question of timescales. I don’t trust the opinion of anyone involved with the SIAI. I don’t trust the opinion of anyone in the mainstream. (In both cases I am happy to update on evidence they provide.) I don’t have any good ideas about how to improve my estimate, but it feels like I should be able to.
I encounter relatively smart people giving estimates completely out of line with mine, estimates that would radically alter my behavior if I believed them. What argument have I not thought through? What evidence have I not seen? I like to believe that smart, rational people don’t disagree too dramatically about questions of fact in which they have huge stakes. General confusion about AI was fine when I had it walled off in a corner of my brain with other abstruse speculation, but now that the question matters to me, my uncertainty seems more dire.