A simpler version of my question might be: does a healthy, effective rationalist community make unfriendly AI more or less likely? I’d like to see some evidence that the question has at least been seriously considered.
Not everything is about AI and direct existential risks.
For instance, the first thing that comes to my mind in this space is effective rationalist memes becoming coupled with evil (or merely insufficiently thoughtful and ethics-challenged) ideologies or projects, creating dangerous and powerful social movements or conspiracies. That has nothing to do with AI and is not a direct x-risk, but it could make the world more violent and chaotic in a way that would likely increase x-risk.
I upvoted the OP and think this topic deserves attention, but share JoshuaZ’s criticisms of the initial examples.
I trust the judgment of my rational successors more than my own judgment; insofar as the decision to work on FAI or not is based on correct reasoning, I would rather defer it to a community of effective rationalists. So I don’t believe that the proportion of work going into safe technological development is likely to decrease, unless it should.
A good default seems to be that increasing the rate of technical progress is neutral towards different outcomes. Increasing the effectiveness of researchers presumably increases the rate of intellectual progress relative to progress in manufacturing processes (which are tied to the longer timescales of manufacturing), which is a significant positive effect. I don’t see any other effect that would counteract this one.
We may also worry that rationality today provides a differential advantage to those developing technology safely, an advantage which will be eroded as rationality becomes more common. Unfortunately, I don’t think a significant advantage yet exists, so developing rationality further will at least do very little harm. This does not rule out the possibility that an alternative course, one which avoids spreading rationality too widely (or spreading whatever magical efficiency-enhancing fruits rationality may provide), would do even more good. I strongly suspect this isn’t the case, but it is a reasonable thing to think about, and the argument is certainly more subtle.
A good default seems to be that increasing the rate of technical progress is neutral towards different outcomes.
I think the second-order terms are important here. Increasing technological progress benefits hard ideas (AI, nanotech) more than comparatively easy ones (atomic bombs, biotech?). Both categories are scary, but I think the second is scarier, especially since we can use AI to counteract existential risk much more than we can use the others. Humans will die ‘by default’: we already have the technology to kill ourselves, but not the technology that could prevent such an outcome.
A simpler version of my question might be: does a healthy, effective rationalist community make unfriendly AI more or less likely?
Suggest putting this in your post. One-sentence summaries are always good.