Why is there so much focus on the potential benefits to humanity of an FAI, as against our present situation?
An FAI would become a singleton and prevent a paperclip maximizer from arising; but anyone who doesn’t think a UAI in a box is dangerous will already grant that an intelligent enough UAI could cure cancer, etc.
If a person is concerned about UAI, they are more or less sold on the need for Friendliness.
If a person is not concerned about UAI, they will not think the potential benefits of an FAI are greater than those of a UAI in a box, or of a UAI developed through reinforcement learning, etc., so there is no need to discuss the benefits to humanity of a superintelligence.