What do you think of the ‘Common Good Principle’?
There is no doubt that, given the concept of the Common Good Principle, everyone would be FOR it prior to the complete development of ASI. But once any party gains an advantage, it is unlikely to share that advantage, particularly with those it sees as competitors or enemies. This is an unfortunate fact of human nature, and one with little chance of evolving toward greater altruism on the necessary timescale. Both Bostrom's and Brundage's arguments rest on a lot of "ifs". Yes, it would be great if we could develop AI for the Greater Good, but human nature suggests that our only hope of doing so lies in an early and inextricably intertwined collaboration, so that no party would have the capability of seizing the golden ring of domination by cheating during development.