Good point, but I think Bostrom’s point about risk aversion does much to ameliorate it. If the US had had a 50% chance of securing global hegemony, and a 50% chance of destruction from such a move, it probably would not have taken it. A non-risk-averse, non-deontological AI, on the other hand, with its eye on the light cone, might consider the gamble worthwhile.