IMO, this is a better way of splitting up the argument that we should be funding AI safety research than the one presented in the OP. My only gripe is with point 2. Many would argue that it wouldn't be really bad, for a variety of reasons: for example, there are likely to be other 'superintelligent AIs' working in our favour. Alternatively, if the decision-making were only marginally better than a human's, it wouldn't be any worse than a small group of people working against humanity.
TBC, I’m definitely NOT thinking of this as an argument for funding AI safety.