almost all of the reasons that the former is currently a lot more likely are mistake theory reasons.
Not necessarily. The primary reason is that building an aligned superintelligence is strictly a subset of building a superintelligence, and is therefore harder. How much harder is unknown, but likely orders of magnitude, and perhaps nigh-impossible.
On the question of whether to stop doing a very hard but profitable and likely possible thing in order to wait for (since it's unclear how to work on it) an extremely hard and maybe impossible thing, there will be a mix of honest disagreement (mistake theory) and adversarial ignorance (conflict theory). And there's no easy way to tell which is which, since the conflict side will find it easy to feign ignorance.