This is more or less why I chose to go into genetics instead of AI: I simply couldn’t think of any realistic positive future with AGI in it. All positive scenarios rely either on a benevolent dictator, or on some kind of stable equilibrium among multiple superintelligent agents whose desires are so alien to mine, and whose actions so unpredictable, that I can’t evaluate such an outcome with my current level of intelligence.
That probably doesn’t lead to nice outcomes without additional constraints, either.