This just isn’t true. AGI is a “gun that can aim itself”. The user not being able to aim it doesn’t mean it won’t aim itself at something and pursue it, quite effectively.
Less metaphorically: if the AGI performs a semi-random walk through goal space, or simply misses your intended goal by a wide enough margin, it may settle (even temporarily) on a coherent goal that’s incompatible with yours. It may then eliminate humanity as a competitor standing between it and that goal.