7. That is, advanced AI that pursues objectives of its own, which aren’t compatible with human existence. I’ll be writing more about this idea. Existing discussions of it include the books Superintelligence, Human Compatible, Life 3.0, and The Alignment Problem. The shortest, most accessible presentation I know of is The case for taking AI seriously as a threat to humanity (Vox article by Kelsey Piper). This report on existential risk from power-seeking AI, by Open Philanthropy’s Joe Carlsmith, lays out a detailed set of premises that would collectively imply the problem is a serious one.