That is, can we get crazy impressive outputs from an AI system without that AI system posing an existential risk or being aligned?
Yes. Deep Blue was impressive in 1997.
If so, what feature distinguishes AI systems that do pose existential risks and need to be aligned from those that don’t?
Generality + intelligence. Deep Blue was domain-specific. Your laptop computer is perfectly general but has little intelligence.
If not, what necessary aspect of the AI system that produces the output above makes it pose an existential risk and require alignment?
None. Neither poses an existential risk.