[Question] Can AI systems have extremely impressive outputs and also not need to be aligned because they aren’t general enough or something?
Layperson question here; I appreciate any answers/comments that might help me understand this topic better.
When thinking about DALL·E 2 and Google’s PaLM this week, I wondered whether in the future we could hypothetically have AI systems like this...
Input: “an award-winning 2-hour sci-fi drama feature film about blah blah blah”
Output: an incredible film that people living in the 1970s would assume was made by future humans, not future AI, and deserves to win Best Picture because it made them laugh and cry and was amazing
...all without having to solve the alignment problem because the system that produces the film isn’t actually intelligent in the ways that matter to AI risk.
That is, can we get crazy impressive outputs from an AI system without that system posing an existential risk or needing to be aligned?
If so, what feature distinguishes AI systems that do pose existential risks and need to be aligned from those that don’t?
If not, what necessary aspect of any AI system capable of producing the output above makes that system pose an existential risk and require alignment?