This review of M3GAN didn’t get the attention it deserved!
I only came across your review a few hours ago, and after reading your Mostly Spoiler-Free Review In Brief section I stopped and watched the movie immediately, before reading Aaronson’s review or the rest of yours.
In my opinion, the most valuable part of this review is your articulation of how the film illustrates ~10 AI safety-related problems (in the Don’t-Kill-Everyoneism Problems section).
This is now my favorite post of yours, Zvi, thanks to the above and your amazing Galaxy Brained Take section. While I agree that the writer probably didn’t intend this interpretation, I took it to heart and gave the film a 10 out of 10 on IMDb, putting it in the top 4% of the 1,340+ movies I have now seen and rated, and making it the most underrated movie I’ve seen (measured by my rating minus the IMDb rating).
While it’s objectively not as good as many films I’ve given 8s and 9s, I really enjoyed watching it and think it’s one of the best films on AI, from a realism perspective, that I’ve seen (Colossus: The Forbin Project is my other top contender).
I agreed with essentially everything in your review, including your reaction to Aaronson’s commentary on Asimov’s Laws.
This past week I read Nate’s post on the sharp left turn (which emphasizes how people tend to ignore this hard part of the alignment problem), and I recently watched Eliezer express hopelessness about humanity’s failure to take alignment seriously in his We’re All Gonna Die interview on the Bankless podcast.
This put me in a state of mind where, when I saw Aaronson suggest that an AI system as capable as M3GAN could plausibly follow Asimov’s First and Second Laws (and thereby be roughly aligned?), the sense that people were downplaying the AI alignment problem and not taking it sufficiently seriously was fresh in my mind. As a result, I felt put off by Aaronson’s comment, even though in the previous sentence he had just said, “Please don’t misunderstand me here to be minimizing the AI alignment problem, or suggesting it’s easy.”
So while I explicitly don’t want to criticize Aaronson for this, since he made clear that he did not intend to minimize the alignment problem with his statement about Asimov’s Laws, I do want to say that I’m glad you took the time to explain clearly why Asimov’s Laws would not save the world from M3GAN.
I also appreciated your insights into how the film illustrates those AI safety-related problems.