Mission Impossible: Dead Reckoning Part 1 AI Takeaways
Given that Joe Biden seems to have become more worried about AI risk after seeing the movie, it seems worth putting my observations about it into their own post. This is what I wrote back then, except for the introduction and final note.
We must now modify the paragraph about whether to see this movie. Given its new historical importance, combined with its action scenes being pretty good, if you have not yet seen it you now probably should. And of course it now deserves a much higher rating than 70.
There are of course things such as ‘it is super cool to jump from a motorcycle into a dive onto a moving train,’ but there are also actual things to ponder here.
Spoiler-Free Review
There may never be a more fitting title than Mission Impossible: Dead Reckoning. Each of these four words is doing important work. And it is very much a Part 1.
There are two clear cases against seeing this movie.
First, this is a two-hour-and-forty-five-minute series of action set pieces whose title ends in Part One. That is too long. The sequences are mostly very good and a few are great, but at some point it is enough already. They could simply have had fewer and shorter set pieces that contained all the best ideas and trimmed 30-45 minutes; everyone should pretty much agree on a rank order here.
Second, this is not how this works. This is not how any of this works. I mean, some of it is sometimes how some of it works, including what ideally should be some nasty wake-up calls or reality checks, and some of it has already been established as how the MI-movie-verse works, but wow is a lot of it brand-new, complete nonsense, not all of it even related to the technology or gadgets. Which is also a hint about how, on another level, any of this works. That’s part of the price of admission.
Thus, you should see this movie if and only if the idea of watching a series of action scenes sounds like a decent time. They come in a fun package, with a side of actual insight into real future questions if you are paying attention and able to look past the nonsense.
If that’s not your cup of tea, then you won’t be missing much.
MI has an 81 on Metacritic. It’s good, but it’s more like 70 good.
No One Noticed or Cared That The Alignment Plan Was Obvious Nonsense
Most real-world alignment plans cannot possibly work. There are still levels. The idea that, when faced with a recursively self-improving intelligence that learns, rewrites its own code, and has taken over the internet, you can either kill or control The Entity by using an early version of its code stored in a submarine, but otherwise nothing can be done?
I point this out for two reasons.
First, it is indeed the common pattern. People flat out do not think about whether scenarios make sense or plans would work, or how they would work. No one calls them out on it. Hopefully a clear example of obvious nonsense illustrates this.
Second, they have the opportunity in Part 2 to do the funniest thing possible, and I really, really hope they do. Which is to have the whole McGuffin not work. At all. Someone gets hold of the old code, tries to use it to control the AI. It flat out doesn’t work. Everyone dies. End of franchise.
Presumably they would instead invent a way for Hunt to save the day anyway, one that also makes no sense, but even then it would at least be something.
Then there is the Even Worse Alignment Plan, where in quite the glorious scene someone claims to be the only one who has the means to control or kill The Entity and proposes a partnership, upon which The Entity, of course, kills him on the spot, because wow you are an idiot. I presume your plan is not quite so stupid as this, but consider the possibility that it mostly is.
No One Cares That the Threat is Extinction, They All Want Control
Often people assume that an AI, if it wanted to take over or kill everyone, would have to face down a united humanity led by John Connor, one that pivots instantly to caring only about containing the threat.
Yeah. No. That is not how any of this would work. If this is part of your model of why things will be all right, your model is wrong, please update accordingly.
The movie actually gets this one far closer to correct.
At first, everyone sees The Entity loose on the internet, uncontrolled, doing random stuff and attacking everything in sight, and thinks ‘good, this is tactically good for my intelligence operations sometimes, what could go wrong?’
Then it gets out of hand on another level. Even then, of all the people in the world who learn about the threat, only Ethan Hunt notices that, if you have a superintelligence loose on the internet that is explicitly established as wanting everyone dead, the correct move is to kill it.
Even then, Ethan, and later the second person who comes around to this position, emphasize the ‘no one should have that kind of power’ angle, rather than the ‘this will not work and you will get everyone killed’ angle.
No one, zero people, not even Hunt, even raises the ‘shut down the internet’ option, or other non-special-McGuffin methods for everyone not dying. It does not come up. No one notices. Not one review that I saw, or discussion I saw, brings up such possibilities. It is not in the Overton Window. Nor does anyone propose working together to ensure The Entity gets killed.
The Movie Makes it Very Clear Why Humanity Won’t Win, Then Ignores It
Again, quite a common pattern. I appreciated seeing it in such an explicit form.
The Entity makes it clear it knows everything that is going to happen before it happens. Consider Gabriel’s predictions, his actions on the train at several different points, the bomb at the airport, and so on. This thing is a hundred steps ahead, playing ten-dimensional chess; you just did what I thought you were gonna do.
The team even has a conversation about exactly this: that they are up against something smarter, more powerful, and more knowledgeable than they are, something that can predict their actions, so anything they do could be playing into its hands.
The entire script is essentially The Entity’s plan, except that when required, Ethan Hunt is magic and palms the McGuffin. Ethan Hunt is the only threat to The Entity, which has the ability to be the voice in his ear telling him where to go, yet it manages not to kill him while letting Ethan fix this hole in its security. Also, that was part of the plan all along, or it wasn’t, or what exactly?
The only interpretation that makes sense is that The Key is useless. Because the whole alignment plan is useless. It won’t do anything. Ethan Hunt is being moved around as a puppet on a string in order to do things The Entity wants for unrelated reasons, who knows why. No, that doesn’t make that much more sense, but at least it is coherent.
There are other points as well where it is clear that The Entity could obviously win. Air-gap your system? No good; humans are a vulnerability and can be blackmailed or otherwise controlled, so you cannot trust anyone anywhere. The Entity can hack any communication device, at any security level, and pretend to be anyone convincingly. It has effective control over the whole internet. It hacked every security service, then seemed to choose to do nothing with that. It plants a bomb purely so the heroes can disarm it with one second left, just to send them a message.
We were clearly never in it.
Tyler Cowen gestures at this in his review, talking about the lengths to which the movie goes to make it seem like individual humans matter. Quite so. There is no reason any of the machinations in the movie should much matter, or the people in it. The movie is very interested in torturing Ethan Hunt, in exploring these few people, when the world should not care, The Entity should not care, and I can assure you that most of the audience also does not care. That’s not why we are here.
Similarly, Tyler correctly criticizes The Entity being embodied in Gabriel, given a face, treated mostly as a human, and given this absurd connection to Hunt. I agree it is a poor artistic choice; I would, however, add that, more importantly, it points to fundamental misunderstandings across the board.
Warning Shots are Repeatedly Ignored
The Entity’s early version ‘got overenthusiastic’ and destroyed the Sevastopol. No one much cared about this, or was concerned that it was displaying instrumental convergence and unexpected capabilities, was not following instructions, and was already rather out of control. Development continued. It got loose on the internet and no one much worried about that, either. The whole thing was a deliberate, malicious government project, no less.
Approximately No One Noticed Any of This
I get that this is a pulpy, fun action movie mostly about hitting action beats and doing cool stunts. There is nothing wrong with any of that. But perhaps this could serve as an illustration of how people, governments, and power might react in such situations; of how people would think about those situations and the quality of that thinking; and especially of people’s ability to be in denial about what is about to hit them and what it can do, and their stubborn refusal to realize that the future might soon no longer be in human hands.
Is it all fictional evidence? Sort of. The evidence is that they chose to write it this way, and that we chose to react to it this way. That was the real experiment.
The other real experiment is that Joe Biden saw the movie, and it made him far more worried about AI alignment. So all of this seems a lot more important now.