it looks like it is essentially from the technological-progress-is-bad-because-hubris end of science fiction
I didn’t get that vibe: the terrorists blowing up AI labs seemed to be depicted as bad (or at least not-good) guys, whereas some of the main characters seemed genuinely conflicted about whether to try to upload their friend in an attempt to save him, and whether to even keep him running after he’d uploaded. If the filmmakers had been going for the hubris angle, I would have expected a much more gung-ho attitude towards building potential superintelligences.
And maybe I’m reading too much into it, but I get the feeling that this has a lot more of a shades-of-gray morality than is normal for Hollywood: e.g. it’s not entirely clear whether the terrorists really are bad guys, nor whether the main character should have been uploaded, etc.
And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
Well, there’s only so much you can pack into a two-hour movie while still keeping it broadly accessible. If it manages to communicate even a couple of major concepts even semi-accurately, while potentially getting a lot of people interested in the field in general, that’s still a big win. A movie doesn’t need to communicate every subtlety of a topic if it nonetheless gets people to read up on the topic on their own. (Supposedly science fiction has historically inspired a lot of people to pursue scientific careers, particularly ones related to e.g. space exploration, though I don’t know how accurate this common-within-the-scifi-community belief is.)
If it manages to communicate even a couple of major concepts even semi-accurately, while potentially getting a lot of people interested in the field in general, that’s still a big win.
If that were all (a couple of major concepts conveyed semi-accurately, the rest entertainment/drama), I’d agree. However, “imagine a machine with a full range of human emotion” (a quote from the trailer) and the seemingly inevitable AI-stopped-by-stupid-gimmicks ending (there’s gonna be a happy ending) are more likely to create yet another Terminator-style distortion/caricature to fight. The false concepts that get planted along with the semi-accurate ones can do large net harm by muddling the issue with powerful visual saliency cheats (how can ‘boring forum posts’ measure up against flashy Hollywood movies?).
“Oh, you’re into AI safety? Yeah, just like Terminator! Oh, not like that? Like Transcendence, then?” anticipatory facepalm
I expect that any people whose concepts get hopelessly distorted by this movie would be a lost cause anyway. Reasoning correctly about AI risk already requires the ability to accept a number of concepts that initially seem counterintuitive: if you can’t manage “this doesn’t work the way it does in movies”, you probably wouldn’t have managed “an AI doesn’t work the way all of my experience about minds says a mind should work” either.
Granted. Still, the general public is never going to have an accurate understanding of any complex concept, be that concept evolution, climate change, or the Singularity. The understanding of non-specialists in any domain is always going to be more or less distorted. The best we can hope for is that the popularizations that make the biggest splash are even semi-accurate so that the popular understanding won’t be too badly distorted: and considering everything that Hollywood could have done with this movie, this looks pretty promising.
There’s only so much you can pack into a two-hour movie while still keeping it broadly accessible.
And you can put even less into a two-and-a-half-minute trailer.
I expect that any people whose concepts get hopelessly distorted by this movie would be a lost cause anyway.
“Hopelessly”? Probably not. But that doesn’t mean that the distortion is insignificant.