Bostrom versus Transcendence
Nick Bostrom takes on the facts, the fictions and the speculations in the movie Transcendence:
Could you upload Johnny Depp’s brain? Oxford Professor on Transcendence
How soon until machine intelligence? Oxford professor on Transcendence
Would you have warning before artificial superintelligence? Oxford professor on Transcendence
Oxford professor on Transcendence: how could you get a machine intelligence?
My biggest problem with this movie is an unforced error in one of the first scenes. Johnny Depp is putting up a copper mesh roof over his patio, talking about how it will block radio waves, and showing a cell phone with zero bars. I work in facilities that do the same thing… except that of course you need walls and a ceiling as well as a roof to make it work, and the result is a well-known bit of tech called a Faraday cage. Why they wouldn’t have slipped some cell phone guy a fiver to review what they were doing just pissed me off, and it was hard to keep myself from damning the whole movie for a slip like this at the very beginning.
The movie is awful, by the way.
Except that it isn’t awful. It’s not great either, but it is good and interesting.
The emphasis on the emulation being approximate, and the widespread (but not universal) acceptance that the mind in the machine did not have the same identity as the man who had died, showed real insight into what this tech will actually look like.
The swarms of nanobots were presented very well. I find it incredibly pleasurable, and I think educational, to see things like that depicted in a movie. It is one thing to read about swarms, another to see them.
What is described as “magic” by detractors is partially the difference between a movie and a PhD thesis, and probably a pretty accurate version of how these things would actually look to people seeing them. It is not the job of a movie to use its scarce 120 minutes to provide a technical explanation for HOW the technology works. There was nothing technologically presented here that went beyond the hypothetical bounds of how these things would play out IF an emulation could be uploaded AND IF the emulation had access to sufficient resources in order to be self-modifying.
Why would they build in a desert town rather than a city? How is it not obvious that if you need acres of solar PV, you prefer a sparsely occupied desert? How is it not obvious that being able to do a significant part of your facility development without 100,000 nosy neighbors wondering what the hell is going on is valuable? The choice of location struck me as obviously sensible.
Etc.
Anyways, care to mention what you disliked? Or anyone else?
Everything worked by magic, nothing made any sense whatsoever.
Of course, Hollywood movies often have plots that don’t make any sense, but at least they usually compensate for that by having neat action scenes or interesting and witty characters or… something else that this movie didn’t have.
You say magic, I say they didn’t waste limited minutes making up a technical back story. The things that worked by “magic” were all well within the bounds of what an AI with access to computing power, solar power, and swarms of nanobots could accomplish, or so it seems to me.
It’s not just the upload being powerful that made it magic, it was also the fact that the technology wasn’t even internally consistent. None of its powers or limitations were derived from anything scientific, it was just a complete mess wearing the attire of science. I call such a mess “magic”.
Stuff like:
Despite the untested and jury-rigged nature of the upload procedure, the upload managed to rewrite and optimize itself (rather than just editing itself into insanity) almost instantly after being uploaded. Just a few moments after that, it became capable of playing the financial markets so as to give itself millions of dollars, as well as hacking pretty much every computer in existence. This despite the original AI system it was uploaded onto never having shown any signs of being capable of anything like that, even when that system had had many times as much computing power at its disposal (they uploaded the scientist onto just a couple of computing cores stolen from the original system).
The upload had two years after that to continue rewriting and optimizing itself, but someone who had the “source code” to the original AI system could still use that source code to write a virus that would disable the upload, if the virus was just delivered right. And this when the guy couldn’t even have known whether the upload was still running on any of the original source code, or whether it had all been rewritten.
The same virus could also somehow disable pretty much all technology on Earth while it was at it.
For all its hyper-technology and ability to precisely scan the physiology of another person, the upload completely failed to develop its abilities when it came to emotional intelligence. Develop arbitrarily advanced medical technology? Sure! Figure out the basics of social understanding that would allow you to know when you were doing stuff that would completely freak out your wife? Unpossible!
The AI might have nanotechnology that gets all around the planet as well as enhanced superhuman cyborgs in its employ, but build an underground Faraday cage and you’ll be protected from most of its attempts at attacking you. (I forget the details, but I seem to recall that the cyborgs were pretty much entirely disabled if brought to the cages.)
Are there any posts to use as common ground on how to watch science fiction? Every one of your concerns is answerable. Indeed, I can hardly resist answering them point by point, but I am not sure that is even really the point.
Aw WTF, this is entertainment, so here goes point by point:
The guy uploaded is perhaps the premier computer scientist and AI coder in the world. With CPU clock rates about a million X faster than a brain, and having been uploaded onto a massively parallel machine (and a quantum one to boot, from what they were saying), presumably he can have some streams of consciousness that run at pushing a million X the regular human rate. At a million X, a subjective year in such a thread takes about 10π seconds of our time (a year is roughly π × 10^7 seconds). So maybe the simulation is a factor of 100 off the raw clock-rate ratio, and a year of time for some of Depp’s threads takes a full 50 minutes. How much should Depp be able to accomplish in 1 hour of real time, with some number of threads going and working together? “Almost instantly”: presumably Depp sits down to various cleanups and bug fixes that might take a Depp a few hours to a few days to accomplish, and those will appear almost instantaneous.
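The speedup arithmetic above can be checked in a few lines. This is just a back-of-the-envelope sketch; the million-X baseline and the factor-of-100 overhead are the comment’s assumptions, not anything the movie states:

```python
# Back-of-the-envelope check of the subjective-time arithmetic.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 s, i.e. roughly pi * 1e7

def real_time_for_subjective_year(speedup):
    """Real-world seconds that pass while one thread experiences a year."""
    return SECONDS_PER_YEAR / speedup

# At the assumed 1,000,000x speedup: ~31.6 s, close to the 10*pi figure.
fast = real_time_for_subjective_year(1_000_000)

# With an assumed factor-of-100 simulation overhead (net 10,000x):
slow = real_time_for_subjective_year(10_000)

print(f"at 1,000,000x: {fast:.1f} s per subjective year")
print(f"at 10,000x:    {slow / 60:.0f} min per subjective year")
```

Running this gives roughly 32 seconds per subjective year at the full speedup, and about 53 minutes at the overhead-penalized rate, matching the “10π seconds” and “a full 50 minutes” figures in the comment.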
There are a lot of games that will be won by whoever is fastest, and that don’t require some great amount of inventiveness or analysis to figure out how to play. Can you think of some rule that would preclude any of the myriad quant winning strategies from being like that when one conscious agent has a 10,000 X speed advantage over the other? I picture, metaphorically, those sci-fi situations where one person is in sped-up time and everybody else is frozen. A Depp thread could walk casually around examining every clue the various HF traders were leaving, figure out what they were going to do next, and in a leisurely way front-run the front-runners. Attacked in a completely novel way, existing HF algorithms would have no hardening against something faster than they are.
Many threads of the best AI-CS guy around running at 1 year per hour or more. And presumably the thread count of Depp-coders rising exponentially as more and more resources are hacked. How long SHOULD it take? How much did the movie miss it by?
Is there a reason in the world to suspect that the main owner and coder of the original system would not have taken that system over first and had all of those resources available to him in perhaps only hours of his subjective time, which would be seconds of world-time?
They had previously uploaded, I thought I heard them say, a rhesus monkey brain. Could it be many orders of magnitude easier to prevent self-modification by an uploaded monkey than by an uploaded CS/AI genius? This does not stretch the boundaries of hypotheticals for me.
We know Depp-AI understood that he had enemies. Perhaps he did not imagine that his own wife and colleagues were among them until they were attacking. Even when they were attacking he may not have imagined they would want to kill him. Part of the beauty of the Depp-AI being an emulation of a human rather than a de novo simulation is we KNOW how complex humans can be, failing to consciously recognize things which outsiders with much less information can see. We KNOW how irrational and non-rational and arational human minds are. We know the human mind trusts people too much because on average that makes it easier for us to operate collectively with all the advantages that gives us.
So WOULD an AI with lots of threads running hundreds of thousands of years of subjective experience not have at some point gone through a “they are all out to get me” phase and completely rewritten its own code so as to make it mysterious to those who had known it well enough to attack it? Maybe yes, maybe no. I think, though, to use this as a damnable offense there has to be no way the answer could be no.
Our job here is not to make sure Transcendence uses the best set of assumptions that WE could imagine. Rather it is to ask whether, if we came across these facts in the real world, we would take them as proof that we were in somebody else’s simulation, because there’s no way the AI could possibly be so stupid.
I think trusting your friends and partners while still hardening yourself against an enemy which seemed very fringey and wacky is not out of the question for the AI, so I’m OK with this.
I presumed this was because Depp had to be expected to have secreted copies of himself in nearly every place that one could imagine.
I interpreted the movie to be showing that 1) the wife was not completely freaked out, but was rather conflicted. And especially when she was around Depp, tended to be drawn in and 2) Depp DID figure out she was attacking him, she was stressed and lying, and he did what she wanted anyway, because deep within the original Depp had been a motivation to do whatever he could to make her happy. This last theme was essentially stated out loud near the end of the movie.
Design is like that. I would imagine Depp had fallback low-bandwidth mechanisms to communicate with his peeps in the absence of a good RF channel. But he was not building his peeps as a weaponized army in the first place. They were intended to be local to his operation, where the tradeoff is often made to have more capabilities with fewer redundancies than vice versa. The primary redundancy here would be having many peeps, the spider strategy for reliably appearing in the next generation rather than the rhinoceros strategy.
So putting one of the peeps in a Faraday cage shuts down the high-bandwidth RF channel. Most of the time the peeps operate just fine without central control or monitoring; they were not designed to go on a non-stealth reconnoiter. Depp was using the peeps he had available to handle an emergent situation.
This may just be one of those YMMV situations. I am not so brilliant that I am never surprised by others’ technological results in real life, so my threshold in a sci-fi movie is: “if it could possibly be true, then the author has done her job, and it is my job, as it would be in real life, to try to figure out the ways it might possibly be happening.” My real-life smart friends often hate sci-fi movies I like. In these cases, I generally consider that I possess superior skills at watching these movies… let’s face it, it is ALL fiction anyway, so it is hard to see what you win by figuring out how to hate the movie. Perhaps it is analogous to the idea that anybody learning debate should practice by arguing the side they are against, and not only the side they favor. Perhaps it is analogous to the improv rule that you have to say yes.
I keep wondering why, oh why can’t more of the film production teams hire a decent screenwriter (that’s the person who makes the story make sense, right)?
I think most of us could have predicted that, unfortunately. Hollywood screenwriters are not known for getting their science right, nor for writing genuinely thought-provoking plots and scenarios. They more sort of scream things in big, bold capital letters, like FREEDOM versus ORDER, or TECHNOLOGY versus NATURAL HUMANITY… blah blah blah.
Yes, I’m planning on going to see this film quite soon with some heavy alcohol secreted in my coat.
EDIT: Saw the film. The AV Club’s review is quite accurate. I do give the film points for vague gestures in the direction of Basic AI Drives, uploading as Indirect Normativity, and Friendliness being an issue at all. 6⁄10 just for being seemingly as well-informed as you’d expect from a Hollywood screenwriter trying to translate some MIRI/FHI propaganda into a film, but still a fairly bad movie. Better than the latest Captain America?
Ugh. Still such bad character performances that for the second time in my life I gave up and rooted for the UFAI.
I have heard that Her was really good, and it dealt with similar themes, so that might have gotten people’s hopes up.
I liked Her, but it was quite a different story. Her was not about an emulation, but about a constructed AI. A significant part of the piquancy of Transcendence is the “how human is he” aspect. In general, an AI from an emulation should be a lot less predictable than a constructed AI, since with emulation you are uploading all sorts of pieces without necessarily understanding them, while with a constructed AI you can skip anything you don’t need and build all sorts of clunky hacks to make it behave the way you want.
Her did not address an AI becoming powerful and active in human affairs at all, which is an interesting theme of Transcendence.
From what I’ve heard, Her featured the “AI are people like us” Phlebotinum, but that’s Phlebotinum, and they did pass the first test of not having the AIs be evil people.
I like how “MIRI” is written on the whiteboard behind him (in all but the first videos).
(This comment in the spirit of ignoring core messages and focusing on some minor tidbit.)
The true message of the first video is even more subliminal: The whiteboard behind him shows some math recently developed by MIRI, along with a (rather boring) diagram of Botworld :-)