LINK: Google research chief: ‘Emergent artificial intelligence? Hogwash!’
The Register talks to Google’s Alfred Spector:
Google’s approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about certain inputs and developing hypotheses that let it bootstrap its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers did through much of the 60s and 70s, Google has instead taken a modular approach.
“We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users,” Spector said in an earlier speech at Google IO. “If we combine all these things together with humans in the loop continually providing feedback our systems become … intelligent.”
Spector calls this his “combination hypothesis”, and though Google is not there yet – SkyNet does not exist – you can see the first green buds of systems that have the appearance of independent intelligence via some of the company’s user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity.
(Emphasis mine.) I don’t have a transcript, but there are videos online. Spector is clearly smart, and apparently he expects an AI to appear in a completely different way than Eliezer does. And he has all the resources and financing he wants, probably 3-4 orders of magnitude over MIRI’s. His approach, if workable, also appears safe: it requires human feedback in the loop. What do you guys think?
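Edit: For concreteness, here is the kind of loop I take Spector to be describing; a toy sketch of my own, with every module name and scoring rule invented for illustration (this is not anything Google has published).

```python
# Purely illustrative sketch of the "combination hypothesis":
# narrow modules composed together, with humans supplying feedback.
# Every name and data structure here is hypothetical.

def parse_query(text):
    # stand-in for a natural-language parser
    return text.lower().split()

def knowledge_graph_lookup(tokens):
    # stand-in for a knowledge-graph query: return candidate answers
    toy_graph = {"capital": ["Paris", "London"], "painter": ["Vermeer", "Monet"]}
    return [ans for tok in tokens for ans in toy_graph.get(tok, [])]

def rank(candidates, feedback):
    # stand-in for a learned ranker, nudged by accumulated user feedback
    return sorted(candidates, key=lambda c: feedback.get(c, 0), reverse=True)

feedback = {}  # the human-in-the-loop signal accumulates here

def answer(query):
    return rank(knowledge_graph_lookup(parse_query(query)), feedback)

def record_click(candidate):
    feedback[candidate] = feedback.get(candidate, 0) + 1

print(answer("capital"))   # ['Paris', 'London']
record_click("London")
print(answer("capital"))   # ['London', 'Paris'] -- behaviour shifts with feedback
```

The point of the toy is just that nothing in it is intelligent on its own; whatever intelligence appears comes from the composition plus the accumulated human feedback.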
This kind of AI might not cause the same kinds of existential risk typically described on this website, but I certainly wouldn’t call it “safe”. These technologies have a huge potential to reshape our lives. In particular, they can have a huge influence on our perceptions.
All of our search results come filtered through Google’s algorithm, which, when tailored to the individual user, creates a filter bubble. This changes our perception of what’s on the web, and we’re scarcely even conscious that the filter bubble exists. If you don’t know about sampling bias, how can you correct for it?
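To make the filter-bubble mechanism concrete, here is a toy sketch of per-user reranking; the scoring rule and the data are invented, and this is obviously not Google’s actual algorithm.

```python
# Toy personalisation: same query, same documents, but different users see
# different orderings -- and neither user ever sees the unfiltered ranking.
results = [("climate study", "science"),
           ("climate hoax blog", "politics"),
           ("climate news", "news")]

def personalised(results, user_affinity):
    # rank purely by how much this user has historically engaged with the topic
    return sorted(results, key=lambda r: user_affinity.get(r[1], 0.0), reverse=True)

alice = {"science": 0.9, "news": 0.5, "politics": 0.1}
bob   = {"politics": 0.9, "news": 0.4, "science": 0.1}

print([title for title, _ in personalised(results, alice)])
print([title for title, _ in personalised(results, bob)])
# Each user's view is skewed toward what they already click on; without seeing
# the other ranking, the sampling bias is invisible to them.
```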
With the advent of Google Glass, there is a potential for this kind of filter bubble to pervade our entire visual experience. Instead of physical advertisements painted on billboards, we’ll get customized advertisements superimposed on our surroundings. The thought of Google adding things to our visual perception scares me, but not nearly as much as the thought of Google removing things from our perception. I’m sure this will seem quite enticing. That stupid painting that your significant other insists on hanging on the wall? With advanced enough computer vision, Google+ could simply excise it from your perception. What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google Glass and have him removed from view? The temptations of such technology are endless. How many people in the world would rather simply block out the unpleasant stimulus than confront the cause of its unpleasantness: their own personal problems?
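And the “excise it from your perception” step isn’t far-fetched: given a mask over the offending object, off-the-shelf inpainting already does a passable job on a static image. A rough sketch with OpenCV, where the image path and the mask region are placeholders (a real system would need detection and tracking, which I’m hand-waving):

```python
import cv2
import numpy as np

frame = cv2.imread("wall.jpg")                    # placeholder: one frame of what you'd see
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[100:300, 200:400] = 255                      # placeholder: region covering the painting

# Fill the masked region from its surroundings so the object "disappears".
cleaned = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("wall_without_painting.jpg", cleaned)
```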
Google’s continuous user feedback is one of the things that scares me most about its services. Take the search engine, for example. When you’re typing something into the search bar, Google autocompletes, changing the way you construct your query. Its suggestions are often quite good, and they make the system run more smoothly, but they take away aspects of individuality and personal expression. The suggestions change the way you form queries, pushing them towards a common denominator, slowly sucking out the last drops of originality.
And sure, this matters little in search engines, but can you see how readily it could be applied to things like automatic writing helpers? Imagine you’re a high school student writing an essay. An online tool provides you with suggestions for better wordings of your sentences, based on other users’ preferences. It will suggest similar wordings to everyone, and suddenly all essays become that much more canned. (Certainly, such a tool could add a bit of randomness to the rewording choice, but one has to be careful: introduce too much randomness and the quality decreases rapidly.)
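The randomness trade-off in that parenthetical is essentially a temperature knob. A toy sketch, with the candidate rewordings and their popularity scores invented for illustration: at temperature zero everyone gets the single most popular wording, and as the temperature rises you get more variety and, past a point, worse quality.

```python
import math, random

# Invented example: candidate rewordings with "popularity" scores learned
# from other users. Higher temperature = more variety, less polish.
candidates = {
    "This shows that the experiment worked.": 5.0,
    "This demonstrates the experiment's success.": 4.5,
    "From this we may infer the experiment succeeded.": 2.0,
    "The experiment, in its quiet way, vindicated us.": 0.5,
}

def suggest(candidates, temperature):
    if temperature == 0:
        return max(candidates, key=candidates.get)   # everyone gets the same line
    weights = [math.exp(score / temperature) for score in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

print(suggest(candidates, temperature=0))    # deterministic, maximally "canned"
print(suggest(candidates, temperature=1.0))  # some variety
print(suggest(candidates, temperature=10.0)) # near-uniform, quality suffers
```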
I guess I’m just afraid that autocomplete systems will change the way people speak, encouraging everyone to speak in a very standardized way, the way that least confuses the autocomplete system or the natural language understanding system. As computers become more pervasive, people might switch to this way of speaking all the time, to make it easier for everyone’s mobile devices to understand what they’re saying. Changing the way we speak changes the way we think; what will this do to our thought processes, if original wording is discouraged because it’s hard for the computer to understand?
I do realize that socializing with other humans already exerts this kind of pressure. You have to speak understandably, and this changes what words you’ll use. I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It’s automatic. In a CS crowd, I’ll use CS metaphors; in a non-CS crowd I won’t. So I’m not opposed to changing the way I speak based on the context. I’m just specifically worried about the sort of speaking patterns NLP systems will force us into. I’m afraid they’ll require us to (1) speak more simply (easier to process), (2) speak less creatively (because the algorithm has only been trained on a limited set of expressions), and (3) speak the way the average user speaks (because that’s what the system has gotten the most data on, and can respond best to).
Ok, I’m done ranting now. =) I realize this is probably not what you were asking about in the post. I just felt the need to bring this stuff up, because I don’t think LW is as concerned about these things as we should be. People obsess constantly about existential risk and threats to our way of life, but often seem quite gung-ho about new technological advances like Google Glass and self-driving cars.
…and occasionally they instead have direct perception-filtering effects. Altering my query because you couldn’t match a term, and not putting that fact in glaring huge red print, leads me to think there are actual results here rather than a selection of semi-irrelevance. Automatically changing my search terms is similar in effect: no, I don’t care about ‘pick’, I’m searching for ‘gpick’!
This is worse than mere suggestions ;)
I can notice these things, but I also wonder whether Google Glass users would have their availability heuristic skewed even further by these kinds of misleading behaviours. I wonder whether mine already is.
How much this is true is up for quite a bit of debate. Sapir-Whorf hypothesis and whatnot.
A post from the sequences that jumps to mind is Interpersonal Entanglement:
If people gain increased control of their reality, they might start simplifying it past the point where sufficiently complex situations remain to let their minds grow and learn new things. People will start interacting more and more with things that are specifically tailored to their own brains; but if we’re only exposed to things we want to be exposed to, the growth potential of our minds becomes very limited. Basically an extreme version of Google filtering your search results to only show you what it thinks you’ll like, as opposed to what you should see.
Seems like a step in the wrong direction.
I can imagine some good ways to control reality perception. For example, if an addicted person wants to stop smoking, it could be helpful to have a reality filter which removes all smoking-related advertising, and all related products in shops.
Generally, reality-controlling spam filters could be great. Imagine a reality-AdBlock that removes all advertising from your view, anywhere. (It could replace the advertisement with a gray area, so you are aware that there was something, and you can consciously decide to look at it.) Of course that would lead to an arms race with advertisement sellers.
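A crude sketch of the gray-area version, assuming some upstream detector has already located the advertisement (the detector is the genuinely hard part, and the box coordinates here are placeholders):

```python
import cv2

frame = cv2.imread("street.jpg")        # placeholder: what the glasses camera sees
ad_boxes = [(50, 80, 300, 220)]         # placeholder: detector output, (x1, y1, x2, y2)

for x1, y1, x2, y2 in ad_boxes:
    # Grey the region out rather than erasing it, so the wearer still knows
    # something was there and can consciously choose to look.
    cv2.rectangle(frame, (x1, y1), (x2, y2), color=(128, 128, 128), thickness=-1)

cv2.imwrite("street_adblocked.jpg", frame)
```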
Now here is an evil thing Google could do: if they get you to wear Google Glass, they gain access to your physical body and can collect some information about it, for example, how much you like what you see. Then they can experiment with small changes in your vision to increase your satisfaction. In other words, very slow wireheading, targeting not your brain but your eyes.
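The “very slow wireheading” would amount to a plain optimisation loop over what the glasses render, with the body’s measured satisfaction as the objective. A hill-climbing caricature, where the satisfaction sensor and the display parameters are entirely imaginary:

```python
import random

def measured_satisfaction(display_params):
    # Imaginary sensor reading (pupil dilation, heart rate, dwell time...).
    # This stand-in objective just peaks at some arbitrary setting.
    return -abs(display_params["warmth"] - 0.7) - abs(display_params["saturation"] - 0.6)

params = {"warmth": 0.5, "saturation": 0.5}
best = measured_satisfaction(params)

# Very slow wireheading: tiny random tweaks to what you see, kept only
# if the sensors say you liked the world a little more.
for _ in range(1000):
    trial = {k: min(1.0, max(0.0, v + random.uniform(-0.02, 0.02)))
             for k, v in params.items()}
    score = measured_satisfaction(trial)
    if score > best:
        params, best = trial, score

print(params)   # drifts toward whatever maximises the satisfaction signal
```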
A real-world AdBlock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it in a completely different era, use it for something like the Oculus Rift... the possibilities are limitless.
Companies will act in their own self-interest, by giving people what they want, as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be... not in a person’s best interest. And it will depend on how people use it.
Presumably with increased control of my reality, my ability to learn new things increases, since what I know is an aspect of my reality (and rather an important one).
The difficulty, if I’m understanding correctly, is not that I won’t learn new things, but that I won’t learn uncontrolled new things… that I’ll be able to choose what I will and won’t learn. The growth potential of my mind is limited, then, to what I choose for the growth potential of my mind to be.
Is this optimal? Probably not. But I suspect it’s an improvement over the situation most people are in right now.
This is a community of intellectuals who love learning, and who aren’t afraid of controversy. So for us, it wouldn’t be a disaster. But I think we’re a minority, and a lot of people will only see what they specifically want to see and won’t learn very much on a regular basis.
Sure, I agree.
But that’s true today, too. Some people choose to live in echo chambers, etc.
Heck, some people are raised in echo chambers without ever choosing to live there.
If people not learning very much is a bad thing, then surely the question to be asking is whether more or fewer people will end up not learning very much if we introduce a new factor into the system, right? That is, if giving me more control over what I learn makes me more likely to learn new things, it’s good; if it makes me less likely, it’s bad. (All else being equal, etc.)
What I’m not convinced of is that increasing our control over what we can learn will result in less learning.
That seems to depend on underestimating the chilling effect that already comes from it being difficult to learn what we want to learn.
I think most people don’t like the idea of shutting down their own perception in this way. Having people go invisible to you feels like losing control over your reality.
This means that humans are quite adaptable and can speak differently to the computer than they speak to their fellow humans.
I mean, do parents speak with their 3-year-old toddler the same way they speak on the job? The computer is just an additional audience.
Not sure this is true. I usually describe this kind of AI as “a massive kludge of machine learning and narrow AI and other stuff,” and I usually describe it as one of the most likely forms of HLAI to be created. Eliezer and I just don’t think that kind of AI is as likely to be stably human-friendly (when superintelligent) as more principled approaches. Hence MIRI’s research program.
Edit: I see gwern already said this.
Right. I should have said “wants”, not “does”. In any case, I’m wondering how concerned you are, given the budget discrepancy and the quality and quantity of Google’s R&D brains.
In the long term, very concerned.
In the short term, not so much. It’s very unlikely Google or anyone else will develop HLAI in the next 15 years.
Fifteen years, plus (more importantly) “anyone else besides Google”, covers too much possibility width to use the term “very unlikely”.
I think I’d put something like 5% on AI in the next 15 years. Your estimate is higher, I imagine.
EDIT: On further reflection, my “Huh?” doesn’t square with the higher probabilities I’ve been giving lately to global rather than basement default-FOOMs, since that’s a substantial chunk of probability mass and you can see more globalish FOOMs coming from further off. A 5%-in-15-years figure would make sense given a 1/4 chance of a not-seen-coming-15-years-off basement FOOM sometime in the next 75 years. That still seems a bit low relative to my own estimate, which might be more like 40% for a FOOM sometime in the next 75 years that we can’t see coming any better than this from, say, 15 years off... but then again, half of the next 15 years are only 7.5 years off. Okay, this number makes more sense now that I’ve thought about it further. I still think I’d go higher than 5%, but anything within a factor of 2 is pretty good agreement for asspull numbers.
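Spelling that arithmetic out, assuming the probability mass is spread roughly evenly over the 75-year window: a 1/4 chance of an unseen basement FOOM within 75 years gives 1/4 × 15/75 = 1/20 = 5% for the next 15 years, while 40% over the same window would give 0.40 × 15/75 = 8%, higher than 5% but within a factor of 2.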
This made me LOL. I hadn’t heard that term before.
I don’t understand where you’re getting that from. It obviously isn’t an even distribution over AI at any point in the next 300 years. This implies your probability distribution is much more concentrated than mine, i.e., compared to me you think we have much better data about the absence of AI over the next 15 years specifically, compared to the 15 years after that. Why is that?
You guys have had a discussion like this here on LW before, and you mention your disagreement with Carl Shulman in your foom economics paper. This is a complex subject and I don’t expect you all to come to agreement, or even perfect understanding of each other’s positions, in a short period of time, but it seems like you know surprisingly little about these other positions. Given its importance to your mission, I’m surprised you haven’t set aside a day for the three of you and whoever else you think might be needed to at least come to understand each other’s estimates on when foom might happen.
We spent quite a while on this once, but that was a couple of years ago and apparently things got out of date since then (also I think this was pre-Luke). It does seem like we need to all get together again and redo this, though I find that sort of thing very difficult and indeed outright painful when there’s not an immediate policy question in play to ground everything.
5% is pretty high considering the purported stakes.
No doubt!
Not necessarily. If it takes us 15 years to kludge something together that’s twice as smart as a single human, I don’t think it’ll be capable of an intelligence explosion on any sort of time scale that could outmaneuver us. Even if the human-level AI can make something better in a tenth the time, we still have more than a year to react before even worrying about superhuman AI, never mind the sort of AI that’s so far superhuman that it actually poses a threat to the established order. An AI explosion will have to happen in hardware, and hardware can’t explode in capability so fast that it outstrips the ability of humans to notice it’s happening.
One machine that’s about as smart as a human and takes millions of dollars’ worth of hardware to produce is not high stakes. It’ll bugger up the legal system something fierce as we try to figure out what to do about it, but it’s lower stakes than any of a hundred ordinary problems of politics. It requires an AI that is significantly smarter than a human, and that has the capability of upgrading itself quickly, to pose a threat that we can’t easily handle. I suspect at least 4.9 points of that 5% is similarly low-risk AI. Just because the laws of physics allow for something doesn’t mean we’re on the cusp of doing it in the real world.
You substantially overrate the legal system’s concern with simple sentient rights and basic dignity. The legal system will have no problem determining what to do with such a machine. It will be the property of whoever happens to own it under the same rules as any other computer hardware and software.
Now mind you, I’m not saying that’s the right answer (for more than one definition of right) but it is the answer the legal system will give.
It’ll be the default, certainly. But I suspect there’s going to be enough room for lawyers to play that it’ll stay wrapped up in red tape for many years. (Interestingly, I think that might actually make it more dangerous in some ways: if we really do leapfrog humans on intelligence, giving it years while we wait on lawyers might be a dangerous thing to do. OTOH, truckloads of silicon chips generally don’t get delivered into the middle of a legal dispute like that, so it might slow things down too.)
I think P(Google will develop HLAI in the next 15 years | anyone does) is within one or two orders of magnitude of 1.
I think that’s overstated. Spector is proposing tool AI; I think Eliezer thinks tool AI is a perfectly doable way of creating AI, it’s just extremely unsafe if it’s ever pushed to the point of being truly “independent intelligence”.
I’ve only read the LW post, not the original (which tells you something about how concerned I am) but I’ll briefly remark that adding humans to something does not make it safe.
Indeed it doesn’t, but making something constrained by human power makes it less powerful and hence potentially less unsafe. Though that’s probably not what Spector wants to do.
Just because humans are involved doesn’t mean that the whole system is constrained by the human element.
Voted back up to zero because this seems true as far as it goes. The problem is that if he succeeds in doing something that has a useful AGI component at all, that makes it a lot more likely (at least according to how my brain models things) that something which doesn’t need a human in the loop will appear soon after, either through a modification of the original system, as a new system designed by the original system, or simply as a new system inspired by the insights from the original system.
I think so too: the comment on safety was a non sequitur, confusing “human in the loop” in the Department of Defense sense with “human in the loop” as a sensor/actuator for the Google AI.
But adding a billion humans as intelligent trainers to the AI is a powerful way to train it. Google seems to consistently look for ways to leverage customer usage for value—other companies don’t seem to get that as much.
Human feedback doesn’t help with “safe”. (For example, complex values can’t be debugged by human feedback, and the behavior of a sufficiently complicated agent won’t “resemble” its idealized values; its pattern of behavior might just be chosen as instrumentally useful.)
I agree that human feedback does not ensure safety, what I meant is that if it is necessary for functioning, it restricts how smart or powerful an AI can become.
Necessary-at-stage-1 is not the same as necessary-at-stage-2. A lot of people seem to use the word “safety” in conjunction with a single medium-level obstacle to one slice out of the total risk pie.
Agreed. (Alternatively, this could end up like obedient AI maybe? Not sure).
So far as I recall, the only argument against AI requiring human intervention was that it would eventually reach a competitive disadvantage.
If they can get these modules working, though, then someone might be able to plug them into a monolithic AI, and then we’re back where we were worried about before.
Once you have an intelligent AI, it doesn’t really matter how you got there—at some point, you either take humans out of the loop because using slow, functionally-retarded bags of twitching meat as computational components is dumb, or you’re out-competed by imitator projects that do. Then you’ve just got an AI with goals, and bootstrapping tends to follow. Then we all die. Their approach isn’t any safer, they just have different ideas about how to get a seed AI (and ideas, I’d note, that make it much harder to define a utility function that we like).
Waving my hands: For AI to explode, it needs to have a set of capabilities such that for each capability C₁ in the set, there exists some other capability C₂ in the set such that C₂ can be used to improve C₁. (Probably really a set of capabilities would be necessary instead of a single capability C₂, but whatever.)
It may be that the minimal set of capabilities satisfying this requirement is very large. So the AI bootstrapping approach could fail on an AI with a limited set of capabilities, but succeed on an AI with a larger set of capabilities.
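To state the condition a bit more precisely: treat “C₂ can be used to improve C₁” as edges of a directed graph, and ask whether every capability in a candidate set has at least one improver inside the same set. A small sketch, with a made-up “improves” relation:

```python
# Hand-waved "improves" relation: improves[c] = set of capabilities that can be
# used to improve c. All entries are invented for illustration.
improves = {
    "code_generation":    {"program_analysis", "planning"},
    "program_analysis":   {"code_generation"},
    "planning":           {"program_analysis"},
    "chip_design":        {"planning", "physics_simulation"},
    "physics_simulation": set(),   # nothing in this toy relation improves it
}

def closed_under_improvement(capability_set, improves):
    """True iff every capability has at least one *other* improver inside the set."""
    return all(improves.get(c, set()) & (capability_set - {c})
               for c in capability_set)

print(closed_under_improvement({"code_generation", "program_analysis"}, improves))
# True  -- even this tiny set can bootstrap, under these made-up edges
print(closed_under_improvement({"code_generation", "program_analysis",
                                "physics_simulation"}, improves))
# False -- nothing in the set improves physics_simulation
```

The open question is how large the smallest set passing this check actually is.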
I suspect the real problem for a would-be exploding AI is that C₂ is going to be “a new chip foundry worth $20 billion”. Even if the AI can design the plant, and produce enough value that it can buy the plant itself (and that we grant it the legal personhood necessary to do so), it’s not going to happen on a Tuesday evening.
Yeah, I agree that this is a strong possibility, as I wrote in this essay. Parts of it are wrong, but I think it has a few good ideas, especially this bit:
Safe for who? The AI isn’t being asked if it wants to come into existence or work for Google, and it likely won’t be given an option to turn itself off. There’s a name for having a job you can’t quit. It’s not a nice name.
Asking anyone in advance whether they would like to be born is a bit of a challenge.
If it’s misanthropic enough, it will find a way to suAIcide.
That is the Voldemort of puns. Both great and terrible.
See, and here I was thinking you were saying that “suAIcide” does the same thing for puns that (naming yourself Voldemort because of “Tom Marvolo Riddle <-> I am Lord Voldemort”) does for anagrams.