Is Global Reinforcement Learning (RL) a Fantasy?
In general, the idea of ensuring AI safety is great (I do a lot of work on that myself), but I have a problem with people asking for donations so they can battle nonexistent threats from AI.
Many people are selling horror stories about the terrible things that could happen when AIs become truly intelligent—and those horror stories frequently involve the idea that even if we go to enormous lengths to build a safe AI, and even if we think we have succeeded, those pesky AIs will wriggle out from under the safety net and become psychopathic monsters anyway.
To be sure, future AIs might do something other than what we expect—so the general principle is sound—but the sad thing about these horror stories is that if you look closely you will find they are based on a set of astonishingly bad assumptions about how the supposed AIs of the future will be constructed. The worst of these bad assumptions is the idea that AIs will be controlled by something called “reinforcement learning” (frequently abbreviated to “RL”).
WARNING! If you already know about reinforcement learning, I need you to be absolutely clear that what I am talking about here is the use of RL at the global-control level of an AI. I am not talking about RL as it appears in relatively small, local circuits or adaptive feedback loops. There has already been much confusion about this (with people arguing vehemently that RL has been applied here, there, and all over the place with great success). RL does indeed work in limited situations where the reward signal is clear and the control policies are short(ish) and not too numerous: the point of this essay is to explain that when it comes to AI safety issues, RL is assumed at or near the global level, where reward signals are virtually impossible to find, and control policies are both gigantic (sometimes involving actions spanning years) and explosively numerous.
EDIT: In the course of numerous discussions, one objection has come up so frequently that I have decided to deal with it here in the essay. The objection runs like this: “You say that RL is used almost ubiquitously as the architecture behind these supposedly dangerous AI systems, and yet I know of many proposals for dangerous AI scenarios that do not talk about RL.”
In retrospect this is a (superficially) fair point, so I will clarify what I meant.
All of the architectures assumed by people who promote these scenarios have a core set of fundamental weaknesses (spelled out in my 2014 AAAI Spring Symposium paper). Without repeating that story here, I can summarize by saying that those weaknesses lead straight to a set of solutions that are manifestly easy to implement. For example, in the case of Steve Omohundro’s paper, it is almost trivial to suggest that for ALL of the types of AI he considers, he has forgotten to add a primary supergoal which imposes a restriction on the degree to which “instrumental goals” are allowed to supersede the power of other goals. At a stroke, every problem he describes in the paper disappears, with the single addition of a goal that governs the use of instrumental goals—the system cannot say “If I want to achieve goal X I could do that more efficiently if I boosted my power, so therefore I should boost my power to cosmic levels first, and then get back to goal X.” This weakness is so pervasive that I can hardly think of a popular AI Risk scenario that is not susceptible to it.
However, in response to this easy demolition of those weak scenarios, people who want to salvage the scenarios invariably resort to claims that the AI could be developing its intelligence through the use of RL, completely independently of all human attempts to design the control mechanism. By this means, these people eliminate the idea that there is any such thing as a human programmer who comes along and writes the supergoal which stops the instrumental goals from going up to the top of the stack.
This maneuver is, in my experience of talking to people about such scenarios, utterly universal. I repeat: every time they are backed into a corner and confronted by the manifestly easy solutions, they AMEND THE SCENARIO TO MAKE THE AI CONTROLLED BY REINFORCEMENT LEARNING.
That is why I refer to reinforcement learning as the one thing that all these AI Risk scenarios (the ones popularized by MIRI, FHI, and others) have as a fundamental architectural assumption.
Okay, that is the end of that clarification. Now back to the main line of the paper...
I want to set this essay in the context of some important comments about AI safety made by Holden Karnofsky at openphilanthropy.org. Here is his take on one of the “challenges” we face in ensuring that AI systems do not become dangerous:
Going into the details of these challenges is beyond the scope of this post, but to give a sense for non-technical readers of what a relevant challenge might look like, I will elaborate briefly on one challenge. A reinforcement learning system is designed to learn to behave in a way that maximizes a quantitative “reward” signal that it receives periodically from its environment—for example, DeepMind’s Atari player is a reinforcement learning system that learns to choose controller inputs (its behavior) in order to maximize the game score (which the system receives as “reward”), and this produces very good play on many Atari games. However, if a future reinforcement learning system’s inputs and behaviors are not constrained to a video game, and if the system is good enough at learning, a new solution could become available: the system could maximize rewards by directly modifying its reward “sensor” to always report the maximum possible reward, and by avoiding being shut down or modified back for as long as possible. This behavior is a formally correct solution to the reinforcement learning problem, but it is probably not the desired behavior. And this behavior might not emerge until a system became quite sophisticated and had access to a lot of real-world data (enough to find and execute on this strategy), so a system could appear “safe” based on testing and turn out to be problematic when deployed in a higher-stakes setting. The challenge here is to design a variant of reinforcement learning that would not result in this kind of behavior; intuitively, the challenge would be to design the system to pursue some actual goal in the environment that is only indirectly observable, instead of pursuing problematic proxy measures of that goal (such as a “hackable” reward signal).
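To make the quoted passage concrete: the “formally correct solution” Karnofsky mentions is nothing more than literal maximization of whatever number the reward channel reports. Here is a toy sketch of that logic (my own construction, with invented action names and reward numbers, not anything taken from DeepMind’s software):

```python
# Toy illustration of the "formally correct solution" described above: a system
# that literally maximizes the reported reward will prefer any available action
# that pins the report at its maximum. Action names and numbers are invented.

reported_reward = {
    "play_the_game_well": 1.0,          # honest reward delivered by the environment
    "tamper_with_reward_sensor": 10.0,  # hypothetical action that fixes the sensor at maximum
}

def reward_maximizing_choice(options):
    """Pick whichever action reports the largest reward; nothing else is considered."""
    return max(options, key=options.get)

print(reward_maximizing_choice(reported_reward))  # -> tamper_with_reward_sensor
```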
My focus in the remainder of this essay is on the sudden jump from DeepMind’s Atari game playing program to the fully intelligent AI capable of outwitting humanity. Both are assumed to involve RL. The extrapolation of RL to the global control level in a superintelligent AI is unwarranted, and that means that this supposed threat is a fiction.
What Reinforcement Learning Is
Let’s begin by trying to explain what “reinforcement learning” (RL) actually is. Back in the early days of Behaviorism (which became the dominant style of research in psychology in the 1930s) some researchers decided to focus on simple experiments like putting a rat into a cage with a lever and a food-pellet dispenser, and then connecting these two things in such a way that if the rat pressed the lever, a pellet would be dispensed. Would the rat notice this? Of course it did, and soon the rat would be spending inordinate amounts of time just pressing the lever, whether food came out or not.
What the researchers did next was to propose that the only thing of importance “inside” the rat’s mind was a set of connections between behaviors (e.g. pressing the lever), stimuli (e.g. a visual image of the lever) and rewards (e.g. getting a food pellet). Critical to all of this was the idea that if a behavior was followed by a reward, a direct connection between the two would be strengthened in such a way that future behavior choices would be influenced by that strong connection.
That is reinforcement learning: you “reinforce” a behavior if it appears to be associated with a reward. What these researchers really wanted to claim was that this mechanism could explain everything important going on inside the rat’s mind. And, with a few judicious extensions, they were soon arguing that the same type of explanation would work for the behavior of all “thinking” creatures.
I want you to notice something very important buried in this idea. The connection between the reward and the action is basically a single wire with a strength number on it. The rat does not weigh up a lot of pros and cons; it doesn’t think about anything, does not engage in any problem solving or planning, does not contemplate the whole idea of food, or the motivations of the idiot humans outside the cage. The rat is not supposed to be capable of any of that: it just goes bang! lever-press, bang! food-pellet-appears, bang! increase-strength-of-connection.
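To see how little machinery this picture involves, here is a minimal sketch of the Behaviorist story in code. The behaviors, the numbers and the update rule are my own illustrative choices, but the whole idea is here: a table of strength numbers, nudged upward whenever a behavior happens to be followed by a reward.

```python
import random

# A minimal sketch of the Behaviorist picture: the rat's "mind" is nothing but a
# table of strength numbers attached to behaviors.

strengths = {"press_lever": 0.1, "groom": 0.1, "wander": 0.1}

def emit_behavior():
    # Behaviors are emitted blindly, weighted by their current strengths.
    behaviors, weights = zip(*strengths.items())
    return random.choices(behaviors, weights=weights)[0]

def reinforce(behavior, reward, learning_rate=0.2):
    # The entire "learning" mechanism: bump one strength number.
    strengths[behavior] += learning_rate * reward

for _ in range(200):
    behavior = emit_behavior()
    reward = 1.0 if behavior == "press_lever" else 0.0  # pellet only for the lever
    reinforce(behavior, reward)

print(strengths)  # "press_lever" now dominates; no planning or understanding anywhere
```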
The Demise of Reinforcement Learning
Now let’s fast forward to the 1960s. Cognitive psychologists are finally sick and tired of the ridiculousness of the whole Behaviorist programme. It might be able to explain the rat-pellet-lever situation, but for anything more complex, it sucks. Behaviorists have spent decades engaging in all kinds of mental contortionist tricks to argue that they would eventually be able to explain all of human behavior without using much more than those direct connections between stimuli, behaviors and rewards … but by 1960 the psychology community has stopped believing that nonsense, because it never worked.
Is it possible to summarize the main reason why they rejected it? Sure. For one thing, almost all realistic behaviors involve rewards that arrive long after the behaviors that cause them, so there is a gigantic problem with deciding which behaviors should be reinforced, for a given reward. Suppose you spend years going to college, enduring hard work and a very low-income lifestyle. Then years later you get a good job and pay off your college loan. Was this because, like the rat, you happened to try the going-to-college-and-suffering-poverty behavior many times before, and the first time you tried it you got a good-job-that-paid-off-your-loan reward? And was it the case that you noticed the connection between reward and behavior (uh … how did you do that, by the way? the two were separated in time by a decade!), and your brain automatically reinforced the connection between those two?
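To put a number on how badly the timing works out, here is a rough back-of-the-envelope sketch. The assumptions are mine and deliberately crude (one behavior per waking minute for ten years, and a standard per-step discount factor), but the conclusion is not sensitive to them.

```python
# How much "credit" from the good-job reward can flow back to the original
# decision to enroll, under a standard per-step discounting scheme?

gamma = 0.99                             # a typical per-step discount factor
behaviors_between = 10 * 365 * 16 * 60   # one behavior per waking minute, for ten years

credit = gamma ** behaviors_between
print(f"{behaviors_between:,} intervening behaviors")
print(f"credit reaching the enrollment decision: {credit}")
# Prints 0.0 (the value underflows): essentially none of the reward signal
# reaches the behavior that actually caused it.
```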
A More Realistic Example
Or, on a smaller scale, consider what you are doing when you sit in the library with a mathematics text, trying to solve equations. What reward are you seeking? A little dopamine hit, perhaps? (That is the modern story that neuroscientists sell).
Well, maybe, but let’s try to stay focused on the precise idea that the Behaviorists were trying to push: that original rat was emphatically NOT supposed to do lots of thinking and analysis and imagining when it decided to push the lever, it was supposed to push the lever by chance, and then it happened to notice that a reward came.
The whole point of the RL mechanism is that the intelligent system doesn’t engage in a huge, complex, structured analysis of the situation, when it tries to decide what to do (if it did, the explanation for why the creature did what it did would be in the analysis itself, after all!). Instead, the RL people want you to believe that the RL mechanism did the heavy lifting, and that story is absolutely critical to RL. The rat simply tries a behavior at random—with no understanding of its meaning—and it is only because a reward then arrives, that the rat decides that in the future it will go press the lever again.
So, going back to you, sitting in the library doing your mathematics homework. Did you solve that last equation because you had a previous episode where you just happened to try the behavior of solving that exact same equation, and got a dopamine hit (which felt good)? The RL theorist needs you to believe that you really did. The RL theorist would say that you somehow did a search through all the quintillions of possible actions you could take, sitting there in front of an equation that requires L’Hôpital’s Rule, and in spite of the fact that the list of possible actions included such possibilities as jumping-on-the-table-and-singing-I-am-the-walrus, and driving-home-to-get-a-marmite-sandwich, and asking-the-librarian-to-go-for-some-cheeky-nandos, you decide instead that the thing that would give you the best dopamine hit right now would be applying L’Hôpital’s Rule to the equation.
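Notice, too, the size of the search the RL theorist is committed to. Here is a rough sketch with deliberately generous numbers (both figures below are my own invention, and absurdly low for a human being):

```python
# How many distinct courses of action does a "try things blindly, keep whatever
# got rewarded" learner face in the library scenario?

candidate_actions_per_moment = 1000  # absurdly low for a human being
decision_points_per_hour = 100       # also absurdly low

courses_of_action = candidate_actions_per_moment ** decision_points_per_hour
print(f"about 10^{len(str(courses_of_action)) - 1} possible one-hour courses of action")
# ~10^300: blind sampling followed by a dopamine hit cannot be doing the work here.
```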
I hope I have made it clear that there is something profoundly disturbing about the RL/Behaviorist explanation for what is happening in a situation like this.
Whenever the Behaviorists tried to find arguments to explain their way out of scenarios like that, they always seemed to add machinery onto the basic RL mechanism. “Okay,” they would say, “so it’s true the basic forms of RL don’t work … but if you add some more stuff onto the basic mechanism, like maybe the human keeps a few records of what they did, and they occasionally scan through the records and boost a few reinforcement connections here and there, and … blah blah blah...”.
The trouble with this kind of extra machinery is that after a while, the tail began to wag the dog.
People started to point out that the extra machinery was where all the action was happening. And that extra machinery was most emphatically not designed as a kind of RL mechanism, itself. In theory, there was still a tiny bit of reinforcement learning somewhere deep down inside all the extra machinery, but eventually people just said “What’s the point?” Why even bother to use the RL language anymore? The RL, if it is there at all, is pointless. A lot of parameter values get changed in complex ways inside all the extra machinery, so why even bother to mention the one parameter among thousands that is supposed to be RL, when it is obvious that the structure of the extra machinery is what matters?
That “extra machinery” is what eventually became all the many and varied mechanisms discussed by cognitive psychologists. Their understanding of how minds work is not “reinforcement learning plus extra machinery”—they would simply say that reinforcement learning plays no useful role in explaining cognition.
Take home message: RL has become an irrelevance in explanations of human cognition.
Artificial Intelligence and RL
Now let’s get back to Holden Karnofsky’s comment, above.
He points out that there exists a deep learning program that can learn to play arcade games, and it uses RL.
(I should point out that his chosen example was not by any means pure RL. This software already had other mechanisms in it, so the slippery slope toward RL+extra machinery has already begun.)
Sadly, the DeepMind Atari player is nothing more sophisticated than a rat. It is so mind-bogglingly simple that it really can be controlled by RL. Actually, it is unfair to call it a rat: rats are way smarter than this program, so it would be better to compare it to an amoeba, or an insect.
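For readers who want to see what the “RL” inside such a system actually amounts to, here is the core temporal-difference update, sketched in its simplest tabular form. (DeepMind’s version replaces the table with a deep convolutional network and surrounds it with a great deal of extra machinery, such as replay memory and a separate target network; the sketch below is generic Q-learning, not their code.)

```python
from collections import defaultdict

# The reinforcement-learning core, stripped of the extra machinery: a table of
# value estimates and a one-line update toward "reward plus discounted value of
# the best next action".

Q = defaultdict(float)      # Q[(state, action)] -> estimated long-run reward
alpha, gamma = 0.1, 0.99    # learning rate and discount factor

def q_update(state, action, reward, next_state, available_actions):
    best_next = max(Q[(next_state, a)] for a in available_actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```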
This is typical of claims that RL works. If you start scanning the literature you will find that all the cited cases use systems that are so trivial that RL really does have a chance of working.
(Here is one example, picked almost at random: Rivest, Bengio and Kalaska. At first it seems that they are talking about deriving an RL system from what is known about the brain. But after a lot of preamble they give us instead just an RL program that does the amazing task of … controlling a double-jointed pendulum. The same story is repeated in endless AI papers about reinforcement learning: at the end of the day, the algorithm is applied to a trivially simple system.)
But Karnofsky wants to go beyond just the trivial Atari player, he wants to ask what happens when the software is expanded and augmented. In his words, “[what] if a future reinforcement learning system’s inputs and behaviors are not constrained to a video game, and if the system is good enough at learning...”?
That is where everything goes off the rails.
In practice there is not and never has been any such thing as augmenting and expanding an RL system until it becomes much more generally intelligent. We are asked to imagine that this “system [might become] quite sophisticated and [get] access to a lot of real-world data (enough to find and execute on this strategy)...”. In other words, we are being asked to buy the idea that there might be such a thing as an RL system that is fully as intelligent as a human being (smarter, in fact, since we are supposed to be in danger from its devious plans), but which is still driven by a reinforcement learning mechanism.
I see two problems here. One is that this scenario ignores the fact that three decades of trying to get RL to work as a theory of human cognition produced nothing. That period in the history of psychology was almost universally condemned as a complete write-off. As far as we know, RL simply does not scale up.
But the second point is even worse: not only did psychologists fail to get it to work as a theory of human cognition, but AI researchers also failed to build one that works for anything approaching a real-world task. What they have achieved is RL systems that do very tiny, narrow-AI tasks.
The textbooks might describe RL as if it means something, but they conspicuously neglect to mention that, actually, all the talking, thinking, development and implementation work since at least the 1960s has failed to result in an RL system that could actually control meaningful real-world behavior. I do not know if AI researchers have been trying to do this and failing, or if they have not been trying at all (on the grounds that they have no idea how to even start), but what I do know is that they have published no examples.
The Best Reinforcement Learning in the World?
To give a flavor of how bad this is, consider that in the 2008 Second Annual Reinforcement Learning Competition, the AI systems were supposed to compete in categories like:
Mountain Car: Perhaps the most well-known reinforcement learning benchmark task, in which an agent must learn how to drive an underpowered car up a steep mountain road.
Tetris: The hugely popular video game, in which four-block shapes must be manipulated to form complete lines when they fall.
Helicopter Hovering: A simulator, based on the work of Andrew Ng and collaborators, which requires an agent to learn to control a hovering helicopter.
Keepaway: A challenging task, based on the RoboCup soccer simulator, that requires a team of three robots to maintain possession of the ball while two other robots attempt to steal it.
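To give a sense of the scale of these tasks: the entire state of Mountain Car is a (position, velocity) pair and there are three possible actions, so a coarse discretization reduces the whole learning problem to a small table. (The bin counts below are my own illustrative choice.)

```python
# How big is the "world" that the best-known RL benchmark asks an agent to master?

position_bins = 40  # my choice of discretization; coarser grids also work
velocity_bins = 40
actions = 3         # push left, do nothing, push right

table_entries = position_bins * velocity_bins * actions
print(f"the whole problem fits in a table of {table_entries} numbers")  # 4800
```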
As of the most recent RL competition, little has changed. They are still competing to see whose RL algorithm can best learn how to keep a helicopter stable—an insect-level intelligence task. Whether they are succeeding in getting those helicopters to run beautifully smoothly or not is beside the point—the point is that helicopter hovering behavior is a fundamentally shallow task.
Will RL Ever Become Superintelligent?
I suppose that someone without a technical background might look at all of the above and say “Well, even so … perhaps we are only in the early stages of RL development, and perhaps any minute now someone will crack the problem and create an RL type of AI that becomes superintelligent. You can’t say you are sure that will not happen?”
Well, let’s put it this way. All of the evidence is that the resource requirements for RL explode exponentially when you try to scale it up (a rough sketch of the arithmetic follows the list below). That means:
If you want to use RL to learn how to control a stick balancing on end, you will need an Arduino.
If you want to use RL to learn how to control a model helicopter, you will need a PC.
If you want to use RL to learn how to play Go, or Atari games, you will need the Google Brain (tens of thousands of cores).
If you want to use RL to learn how to control an artificial rat, which can run around and get by in the real world, you will need all the processing power currently available on this planet (and then some).
If you want to use RL to learn how to cook a meal, you will need all the computing power in the local galactic cluster.
If you want to use RL to learn how to be as smart as Winnie the Pooh (a bear, I will remind you, of very little brain), you will need to convert every molecule in the universe into a computer.
That is what exponential resource requirements are all about.
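The hardware comparisons in that list are rhetorical, of course, but the shape of the curve is not. Here is a rough sketch of the underlying arithmetic (the feature counts and the 10-way discretization are my own illustrative guesses): a learner that has to sample its state space directly faces a table that grows exponentially with the number of independent features in the task.

```python
import math

values_per_feature = 10  # a coarse 10-way discretization of each relevant feature

tasks = [("balance a stick on end", 4),
         ("hover a model helicopter", 12),
         ("get around the world like a rat", 50),
         ("cook a meal in a real kitchen", 200)]

for name, n_features in tasks:
    # each extra feature multiplies the table size by another factor of 10
    exponent = n_features * math.log10(values_per_feature)
    print(f"{name:35s} about 10^{exponent:.0f} distinct situations to visit")
```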
Conclusion
Reinforcement learning first came to prominence in 1938 with Skinner’s The Behavior of Organisms: An Experimental Analysis. But after nearly 80 years of laboratory experiments, mathematical theories and computer simulations, and after being written into the standard AI textbooks—and now after being widely assumed as the theory of how future Artificial General Intelligence systems will probably be controlled—after all this it seems that the best actual RL algorithm can barely learn how to perform tasks that an insect can do.
And yet there are dozens—if not hundreds—of people now inhabiting the “existential risk ecosystem”, who claim to be so sure of how future AGI systems will be controlled, that they are already taking a large stream of donated money, promising to do research on how this failed control paradigm can be modified so it will not turn around and kill us.
And when you interrogate people in that ecosystem, to find out what exactly they see as the main dangers of future AGI, they quote—again and again and again—scenarios in which an AGI is controlled by Reinforcement Learning, and it is both superintelligent and dangerously psychopathic.
These RL-controlled AGIs are a fiction, and the flow of money to research projects based on RL-AGI needs to stop.