30th Soar workshop
This is a report, from a LessWrong perspective, on the 30th Soar workshop. Soar is a cognitive architecture that has been in continuous development for nearly 30 years, and is in a direct line of descent from some of the earliest AI research (Simon's LT and GPS). Soar is interesting to LessWrong readers for two reasons:
Soar is a cognitive science theory, and has had some success at modeling human reasoning—this is relevant to the central theme of LessWrong, improving human rationality.
Soar is an AGI research project—this is relevant to the AGI risks sub-theme of LessWrong.
Where I'm coming from: I'm a skeptic about the EY/SIAI dogmas that AI research is riskier than ordinary software development, and that FAI research is not AI research and has little to learn from the field of AI research. In particular, I want to understand why AI researchers are generally convinced that their experiments and research are fairly safe—I don't think that EY/SIAI are paying sufficient attention to these expert opinions.
Overall summary: John Laird and his group are smart, dedicated, and funded. Their theory and implementation moves forward slowly but continuously. There’s no (visible) work being done on self-modifying, bootstrapping or approximately-universal (e.g. AIXItl) entities. There is some concern about how to build trustworthy and predictable AIs (for the military’s ROE) - for example, Scott Wallace’s research.
As far as I can tell, the Soar group’s work is no more (or less) risky than narrow AI research or ostensibly non-AI software development. To be blunt—package managers like APT seem more risky than Soar, because the economic forces that push them to more capability and complexity are more difficult to control.
Impressions of (most of) the talks—they can be roughly categorized into three types.
Miscellaneous
Paul Rosenbloom (one of the original authors of Soar, with John Laird and Allen Newell) spoke about trying to create a layer architecturally beneath Soar based on Bayesian graphical models; specifically factor graphs. There were not a lot of people using or trying to use Bayesian magic at the workshop, but Paul Rosenbloom is definitely a Bayesian magician; he’s getting Rete to run as an emergent consequence of the sum-product algorithm.
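For readers who haven't seen the sum-product algorithm that this builds on, here is a minimal sketch of message passing on a tiny chain-structured factor graph. It has nothing to do with Rosenbloom's actual implementation (which gets Rete-style matching out of the same machinery); the factor tables and variable names are invented for illustration.

```python
# A minimal sketch of the sum-product algorithm on a tiny chain factor graph
#   X1 -- f12 -- X2 -- f23 -- X3
# All factor tables are invented; this is not Rosenbloom's implementation.
import numpy as np

f1  = np.array([0.7, 0.3])            # unary factor on X1
f12 = np.array([[0.9, 0.1],           # factor coupling X1 and X2, indexed [x1][x2]
                [0.2, 0.8]])
f23 = np.array([[0.6, 0.4],           # factor coupling X2 and X3, indexed [x2][x3]
                [0.3, 0.7]])

# Forward messages along the chain (sum out the sending variable).
m_f12_to_x2 = f1 @ f12                 # sum over x1 of f1(x1) * f12(x1, x2)
m_f23_to_x3 = m_f12_to_x2 @ f23        # sum over x2 of incoming * f23(x2, x3)

# Backward messages (X3 has no evidence, so its outgoing message is uniform).
m_f23_to_x2 = f23 @ np.ones(2)
m_f12_to_x1 = f12 @ m_f23_to_x2

def normalize(v):
    return v / v.sum()

# A variable's marginal is the normalized product of its incoming messages.
p_x1 = normalize(f1 * m_f12_to_x1)
p_x2 = normalize(m_f12_to_x2 * m_f23_to_x2)
p_x3 = normalize(m_f23_to_x3)

print(p_x1, p_x2, p_x3)
```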
Ken Forbus (of “Structure-Mapping Engine” fame) spoke generally about his research, including Companions and CogSketch. I got the impression that there are worlds and worlds within academia, and I only have faint visibility into one or a couple of them.
Extending, combining and unifying the existing Soar capabilities (uniformly by Laird and his students):
Nate Derbinsky’s tutorials and talks were on various extensions recently added to Soar (“RL”, “SMem”, “EpMem”).
Integrating reinforcement learning with the (already complicated) Soar architecture must have been difficult, but tabular Q-learning/SARSA is now well integrated. There's some support in the released code for eligibility traces and hierarchical reinforcement learning, but not for value function approximators. I believe that means that Soar-RL is not as capable at RL tasks as the cutting edge of RL research, but of course, the cutting edge of RL research is not as capable at the symbolic processing tasks that are Soar's bread and butter.
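For context, the tabular Q-learning that Soar-RL now supports is conceptually simple. The sketch below is generic textbook Q-learning, not Soar-RL's implementation; the five-cell corridor environment and all parameter values are invented for illustration.

```python
# Generic tabular Q-learning sketch; not Soar-RL's implementation.
# The corridor environment and parameters are invented for illustration.
import random
from collections import defaultdict

N_STATES, ACTIONS = 5, ("left", "right")
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.1, 500

Q = defaultdict(float)          # Q[(state, action)] -> estimated return

def step(state, action):
    """Deterministic corridor: move left/right, +1 reward at the rightmost cell."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def choose(state):
    """Epsilon-greedy action selection over the tabular Q values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(EPISODES):
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        # One-step Q-learning backup: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

print({(s, a): round(Q[(s, a)], 2) for s in range(N_STATES) for a in ACTIONS})
```

The tabular backup above is roughly the capability Soar-RL has; replacing the table with a learned function of state features (a value function approximator) is, as I understand it, the missing part.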
SMem is essentially a form of content-addressable storage that is under Soar's explicit control. This is in contrast to Soar's working memory, which is content-addressable via (Rete) pattern-matching, and is more analogous to being involuntarily reminded of something than to deliberately building a cue and searching one's memory for a match. This means that SMem scales to larger sizes than working memory.
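To make the contrast concrete, here is a toy sketch of deliberate, cue-based retrieval. The stored facts, attribute names, and match scoring are mine, not Soar's; real SMem has its own query syntax and activation biases.

```python
# Toy sketch of deliberate cue-based retrieval, loosely in the spirit of SMem.
# Stored facts, attribute names, and the scoring rule are invented.
memory = [
    {"type": "monster", "name": "kestrel", "letter": "K", "danger": "low"},
    {"type": "monster", "name": "troll",   "letter": "T", "danger": "high"},
    {"type": "item",    "name": "potion",  "letter": "!"},
]

def retrieve(cue):
    """Return the stored element matching the most attributes of the cue."""
    def score(element):
        return sum(1 for k, v in cue.items() if element.get(k) == v)
    best = max(memory, key=score)
    return best if score(best) > 0 else None

# The agent deliberately builds a cue and queries, rather than waiting to be
# "reminded" by pattern matching over working memory.
print(retrieve({"type": "monster", "letter": "T"}))   # -> the troll entry
```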
EpMem is a memory of the content of past working memories. Unlike SMem, Soar needn't explicitly store into this memory; if the feature is turned on, every working memory is stored into EpMem. Fetching from EpMem is content-addressable, much as with SMem, though once an episode has been fetched, Soar can also ask what happened next or just before it.

John Laird spoke about how the new features of Soar enable new forms of action modeling. Action modeling is what you use to simulate, internally, the consequences of actions, in the current state or some anticipated future state; it is necessary for planning. The standard way to do this in Soar has been for the programmer to add rules explaining the consequences of actions. SMem, EpMem and imagery (a currently-being-developed extension to Soar) can each enable new forms of action modeling.
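A bare-bones illustration of what an action model buys you: given rules that predict each action's consequences, the agent can look ahead internally before acting. The states, actions, and evaluation function below are invented for illustration and are not taken from Soar.

```python
# Minimal one-step lookahead using an action model: simulate each action's
# consequences internally, then pick the best. Everything here is invented.
def model(state, action):
    """Rule-like action model: predict the next state without acting."""
    x, y = state
    moves = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
    dx, dy = moves[action]
    return (x + dx, y + dy)

def evaluate(state, goal):
    """Closer to the goal is better (negative Manhattan distance)."""
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

def plan_step(state, goal, actions=("north", "south", "east", "west")):
    # Internally simulate each action via the model and choose the best outcome.
    return max(actions, key=lambda a: evaluate(model(state, a), goal))

print(plan_step((0, 0), goal=(3, 0)))   # -> "east"
```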
Mitchell Bloch replicated a particular hierarchical reinforcement learning experiment (Dietterich 1998, MAXQ). My takeaway was that Soar-RL is steadily advancing, incorporating ideas from the RL literature, and though they’re behind the state of the (RL) art now, that’s changing.
Nick Gorski studies combining learning and memory (“learning to use memory”).
Joseph Xu’s first talk was about combining symbolic planning (an old strength of Soar) with imagery and learning. His domain was two-dimensional block stacking with continuous space and gravity, though it might eventually be The Incredible Machine.
In his second talk, he spoke about an architectural variant of Soar-RL that learns a model and a policy simultaneously, eliminating a free parameter. His domain here was a probabilistic version of vector racer. Again, Soar-RL is steadily advancing.

Yongjia Wang's talks both used a domain of learning to hunt various kinds of prey with various kinds of weapons. As far as I could tell, the goal was further integration of RL technology (hierarchical value function approximators) into Soar.
Sam Wintermute spoke about imagery—a not-yet-released extension to Soar. His system learns to play Frogger and two other Atari video games, and learns much better using imagery than without.
Justin Li announced an intention to develop ways to learn subgoal hierarchies. Again, the theme is integration of capabilities; learning and subgoal hierarchies.
Applications of Soar
Kate Tyrol’s and my talks were fairly trivial, trying to connect Soar to Ms. Pacman and Rogue respectively. (We basically used Soar badly; using small fractions of its capabilities and becoming flummoxed by bugs that we introduced.)
I studied Rogue partly because Rogue has escape-to-shell functionality (so a sufficiently clever Rogue AI could easily escape and become a "rogue AI"), and I wanted to understand my own implicit safety case while developing it.

Isaiah Hines (no relation) and Nikhil Mangla announced that they are connecting Soar to Unreal.
Brian Magerko studies improv theatre, and makes some use of Soar in some of his models of improvisation. As far as I can tell, his current use of Soar could be easily replaced by ordinary “non-AI” programming; his association with Soar is in the past (as one of John Laird’s students) and in the future, as his micro-models of improv theatre are (hopefully) combined into an agent that can play improv games.
Shiwali Mohan attacked the Infinite Mario task with Soar-RL using various strategies. My takeaway was that framing the problem for Soar-RL is tricky on realistically difficult tasks like Infinite Mario (rather than “gimme” tasks like T-mazes and broomstick balancing). You can give the task of “make it learn” to a smart grad student and have them tweak and poke at it for a year or so, and still have substantially lower learned performance than handcoded agents.
Bryan Smith spoke about putting OWL ontologies into Soar’s SMem. As far as I can tell, this wasn’t for any particular purpose, but just to play with the two technologies.
Sean Bittle spoke about using Soar to learn heuristics for constraint programming. As far as I can tell, this was a negative or intermediate result; essentially no successful learning, or too little for the amount of effort and complexity introduced.
Olivier Georgeon spoke about a model of early developmental learning. I believe this is intended to model human learning, but the domain he was using was a fairly inhuman gridworld.
Bob Wray spoke about using Soar to create learning experiences.
I was a bit disappointed, since (as far as I can tell) this work simply used Soar as an exotic programming language. I believe this is one of the primary ways that Soar could help expand human rationality: if the procedures that a human is supposed to learn are encoded as Soar productions, and the training software can test whether a student has any given production and instill it if it is not there, then (assuming Soar is a decent model of how humans think) instilling all of the necessary productions should also instill the complete procedure.

Margaux Lhommet announced an intention to use Soar in a training simulation to control simulated victims of a radioactive or biological terrorist attack.
Bill Kennedy announced that he was looking for something like Soar, but lighter-weight, so he could do large-scale agent-based simulations.
Nate Derbinsky wrote a piece of middleware (Soar2Soar) so that Soar programmers can use the Soar programming language to write the environment. I’m so unskilled with Soar that that doesn’t sound like a win to me, but Soar is sometimes very declarative, and might be appropriate for rapid development. He also got Soar to run on an iPhone.
Jonathan Voight spoke about Sproom, a mixed virtual/real robot simulation. The agent can control a virtual robot or a real robot, and even when it's controlling a real robot, the virtual environment can augment the inputs, so a wall-avoiding robot will avoid both real and virtual walls. The bot's effectors can also be implemented in the virtual reality, so it can drive around in the real world, picking up virtual objects and putting them away.
John Laird spoke about using Sproom to study "Situated Interactive Instruction"—so that an agent could be taught, by interacting in semi-formal language with a human, while it is performing its task. The domain is robots moving through a building clearing IEDs; the IEDs and the operations on them (pickup, defuse) are virtual. As I understand it, some but not all of this functionality is currently working.
Shiwali Mohan used the same sort of situated interactive instruction in the Infinite Mario domain, though not very much data was conveyed (yet) via instruction. As I understand it, Mohan’s previous hardwired agent had three verbs like “tackle-monster” and “get-coin”; the instruction consists of the agent asking “I see a coin, which verb should I use for it?”—so after the human has answered the three object-verb correspondences, it knows everything it will ever learn via instruction. However, it’s working and could be extended.
I want to emphasize that these are just my impressions (which are probably flawed—because of my inexperience I probably misunderstood important points). The proceedings (that is, the slides everyone used for their talks) will soon be available, so you can read them and form your own impressions.
There are three forks to my implicit safety case while developing. I’m not claiming this is a particularly good safety case or that developing Rogue-Soar was safe—just that it’s what I have.
The first fork is that tasks vary in their difficulty (Pickering's "resistances"), and entities vary in their strength or capability. There's some domain-ish structure to entities' strengths (a mechanical engineering task will be easier for someone trained as a mechanical engineer than for a chemist), and intention matters—difficult tasks are rarely accomplished unintentionally. I'm fairly weak, and my agent-in-progress was and is very, very weak. The chance that I or my agent solves a difficult task (self-improving AGI) unintentionally while writing Rogue-Soar is incredibly small, and comparable to the risk of my unintentionally solving self-improving AGI while working at my day job. This suggests that safer (not safe) AI development might involve: one, tracking Elo-like scores of people's strengths and task difficulties, and two, tracking and incentivizing people's intentions.
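To make the first suggestion concrete, here is a sketch of one way an Elo-like scheme might work, treating a task attempt like a game between a person (or agent) and the task. The update rule is standard Elo; the ratings, K-factor, and scenario are invented.

```python
# Sketch of Elo-like ratings of entity strength vs. task difficulty.
# Standard Elo update; the scenario, K-factor, and ratings are invented.
def expected_success(strength, difficulty):
    """Probability the entity completes the task, under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((difficulty - strength) / 400.0))

def update(strength, difficulty, succeeded, k=32.0):
    """Return updated (strength, difficulty) after one attempt."""
    e = expected_success(strength, difficulty)
    outcome = 1.0 if succeeded else 0.0
    return strength + k * (outcome - e), difficulty - k * (outcome - e)

me, rogue_soar = 1500.0, 1400.0          # I'm a bit stronger than this task
self_improving_agi = 3000.0              # far beyond my rating

print(expected_success(me, rogue_soar))          # a fair chance of success
print(expected_success(me, self_improving_agi))  # vanishingly small
me, rogue_soar = update(me, rogue_soar, succeeded=True)
```

The point is only that both people and tasks get ratings, so "how likely is this person to accomplish this task?" becomes a number that can be tracked over time.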
The second fork is that even though I'm surprised sometimes while developing, the surprises are still confined to an envelope of possible behavior. The agent could crash, run forever, move in a straight line or take only one step when I was expecting it to wander randomly, but pressing "!" when I expected it to be confined to "hjkl" would be beyond this envelope. Of course, there are many nested envelopes, and excursions beyond the bounds of the narrowest are moderately frequent. This suggests that safer AI development might involve tracking these behavior envelopes (altogether they might form a behavior gradient), and the frequency and degree of excursions, and deciding whether development is generally under control—that is, acceptably risky compared to the alternatives.
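A crude sketch of what tracking an envelope could mean in practice: declare the expected outputs ahead of time and log anything outside them. The keypress alphabet here is specific to my Rogue agent, and the trace is made up.

```python
# Crude sketch of tracking a behavior envelope: declare the expected keypresses
# ahead of time and log any excursion. The alphabet and trace are illustrative.
EXPECTED_KEYS = set("hjkl")       # movement keys I expect the agent to emit
excursions = []

def send_to_rogue(key):
    """Placeholder for actually sending a keypress to the game."""
    pass

def emit(key, step):
    if key not in EXPECTED_KEYS:
        # "!" (escape to shell) or anything else unexpected gets recorded
        excursions.append((step, key))
    send_to_rogue(key)

for step, key in enumerate("hjjkhl!l"):   # a made-up trace of agent output
    emit(key, step)

print(f"{len(excursions)} excursion(s) outside the envelope: {excursions}")
```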
The third fork is that the runaway takeoff arguments necessarily involve circularities and feedback. By structural inspection and by intention, if the AI is dealing with Rogue, and not learning, programming, or bootstrapping, then it’s unlikely to undergo takeoff. This suggests that carefully documenting and watching for circularities and feedback may be helpful for safer AI research.
I agree with all of your summaries, so any readers should partially discount the many occurrences of “so far as I can tell.” :)
How do you respond to the claim that the few researchers who are working directly on very general, Solomonoff-approximating AI systems are, when put together, dangerous?
Everything is somewhat dangerous; and relative risk is more important than absolute risk.
The entities that I think are most dangerous existential risks are inhumanly capable, distributed entities that hold many human lives and livelihoods hostage against direct assault and defend their continued existence ferociously, sometimes by disinformation or by promoting human irrationality. These entities are more like fault lines in a crystal or knots in string than they are like organisms. I’m thinking of various industries, the practice of advertising, perhaps capitalism, nationalism or some features of how they each are presently implemented—but the most dangerous entities are the ones that we don’t currently recognize.
I believe this is closer to Bill Joy’s pessimism, than EY’s. It has everything to do with these sort of large, difficult-to-control social forces, and almost nothing to do with doubts about scientists’ ability to control their experiments.
Studying AGI probably has some associated risks, but as a disruptive technology, it might be able to move some of those crystal flaws or knots out of the center of human society—and doing nothing, or simply allowing the invisible hand of the market to push you along, will not help.
A lot of people fear corporations. A lot of people fear governments. And not a few think religion is pernicious.
So none of them can be the ones we don’t currently recognize.
I feel like I’m opening the Necronomicon, but besides religions, corporations, and governments, what else should I be worried about?
The forces that have been operating all along, that we don’t perceive or name because they aren’t close enough to any of the patterns our brains evolved to perceive. http://lesswrong.com/lw/10n/why_safety_is_not_safe/
This belongs in a piece of Lovecraftian fiction.
I’ve contemplated writing something along those lines, but I haven’t yet managed to come up with something that would be an entertaining story and still keep the point. Even Lovecraft, after all, only managed to write evocative fiction about an uncaring universe via the metaphor of intelligent monsters.
There was a short story (sorry, no cite) about humans picking up radio transmissions from an alien species which had solved the problem of war and, IIRC, poverty, too. They were so prosperous they could afford to eat off of lead plates! But they were dying for reasons they couldn't understand… IIRC, they died off before they could explain their social insight to humanity.
More subtly, I suspect that large scale mood disorders could affect the fate of cultures. Without fully invoking the mind-killer, if people stop caring enough to keep their society going, whether by not bothering with crucial details, or by building defection into their institutions, then there will be a long term effect.
I’ve heard of an old theory that sunspots could affect people by making them more irritable. (Again, no cite, and I don’t think it matters if it was a real theory.) Anything that makes people generally more irritable increases the chance of war because the people at the top are more likely to create and react to slights and threats.
Large scale mood disorders would be in our blind spot, but not in a spooky or interesting way. They’re just too big and too slow for us to notice, and besides, surely there’s nothing wrong with our national character.
Don’t mock me, I’m trying to stretch my imagination here.
What about something like shoes? The transition from barefooting to shoes-wearing norms might be prestige-driven (so despite the discomfort of switching TO shoes, it’s prestigious) and the norm could be enforced by the discomfort of switching AWAY from shoes, once your feet are adjusted.
Generally, things that encourage one-way society-level transitions that look like "society is addicted to X" might be dangerous. The "things" in question (social practices, memes, consumer products) might look much weirder or more innocent than shoes.
Cultural ratcheting, so that people forget they have options, is probably a problem in most cases. It's probably a good thing that people no longer think in terms of murderous feuds between families, though.
I do think rule ratcheting (that it's generally much easier to add rules than to get rid of them) is a very serious problem for human beings.
“Society is addicted to X?” Alcohol, tobacco, & caffeine come to mind. Of course, our present culture is trying hard to overcome its former tobacco addiction.
Addictive behaviors at the individual level are a problem, but a society seemingly unable to control itself could be caused by some other kind of irrationality at the individual level—or even something like individual “rational” self-interest.
Can you list some? If you mean AIXI-like systems, I see them as a new type of complexity theory used to help us understand the problem space, but not as a cognitive architecture that might be used in an AGI. Nothing using such an exhaustive approach could operate in the real world.
Sorry for the slow response. Solomonoff, until he died recently. Marcus Hutter is implementing an AIXI approximation last I heard. Eray Ozkural, implementing Solomonoff’s ideas. Sergey Pankov, implementing an AIXI approximation.
Say what? How does that work?
Pretty much all Soar work uses Soar as an exotic programming language. It’s a production system. When I studied it, which admittedly was 18 years ago, it had no learning component beyond a mysterious, programmer-dependent “chunking”. I would call it a cognitive infrastructure, that you can use to build a cognitive architecture.
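For readers who haven't seen one: a production system repeatedly matches condition-action rules against working memory and fires the ones that match. The toy match-fire loop below is not Soar (no Rete, no real conflict resolution, no chunking); the rules and facts are invented.

```python
# Toy production system: repeatedly match condition->action rules against
# working memory and fire one. Not Soar's Rete-based matcher; rules invented.
working_memory = {("monster", "adjacent"), ("health", "low")}

rules = [
    # (name, conditions that must all be in working memory, facts to add)
    ("flee",   {("monster", "adjacent"), ("health", "low")},  {("action", "run-away")}),
    ("attack", {("monster", "adjacent"), ("health", "high")}, {("action", "fight")}),
]

def match_fire_cycle(wm, rules, max_cycles=10):
    for _ in range(max_cycles):
        # A rule matches if its conditions hold and it hasn't already fired.
        matched = [(name, adds) for name, conds, adds in rules
                   if conds <= wm and not adds <= wm]
        if not matched:
            break
        name, adds = matched[0]      # trivial conflict resolution: first match
        print("firing", name)
        wm |= adds
    return wm

print(match_fire_cycle(working_memory, rules))
```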
Its claim to being a cognitive architecture relied (at least originally) largely on being able to model human performance time on different tasks. But any programming language will be able to do that, if the task naturally has some particular input-dependent runtime, and the task is described properly.
Soar has the problem that it's funded by the military, who want to use it in simulations, e.g. JSAF; but they want untrained military personnel to be able to build things with it. They don't want a smarter AI as much as they want an easier-to-use AI.
I only saw the 15-minute version, and I didn’t understand it, but Rosenbloom’s papers, including an unpublished one about the “Bayesian Decision Cycle” are online: http://cs.usc.edu/~rosenblo/pubs.html
Every general-purpose AI tool converges on being a programming language.
The more it becomes a programming language, the less it has to do with AI.
I'm a bit confused as to what Soar is and how it works, but it does sound very interesting. Of course, trying to model the way the human mind works is the opposite of what we should be trying to do. Imagine all of the shortcomings of human reasoning highly exaggerated in a computer simulation.
Re: “In particular, I want to understand why AI researchers are generally convinced that their experiments and research are fairly safe”
Stupid machines are the ones that kill people. Look at the roads. Machine intelligence will put an end to that carnage.