[Book Review] Why Greatness Cannot Be Planned: The Myth of the Objective
Imagine you arrive at work and your boss tells you that, instead of attending your daily meetings about benchmarks and milestones, you should just do whatever you find most interesting. What would you do?
Kenneth O. Stanley and Joel Lehman, currently both AI researchers at OpenAI, begin their book Why Greatness Cannot Be Planned with this provocative question. The book argues that when it comes to achieving ambitious goals in society and culture, such as innovating in science, educating our children, or creating beautiful art, objectives are not only useless but actively detrimental.
Suppose that 1000 years ago someone had come up with the theoretical idea of computation and set themselves the objective of building a computer. This person would almost certainly not have started by researching vacuum tubes, even though vacuum tubes turned out to be a crucial stepping stone on the way to the first computers. The vacuum tube was invented with no thought of computers at all; it grew out of research into electricity and radio waves. The point is that the stepping stones rarely resemble the final product, so optimizing directly towards an objective can actually make it harder to reach.
This is easy to see in a maze that the authors use as an example. If we trained an AI agent to solve this maze and gave it the objective of minimizing its distance to the finish, it would learn to go up, hit the wall, and get stuck there. To solve the maze, the agent first has to move away from the finish before it can reach it. The objective is in this case deceptive, and the authors argue that this is true for “just about any problem that’s interesting”.
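To make the deception concrete, here is a minimal sketch (with a made-up maze layout, not the figure from the book) of a purely greedy agent that always steps toward the finish. It works its way toward F until it reaches the wall separating the two corridors and then stops, because every remaining move would increase its distance to the goal:

```python
# Toy illustration of a deceptive maze: '#' is a wall, 'S' the start, 'F' the finish.
# The layout is invented for this sketch; the book's maze is different.
MAZE = [
    "#########",
    "#F......#",
    "#######.#",
    "#.......#",
    "#...S...#",
    "#########",
]

def find(ch):
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def dist(a, b):
    # Manhattan distance: the "distance to the finish" the agent tries to minimize
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_agent(max_steps=50):
    pos, goal = find("S"), find("F")
    for _ in range(max_steps):
        neighbours = [(pos[0] + dr, pos[1] + dc)
                      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if MAZE[pos[0] + dr][pos[1] + dc] != "#"]
        best = min(neighbours, key=lambda n: dist(n, goal))
        if dist(best, goal) >= dist(pos, goal):
            return pos          # no move gets any closer: stuck against the wall
        pos = best
    return pos

print(greedy_agent())  # (3, 1): right below the wall that seals off F
```

The only way out is to first walk right, away from F, through the gap in the wall, which is exactly the kind of step a distance-minimizing objective forbids.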
But what is the alternative to following an objective? Just performing random actions? That doesn’t seem terribly efficient either. The book proposes following the interesting and the novel instead. The authors developed an algorithm called novelty search, in which, instead of optimizing an objective, the agent simply tries out behaviors that are as novel to it as possible. This algorithm solves the deceptive maze much more quickly than objective-based algorithms, even though it is not even trying to succeed!
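The authors’ original novelty search was built on top of neuroevolution, which is well beyond the scope of a review; what follows is only a stripped-down sketch of the core idea on the same toy maze as above. Candidates are random move sequences, a candidate’s “behavior” is the cell it ends on, and selection rewards behaviors that are far from everything in a growing archive. The encoding, mutation scheme, and all parameters are my own illustrative choices:

```python
import random

MAZE = [
    "#########",
    "#F......#",
    "#######.#",
    "#.......#",
    "#...S...#",
    "#########",
]
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
START, GOAL = (4, 4), (1, 1)    # the cells marked S and F

def run(genome):
    """Walk the move sequence from START, ignoring moves into walls."""
    r, c = START
    for m in genome:
        dr, dc = MOVES[m]
        if MAZE[r + dr][c + dc] != "#":
            r, c = r + dr, c + dc
    return (r, c)               # the behavior: where the walk ended up

def novelty(behavior, archive, k=5):
    """Mean distance to the k nearest behaviors already in the archive."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior[0] - b[0]) + abs(behavior[1] - b[1])
                   for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def mutate(genome, rate=0.2):
    return [random.choice("UDLR") if random.random() < rate else m
            for m in genome]

def novelty_search(pop_size=40, genome_len=30, generations=500):
    population = [[random.choice("UDLR") for _ in range(genome_len)]
                  for _ in range(pop_size)]
    archive = []
    for gen in range(generations):
        scored = []
        for genome in population:
            behavior = run(genome)
            if behavior == GOAL:               # never asked for, reached anyway
                return gen
            scored.append((novelty(behavior, archive), genome, behavior))
        scored.sort(key=lambda t: t[0], reverse=True)
        archive.extend(b for _, _, b in scored[:5])   # remember the novel endpoints
        parents = [g for _, g, _ in scored[:pop_size // 2]]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return None                                # not reached within the budget

print("finish reached at generation:", novelty_search())
```

Nothing in the selection loop ever mentions the finish; the agent is only pushed to end up somewhere it has not ended up before, and F is reached as a side effect.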
Why Greatness Cannot Be Planned is a fun and interesting read, but it is somewhat repetitive and does not always pick the most compelling examples. One chapter describes how the butterfly on the cover was generated by a human-in-the-loop algorithm following stepping stones that had nothing to do with butterflies; this was probably one of the authors’ main inspirations, but it comes across as a bit too technical and dry for the reader. Still, if you are looking for some encouragement to follow your own idiosyncratic interests, this book is definitely for you!
Continuing the metaphor, what the authors are saying looks to some extent similar to stochastic gradient descent (which would be the real way you would minimize the distance to the finish in the maze analogy).
Or A*, which is a much more computationally efficient and deterministic way to minimize the distance to the finish of the maze, if you have an appropriate heuristic. I don’t have an argument for it, but I feel like finding a good heuristic and leveraging it probably works very well as a generalizable strategy.
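For what it’s worth, here is a small sketch of that, again on the invented toy maze from above rather than anything from the book or this thread: A* with the Manhattan distance as an admissible heuristic deterministically finds the shortest route, detour around the wall included.

```python
import heapq

MAZE = [
    "#########",
    "#F......#",
    "#######.#",
    "#.......#",
    "#...S...#",
    "#########",
]

def find(ch):
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def a_star():
    start, goal = find("S"), find("F")

    def h(p):                                  # heuristic: Manhattan distance to the finish
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, position)
    best_g = {start: 0}
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g                           # length of the shortest path
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if MAZE[nxt[0]][nxt[1]] != "#" and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

print(a_star())   # 12: right and up around the wall, then left along the top row to F
```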