On the Equivalence of Supergoals
This is a response to Ars Longa, Vita Brevis, an excellent piece by Scott Alexander. In fact, it moved me so much that I signed up on LessWrong just to write this response. I’m going to argue that the essay’s central idea is wrong, and that’s a good thing. You should read Alexander’s essay before reading mine.
Alexander writes:
The first student has no master, and must discover everything himself. He researches for 70 years, then writes his wisdom into a book before he dies. The second student reads the book, and in 7 years, he has learned 70 years of research. Then he does his own original research for 63 years and writes a book containing 133 years of research. The third student reads for 13.3 years, then does his own original research for 56.7 years, ending up with about 190 years. Imagine going further and further. After many generations, 690 years of research have been done, and it takes a student 69 years to master them. The student only has one year left of life to research further, leaving the world with 691 years of research total. So the cycle creeps onward, always approaching but never quite reaching 700 years of alchemical research.
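To make the arithmetic explicit (the formalization below is mine, not Alexander's): each student spends a tenth of the accumulated research-years catching up, and the rest of a 70-year career doing new work. A few lines of Python reproduce the numbers in the quote and show the convergence to 700:

```python
# Toy model of the lineage in the quote (my formalization, not Alexander's).
# Each student has a 70-year career, reads prior results at 10x the speed
# they were discovered, and spends the remaining years on original research.
CAREER_YEARS = 70
READING_SPEEDUP = 10

knowledge = 0.0  # accumulated research-years
for generation in range(1, 101):
    reading_time = knowledge / READING_SPEEDUP   # years spent catching up
    research_time = CAREER_YEARS - reading_time  # years left for new work
    knowledge += research_time
    if generation <= 3 or generation == 100:
        print(f"generation {generation}: {knowledge:.1f} research-years")
# Prints 70.0, 133.0, 189.7, ..., 700.0 (really 699.98): the closed form is
# 700 * (1 - 0.9**n), which approaches but never reaches 700.
```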
He then admits that real research doesn’t work that way, adding:
It would only work that way if there were an Art so unified, so perfect, that a seeker had to know the totality of what had been discovered before, if he wanted to know anything at all.
Of course, there are lots of ways in which this model is an over-simplification. Every model has to cut some corners, but this one has a bigger problem: it disagrees with reality. The model predicts a decelerating rate of progress, with individual contributions diminishing over generations. In reality, however, scientific progress seems to be accelerating. It's hard to measure, of course, but I've seen claims that the part of the 21st century that has already passed has brought more scientific discoveries than the entire 20th century, let alone the centuries before, and these claims don't seem implausible to me.
Still, the central idea of Alexander's essay doesn't look unreasonable. We do have more scientific knowledge to learn now than a century ago, in any given direction of study. However, the age at which a young scientist can start making a useful contribution doesn't seem to be increasing. Grad students do it all the time, at roughly the same age as ever.
At least a part of what keeps the time-to-cutting-edge from growing must be increasing specialization. As a geometric metaphor, you can view the domain of human knowledge as a shape that grows over time. As it grows, so does its exterior. Nowadays it's no longer possible to be as broad-spectrum a polymath as, say, Leonardo da Vinci was. You start at zero and still reach the exterior at about the same age as he did (it certainly doesn't take 70 years), but the stretch of the exterior where you can contribute is now much narrower. Sure, in terms of the idealistic goal “to know everything”, the outlook is probably not good. But in any specific research program, such as “colonize Mars”, “cure cancer”, “stop global warming” or “end poverty”, humanity is now in a better position than ever.
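Here's one way to make the metaphor quantitative, as a toy sketch under my own assumptions (knowledge as a disc, a fixed per-career learning budget; none of this is from the essay): the time to reach the frontier stays constant, but the stretch of frontier a single specialist can touch shrinks as total knowledge grows.

```python
import math

# Toy disc model (my assumptions): total knowledge is the area of a disc of
# radius r; a specialist spends a fixed learning budget on a sector that
# reaches from the center (knowing nothing) to the frontier (the edge).
LEARNING_BUDGET = 50.0  # knowledge-units one career can absorb (arbitrary)

for r in (10, 20, 40, 80):
    total_knowledge = math.pi * r ** 2
    sector_angle = 2 * LEARNING_BUDGET / r ** 2  # sector area == budget
    frontier_arc = sector_angle * r              # frontier you can touch
    print(f"r={r:2}: total={total_knowledge:8.0f}, your arc={frontier_arc:5.2f}")
# The arc shrinks like 1/r: you still reach the cutting edge on schedule,
# but within an ever-narrower specialty.
```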
Still, what is it that makes scientific progress accelerate towards supergoals like the above? I think the essay begins to answer this question:
You would have to be clever. We imagine each master writing down his knowledge in a book for the student who comes after, and each student reading it at a rate of ten times as quickly as the master discovered it. But what if there was a third person in between, an editor, who reads the book not to learn the contents, but to learn how to rewrite it better and more clearly? Someone whose job it is to figure out perfect analogies, clever shortcuts, new ways of graphing and diagramming the information involved. After he has processed the master’s notes, he redacts them into a textbook which can teach in only a twentieth the time it took the master to discover.
Indeed, one way to push beyond the postulated 700 research-year limit is to rewrite books and improve teaching. I like how these improvements feed on themselves: improved teaching also improves the teaching of aspiring teachers, and improved book-writing yields better books on book-writing. In a way, good book-writing and good teaching are forms of information compression, pushing knowledge's representation closer to the Kolmogorov optimum. But that's not the only way to speed up scientific progress. Let's look at a few other ways in which modern scientists are in a better position than at any earlier point in history:
Better information storage and retrieval. Access any scientific paper in seconds without going to a library. Search for them by keywords or full text.
Better communication. Collaborate with scientists anywhere, in real time, with video. Send gigabytes of data with a click. Participate in conferences by flying to the venue (yes, flying in the sky, like a bird!) or remotely.
Better computation. Analyze terabytes of data on a piece of commodity hardware they sell in shopping malls. Produce interactive visualizations. Use machine learning to look for patterns and correlations across numerous variables. Run complex simulations.
More people participating. The world population has increased almost eight-fold since 1804, and life expectancy has about doubled during the same period. The share of people getting educated and eventually becoming scientists has been growing, too.
Better funding. It's hard to find data on combined public and private funding of scientific research over the centuries, but a look at the top world economies suggests that having a Silicon Valley does more for a nation's wealth than having a lot of oil. Both nations and private businesses these days fund research into things like spaceflight, superconductors, and gene therapy. All told, humanity has more total resources these days, and is willing to spend a greater share of them on research rather than, say, war.
And it’s not just a list of things that add up. No, they more than add up: they feed on each other and on themselves. For example, better information storage and retrieval improves education, which leads to more people becoming scientists, teachers and information technologists, which leads to a faster pace of progress. For a more specific pathway, the availability of the internet (in particular, Wikipedia) allows more people in developing countries to educate themselves. Some of them become programmers and contribute to better information storage, communication and computation. Others become teachers and textbook writers. Others still pursue medical careers, contributing to longer lifespans, or agricultural research, making it possible to feed more people and therefore increasing the number of participants.
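As a toy illustration of "more than add up" (my construction, nothing rigorous): if each of five inputs improves independently, total capability grows linearly; if each input's improvement is boosted by the current level of the others, growth compounds into an exponential.

```python
import math

# Five inputs: storage, communication, computation, people, funding.
# Variant A: each improves on its own. Variant B: each improvement is
# scaled by the geometric mean of all inputs, modeling the feedback loops.
STEPS, RATE = 50, 0.1

independent = [1.0] * 5
coupled = [1.0] * 5
for _ in range(STEPS):
    independent = [x + RATE for x in independent]
    mean = math.prod(coupled) ** (1 / len(coupled))  # geometric mean
    coupled = [x + RATE * mean for x in coupled]

print(sum(independent))     # 30.0 -- linear growth
print(round(sum(coupled)))  # ~587 -- exponential growth from the feedback
```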
Alexander’s essay doesn’t state what the supergoal of the research program is; the philosopher’s stone is only a stepping stone (lame pun intended) towards some greater question, like what the hell 42 means or something. But that supergoal doesn’t matter much. Human progress in every area seems to improve human progress in every other area, so the progress towards the 42-question is correlated with the progress towards the cure for cancer, the progress towards cheap renewable energy, and, in general, the progress towards maximizing almost any reasonable global utility function measuring human development and well-being.
This is good news. In a way, any supergoal from a certain class, if sufficiently difficult, is equivalent to any other supergoal in that it causes accelerating progress across the board. For example, if you take “end poverty” as the supergoal, then either it’s easy, or it will cause “stop global warming” and all other supergoals from the class to be achieved as well. And if you believe in friendly-AI singularity, then you must believe that this class includes “create friendly superhuman general-purpose AI”.
Should the alchemists take a break to heal the prince? In a more realistic model, it would be in the alchemists’ best interest to establish good public medicine, so that more people survive past childhood, live longer, and develop high intelligence, and therefore more (and smarter) people join the alchemy program. So the prince would be treated in one of the excellent hospitals, educated in one of the excellent schools, and have access to the excellent collection of human knowledge at the click of a mouse. And who knows, maybe His Highness, instead of leading the increasingly irrelevant army, would become interested in alchemy research.
There’s a recent paper on this topic, “Are Ideas Getting Harder to Find?”. According to the data analyzed there, the rate of scientific discoveries in existing fields (e.g. crop yield improvements, Moore’s law) remains close to constant while the number of researchers keeps increasing.
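A rough restatement of the paper's framing as I understand it (the identity below is my paraphrase, not a quote from the paper): if idea output stays flat while the researcher headcount multiplies, measured productivity per researcher must be falling.

```python
# Paraphrase of the paper's accounting identity (my wording):
#   idea_output = productivity_per_researcher * number_of_researchers
# Flat output with a growing headcount forces productivity downward.
FLAT_OUTPUT = 1.0
for researchers in (1, 2, 4, 8, 16):
    productivity = FLAT_OUTPUT / researchers
    print(f"{researchers:2d} researchers -> productivity {productivity:.4f}")
```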
Interesting.
This might have something to do with the fact that the problems are getting harder now that the low-hanging fruit has been picked. Every additional year of life expectancy is harder to gain than the one before. Every cycle of Moore’s law is harder because we’re starting to deal with feature sizes comparable to molecules.
I admit that this makes my point weaker.
There’s this funny thing that happens when you play video games that model a growing civilization. Eventually you get to the top of a skill tree and there’s nothing left to learn. You’ve “maxed out” the skill tree. It’s an odd thing to think about.
I expect that if we can model brains better, we can optimize teaching methods and all but guarantee healthy, happy adults. From there, specialisation helps, and from there we can make leaps towards the limits of knowledge.
My point is that “modeling brains better” requires a lot more knowledge about the brain (neurology, microbiology, chemistry) and a lot more computing power, and those things require progress in other areas, and so on. So that sounds like one of those equivalent supergoals.
I’m not sure I agree with that. I think it’s possible to study parts of the brain in isolation. We have better rigour in scientific testing than ever before, and we need that to arrive at reliable information about the brain.
I guess… I disagree with “a lot more”; I think it’s more like “a little more”… and time.
But saying so is not speaking of concretes. I don’t know how to quantify this.
I have to admit that I have no idea; my understanding of the brain isn’t enough to even assess the magnitude of the challenge. Intuitively, it seems at least as hard as “find cheap renewable energy”, but I might be completely wrong.
I see no reason why a friendly AGI necessarily has to be possible, even if you believe that there will be an AGI that creates a singularity.
Right. This should read “if you believe in friendly AI singularity”. Updating the post.