In this essay, ricraz argues that we shouldn’t expect a clean mathematical theory of rationality and intelligence to exist. I have debated em about this, and I continue to endorse more or less everything I said in that debate. Here I want to restate some of my (critical) position by building it from the ground up, instead of responding to ricraz point by point.
When should we expect a domain to be “clean” or “messy”? Let’s look at everything we know about science. The “cleanest” domains are mathematics and fundamental physics. There, we have crisply defined concepts and elegant, parsimonious theories. We can then “move up the ladder” from fundamental to emergent phenomena, going through high energy physics, molecular physics, condensed matter physics, biology, geophysics / astrophysics, psychology, sociology, economics… On each level more “mess” appears. Why? Occam’s razor tells us that we should prioritize simple theories over complex theories. But we shouldn’t expect a theory to be simpler than the specification of its domain. The general theory of planets should be simpler than a detailed description of planet Earth, the general theory of atomic matter should be simpler than the theory of planets, the general theory of everything should be simpler than the theory of atomic matter. That’s because when we’re “moving up the ladder”, we are actually zooming in on particular phenomena, and the information needed to specify “where to zoom in” is translated into the description complexity of the theory.
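One rough way to state this in terms of description (Kolmogorov) complexity, writing $K(\cdot)$ for the length of the shortest description (this is my gloss, not a formula from the essay):

$$K(\text{theory of planet Earth}) \;\lesssim\; K(\text{general theory of planets}) + K(\text{which planet to zoom in on}),$$

so the theory of a special case carries, on top of the general principles, the information that picks out that special case.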
What does this mean, in practice, for understanding messy domains? The way science solves this problem is by building a tower of knowledge. In this tower, each floor benefits from interactions with both the floor above it and the floor beneath it. Without understanding macroscopic physics we wouldn’t have figured out atomic physics, and without figuring out atomic physics we wouldn’t have figured out high energy physics. This is knowledge “flowing down”. But knowledge also “flows up”: knowledge of high energy physics allows understanding particular phenomena in atomic physics, and knowledge of atomic physics allows predicting the properties of materials and chemical reactions. (Admittedly, some floors in the tower we have now are rather ramshackle, but I think that ultimately the “tower method” succeeds everywhere, as much as success is possible at all.)
How does mathematics come in here? Importantly, mathematics is used not only on the lower floors of the tower, but on all floors. The way “messiness” manifests is that the mathematical models for the higher floors are either less quantitatively accurate (though they still provide qualitative insight), or have a lot of parameters that need to be determined either empirically, or using the models of the lower floors (which is one way knowledge flows up), or some combination of both. Nevertheless, scientists continue to successfully build and apply mathematical models even in “messy” fields like biology and economics.
So, what does it all mean for rationality and intelligence? On what floor does it sit? In fact, the subject of rationality and intelligence is not a single floor, but its own tower (maybe we should imagine science as a castle with many towers connected by bridges).
The foundation of this tower should be the general abstract theory of rationality. This theory is even more fundamental than fundamental physics, since it describes the principles from which all other knowledge is derived, including fundamental physics. We can regard it as a “theory of everything”: it predicts everything, in the sense of making the predictions that a rational agent should make. Solomonoff’s theory and AIXI are a part of this foundation, but not all of it. Considerations like computational resource constraints should also enter the picture: complexity theory teaches us that they are also fundamental, they don’t require much “zooming in”.
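To make the foundation concrete: Solomonoff’s prior and AIXI can be written down in closed form. Schematically, following Hutter’s formulation (details of conditioning and horizon omitted), the Solomonoff prior assigns to a string $x$ the weight

$$\xi(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)},$$

where $U$ is a universal prefix machine and $\ell(p)$ is the length of program $p$, and AIXI picks actions by expectimax over this prior:

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \dots + r_m\big) \sum_{q\,:\,U(q,\,a_{1:m})=o_{1:m} r_{1:m}} 2^{-\ell(q)}.$$

The point is simply that these are clean definitions (unique up to the choice of universal machine), even though neither is computable.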
But computational resource constraints are only entirely natural when they are not tied to a particular model of computation. This covers constraints such as “polynomial time”, but not constraints such as $O(n^3)$ time, and even less so $2^{45} n^3$ time. Therefore, once we introduce a particular model of computation (such as a RAM machine), we need to build another floor in the tower, one that will necessarily be “messier”. Considering even more detailed properties of the hardware we have, the input/output channels we have, the goal system, the physical environment, and the software tools we employ will correspond to adding more and more floors.
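As a standard illustration of why the exponent is not robust (my example, not from the essay): if one model of computation can simulate another with, say, quadratic slowdown, then

$$T(n) = O(n^3) \text{ on the simulated machine} \;\Longrightarrow\; O(n^6) \text{ on the simulating machine},$$

so “polynomial time” survives the translation between models, but the specific exponent, and a fortiori a constant factor like $2^{45}$, does not.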
Once we agree that it should be possible to create a clean mathematical theory of rationality and intelligence, we can still debate whether it’s useful. If we consider the problem of creating aligned AGI from an engineering perspective, it might seem for a moment that we don’t really need the bottom layers. After all, when designing an airplane you don’t need high energy physics. Well, high energy physics might help indirectly: perhaps it allowed predicting some exotic condensed matter phenomenon which we used to make a better power source, or better materials from which to build the aircraft. But often we can make do without those.
Such an approach might be fine, except that we also need to remember the risks. Now, safety is part of most engineering, and is definitely a part of airplane design. What level of the tower does it require? It depends on the kind of risks you face. If you’re afraid the aircraft will not handle the stress and break apart, then you need mechanics and aerodynamics. If you’re afraid the fuel will combust and explode, you had better know chemistry. If you’re afraid lightning will strike the aircraft, you need knowledge of meteorology and electromagnetism, possibly plasma physics as well. The relevant domain of knowledge, and the relevant floor in the tower, are a function of the nature of the risk.
What level of the tower do we need to understand AI risk? What is the source of AI risk? It is not in any detailed peculiarities of the world we inhabit. It is not in the details of the hardware used by the AI. It is not even related to a particular model of computation. AI risk is the result of Goodhart’s curse, an extremely general property of optimization systems and intelligent agents. Therefore, addressing AI risk requires understanding the general abstract theory of rationality and intelligence. The upper floors will be needed as well, since the technology itself requires the upper floors (and since we’re aligning with humans, who are messy). But, without the lower floors the aircraft will crash.
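To make Goodhart’s curse a little more tangible, here is a minimal toy simulation of its regressional form (my own illustration, not part of the original argument): we optimize a proxy that equals the true value plus independent noise, and watch the selected candidate’s proxy score increasingly overstate its true value as optimization pressure grows.

```python
import numpy as np

# Toy illustration of regressional Goodhart (assumed setup, for illustration only):
# true value V ~ N(0, 1); proxy U = V + independent noise.
# Selecting the candidate with the highest proxy score systematically
# overestimates its true value, and the gap grows with optimization pressure
# (here, the number of candidates searched).
rng = np.random.default_rng(0)

for n_candidates in (10, 1_000, 100_000):
    true_value = rng.normal(size=n_candidates)
    proxy = true_value + rng.normal(size=n_candidates)
    best = np.argmax(proxy)  # optimize the proxy, not the true value
    print(f"n = {n_candidates:>7}: proxy = {proxy[best]:5.2f}, "
          f"true value = {true_value[best]:5.2f}, "
          f"overestimate = {proxy[best] - true_value[best]:5.2f}")
```

Nothing in this toy example depends on hardware, I/O channels, or any particular model of computation; the phenomenon lives entirely at the abstract level, which is exactly the point.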