Uncontrolled argues along similar lines: the physics/chemistry model of science, where we get to generalize a compact universal theory from a number of small experiments, is simply not applicable to biology/psychology/sociology/economics, and policy-makers should instead rely more on widespread, continuous experiments in real environments to generate many localized partial theories.
I’ll note that (non-extreme) versions of this position are consistent with ideas like “it’s possible to build non-opaque AGI systems.” The full answer to “how do birds work?” is incredibly complex, hard to formalize, and dependent on surprisingly detailed local conditions that need to be discovered empirically. But you don’t need to understand much of that complexity at all to build flying machines with superavian speed or carrying capacity, or to come up with useful theory and metrics for evaluating “goodness of flying” for various practical purposes; and the resultant machines can be a lot simpler and more reliable than a bird, rather than being “different from birds but equally opaque in their own alien way”.
This isn’t meant to be a response to the entire “rationality non-realism” suite of ideas, or a strong argument that AGI developers can steer toward less opaque systems than AlphaZero; it’s just me noting a distinction that I particularly care about.
The relevant realism-v.-antirealism disagreement won’t be about “can machines serve particular functions more transparently than biological organs that happen to serve a similar function (alongside many other functions)?”. In terms of the airplane analogy, I expect disagreements like “how much can marginal effort today increase transparency once we learn how to build airplanes?”, “how much useful understanding are we currently missing about how airplanes work?”, and “how much of that understanding will we develop by default on the path toward building airplanes?”.