Chaos in complex systems is guaranteed but also bounded. I cannot know what the weather will be like in New York City one month from now. I can, however, predict that it probably won’t be “tornado” and near-certainly won’t be “five hundred simultaneous tornadoes level the city”. We know it’s possible to build buildings that can withstand ~all possible weather for a very long time. I imagine that a thing you’re calling a puppet-master could build systems that operate within predictable bounds robustly and reliably enough to more or less guarantee broad control.
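To make "bounded chaos" concrete, here is a minimal sketch (the logistic map is my toy stand-in for any chaotic system, not anything from the original discussion): a tiny perturbation destroys point prediction within a few dozen steps, yet every trajectory stays inside [0, 1] no matter what.

```python
# Toy illustration: chaos can destroy point predictions while bounds hold.
def logistic(x, r=4.0):
    # At r = 4 the logistic map is fully chaotic on [0, 1],
    # but it also maps [0, 1] back into [0, 1], so orbits never escape.
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10   # two nearly identical initial conditions
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(abs(a - b))         # O(1): the tiny difference has exploded ("weather" is unpredictable)
print(0.0 <= a <= 1.0)    # True: the orbit is still bounded ("climate" constraints survive)
```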
Caveat: The transition from seed AI to global puppet-master is harder to predict than the end state. It might plausibly involve psychohistorian-like nudges informed by superhuman reasoning and modeling skills. But I’d still expect that the optimization pressure a superintelligence brings to bear could render the final outcome of the transition grossly overdetermined.
Verifying my understanding of your position: you accept the puppet-master and psychohistorian categories and their implications, but you place them on a spectrum (systems are not simply either chaotic or robustly modellable; chaos is bounded and thus comes in degrees) and contend that an ASI would sit much closer to the puppet-master end. This is a valid crux.
To dig a little deeper, how does your objection hold up in light of my previous post, Lenses of Control? The basic argument there is that future ASI control systems will have to deal with questions like: “If I deploy novel technology X, what is the resulting equilibrium of the world, including how feedback might impact my learning and values?” Does the level of chaos in such contexts remain narrowly bounded?
EDIT for clarification: the distinction between the puppet-master and psychohistorian metaphors is not the level of chaos in the system being dealt with, but the extent of direct control that the ASI’s control system has over the world, where the control system is one part of the AI machinery as a whole (including subsystems that learn) and the AI is itself a part of the world. Chaos enters as an argument for why human-compatible goals are doomed if the AI follows the psychohistorian metaphor.