W. Ross Ashby’s Law of Requisite Variety (1956) suggests fundamental limits to human control over more capable systems.
This law sounds super enticing and I want to understand it more. Could you spell out how the law suggests this?
I did a quick search of LessWrong and Wikipedia regarding this law.
“… Ashby’s ‘Law of requisite variety’, which roughly speaking states that a system can only remain in homeostasis if it has more internal states than the external states it encounters.” from Yuxi_Liu, “Cybernetic dreams”.
“Either the AI is too simple to be an independent robust agent in human society, or it needs to be approximately as complex as humans themselves. Cf. the law of requisite variety.” from Roman Leventov, “For alignment, we should simultaneously use multiple theories of cognition and value”.
“This law (of which Shannon’s theorem 10 relating to the suppression of noise is a special case) says that if a certain quantity of disturbance is prevented by a regulator from reaching some essential variables, then that regulator must be capable of exerting at least that quantity of selection.” from W. R. Ashby (1960), “Design for a Brain”, p. 229, quoted via Wikipedia page.
Enough testimonials; the Wikipedia page itself describes the law as based on the observation that in a two-player game between the environment (the disturber) and a system trying to maintain stasis (the regulator), if the environment has D moves that all lead to different outcomes (given any move from the system), and the system has R possible responses, then the best the system can do is restrict the number of distinct outcomes to D/R.
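To make that D/R bound concrete, here is a small brute-force sketch (my own illustration, not from Ashby or the Wikipedia page). The outcome function `(d + r) % D` is an assumption chosen so that distinct disturbances, and distinct responses to the same disturbance, always produce distinct outcomes, which is exactly the condition in the counting argument; the exhaustive search then confirms that no regulator strategy attains fewer than D/R distinct outcomes.

```python
from itertools import product

# Toy instance of the regulation game. The outcome rule (d + r) % D is an
# illustrative assumption: for a fixed d, each of the R responses yields a
# distinct outcome, as the law's counting argument requires.
D, R = 6, 3  # 6 environment moves (disturbances), 3 regulator responses

def outcomes(strategy):
    """Set of outcomes when the regulator plays strategy[d] against disturbance d."""
    return {(d + strategy[d]) % D for d in range(D)}

# Brute-force every regulator strategy: one chosen response per disturbance,
# so R**D = 729 strategies in this toy instance.
best = min(len(outcomes(s)) for s in product(range(R), repeat=D))
print(best)  # 2, i.e. D/R: the regulator cuts variety by at most a factor of R
```

Each response can "absorb" at most one disturbance per outcome, so R responses can funnel at most R disturbances into any single outcome, leaving at least D/R outcomes distinguishable; the brute force just verifies this floor is tight here.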
I can see the link between this and the descriptions from Yuxi_Liu, Roman Leventov, and Ashby. Your reading is a couple of steps removed. How did you get from D/R outcomes in this game to “fundamental limits to human control over more capable systems”? My guess is that you simply mean that if the more capable system is more complex / has more available moves / more “variety” than humans, then the law will apply with the human as the regulator and the AI as the disturber. Is that right? Could you comment on how you see capability in terms of variety?