Isn’t a model of the outside world built – implicitly – into the robot’s design? Surely it has no explicit knowledge of the outside world, yet it was built in a certain way so that it can counteract outside forces. Randomly throwing together a robot will most certainly not get you such behaviour – but design (or evolution!) will give you a robot with an implicit model of the outside world (and maybe, at some point, one that can formulate explicit models). I wouldn’t be so quick to throw away the notion of a model.
I find the perspective very intriguing, but I think of it more as nature’s (or a human designer’s) way of building quick-and-dirty, simple and efficient machines. To achieve that goal, implicit models are very important. There is no magic – you need a model, albeit an implicit one.
Certainly I, as the designer, had a model of the robot and its environment when I wrote that program, and the program implements those models. But the robot itself has no model of its environment. It calculates the positions of its feet, relative to itself, by sensing its joint angles and knowing the lengths of its limb segments, so it does have a fairly limited model of itself: it knows its own dimensions. However, it does not know its own mass, or the characteristics of its sensors and actuators.
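(Not the actual program, but for concreteness, here is a minimal sketch of the kind of self-model being described – foot position from sensed joint angles and known limb-segment lengths. The two-segment planar leg, the segment lengths and the names are all made up; the point is that the calculation uses nothing but joint angles and the robot’s own dimensions.)

```python
import math

# The robot's only "model": its own dimensions (hypothetical values, metres).
THIGH_LENGTH = 0.30
SHIN_LENGTH = 0.35

def foot_position(hip_angle, knee_angle):
    """Foot position relative to the hip, from sensed joint angles.

    Angles in radians, measured from straight down; the knee angle is
    relative to the thigh.  Plain forward kinematics: no masses, no
    forces, no representation of wind or shoves anywhere.
    """
    knee_x = THIGH_LENGTH * math.sin(hip_angle)
    knee_y = -THIGH_LENGTH * math.cos(hip_angle)
    foot_x = knee_x + SHIN_LENGTH * math.sin(hip_angle + knee_angle)
    foot_y = knee_y - SHIN_LENGTH * math.cos(hip_angle + knee_angle)
    return foot_x, foot_y
```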
The fact that it works does not mean that it has an “implicit” model of the environment: “implicit”, in a context like this, means “not”. What is a model? A model is a piece of mathematics in which certain quantities correspond to certain properties of the thing modelled, and certain mathematical relationships between these correspond to certain physical relationships. Maxwell’s equations model electromagnetic phenomena. The Newton-Raphson (EDIT: I meant Navier-Stokes) equation models fluid flow. “Implicit model” is what one says, when one expects to find a model and finds none. The robot’s environment contains a simulated wind pushing on the robot, and a simulated hand giving it a shove. The robot knows nothing of this: there is no variable in the part of the program that deals with the robot’s sensors, actuators, and control algorithms that represents the forces acting on it. The robot no more models its environment than a thermostat models the room outside it.
Since it is possible to build systems that achieve goals without models, and also possible, but in general rather more complicated, to build such systems that do use models, I do not think that the blind god of evolution is likely to have put models anywhere. It has come up with something—us, and probably the higher animals—that can make models, but nothing currently persuades me that models are how brains must work. I see no need of that hypothesis.
I’d rather like to build that robot. If I did, I would very likely use an onboard computer just to have flexibility in reconfiguring its control algorithms, but the controllers themselves are just PID loops. If, having got it to work robustly, I were to hard-wire it, the control circuitry would consist of a handful of analogue components for each joint, and no computer required. I still find it remarkable how much it can do with so little.
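(Again purely illustrative, with made-up gains and names: one joint’s PID loop is only a few lines, and nothing in it refers to wind, shoves, mass, or anything else in the environment – disturbances enter only through their effect on the measured joint angle.)

```python
class JointPID:
    """One joint's controller: a plain PID loop, nothing more.

    Gains are hypothetical and would be tuned per joint.  There is no
    variable here for wind or any external force -- only the error
    between the commanded and measured joint angle.
    """
    def __init__(self, kp=20.0, ki=5.0, kd=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_angle, measured_angle, dt):
        error = target_angle - measured_angle
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Drive command for the joint actuator.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Hard-wired, each such loop would be the analogue equivalent: roughly the handful of components per joint described above.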
Er, I think you mean Navier-Stokes.
“Implicit model” is what one says, when one expects to find a model and finds none.

I think that’s unfair. The notion of an implicit model (meaning something like “a model such that a system making use of it would behave just like this one”) is a useful one. For instance, suppose you are presented with a system designed by someone else that isn’t working as it should; one way to diagnose its troubles is to work out what assumptions about the world are implicit in its design (they might not amount to anything quite so grand as a “model”, I suppose) and how they fail to match reality, and then—with the help of one’s own better model of the world—to adjust the system’s behaviour.
Or, of course, you can just poke at it until it behaves better. But then I’d be inclined to say that you’re still using a model of the world—you’re exploiting the world’s ability to be used as a model of itself. If a system gets “poked at until it behaves better” often enough and in varied enough ways, it can end up with a whole lot of information about the world built into it. If you don’t want to call that an “implicit model”, fair enough; but what’s wrong with doing so?
Poking at it until it works isn’t revising a model, in the same sense that walking toward the pole star when you want to go North isn’t cartography.
I didn’t say that poking at something until it works is revising a model, I said that it’s using a model (in, doubtless, a rather trivial sense). And, if I’m understanding your analogy right, surely the analogous claim would be that walking (as nearly as possible given that one remains on the surface of the earth) towards the pole star isn’t reading a map (even an “implicit” one), not that it isn’t cartography; and I don’t think that’s quite so obvious. (Also: it seems to me that “maps” have more in common with one another than “models” do, and I think that’s relevant.)
Could one argue that the tuning by the programmer incorporates the relevant aspects of the model? (Which is what I think the commenter meant by “implicit.”) In my mom’s old van, going down a steep hill would mess up the cruise control: as you say, if you push hard enough, you can overcome a control loop’s programming. So, a guess as to the relation to Bayescraft: certain real-world scenarios operate within a narrow enough set of parameters, enough of the time, that one can design feedback loops that do not update based on all evidence and still work well enough.
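(A toy version of that guess, with made-up numbers: an integral-only cruise control holds speed fine so long as the grade stays within the throttle’s authority, but a steep enough downhill asks for less than zero throttle, the command clamps, and the loop fails without ever having represented the hill.)

```python
def cruise_control_step(speed_error, integral, dt, ki=0.4, max_throttle=1.0):
    """One step of a toy integral-only cruise control.

    Hypothetical gain and limits.  The controller never represents the
    hill: a downhill grade that would require negative throttle (i.e.
    braking) simply exceeds its authority, the output clamps at zero,
    and the speed error persists.
    """
    integral += speed_error * dt          # speed_error = set_speed - measured_speed
    throttle = ki * integral
    throttle = max(0.0, min(max_throttle, throttle))  # actuator limits
    return throttle, integral
```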
nothing currently persuades me that models are how brains must work.

Who’s saying that they are?
(And: are you expressing skepticism about the idea that brains usually use models, or about the idea that they ever do? I know that I use models quite often—any time I try to imagine how something I do will work out—and if it isn’t my brain doing that, I don’t know what it is.)
If you have not seen it yet, check out Ballbot. This video shows it responding to a disturbance. I know nothing of its programming, but it acts as if it is using the same control systems you are describing.
Also, Beyond AI has a lot of discussion about how simple control structures may eventually work their way into building a general AI. I do not know if there is an online version hanging around, but if you are interested I can type up a summary article after the General AI topic ban is lifted.
In terms of your original post, another random example of simple control structures providing control over extremely complex systems would be video games. Each control generally affects one thing, and once my mind understands the movements I can guide a little soldier to kill other soldiers. I find that learning these control systems makes me a better driver, better at operating small backhoes, and better at anything else that can be expressed in terms of simple control structures. An interesting side topic to your article would be looking at how we control control structures and work to improve the feedback and response times. My talent for video games may be related to my intuitive ability to balance when walking on the curb, or to why I instinctively want to respond to an emotional tragedy with a soft push toward emotional safety. “Fixing it all at once” is likely to overcorrect.
I am rambling now, but this article connected a few unassociated behaviors in my head. Cool.
For a continuation of the ideas in Beyond AI, relevant to this LW topic, see:
http://agi-09.org/papers/paper_22.pdf
Thanks; added to reading list.