I think it is very important to consider the difference between a descriptive model and a theory of a mechanism.
So, inventing an extreme example for purposes of illustration: if someone builds a simple, two-parameter model of human marital relationships (perhaps centered on the idea of costs and benefits), that model might actually be made to work, to a degree. It could be used to do some pretty simple calculations about how many people divorce at certain income levels, or with certain differences in income between partners in a marriage.
But nobody pretends that the mechanism inside the descriptive model corresponds to an actual mechanism inside the heads of those married couples. Sure, there might be one, but there doesn’t have to be, and we are pretty sure there is no actual calculation inside a particular mechanism that matches the calculation in the model. Rather, we believe that reality involves a much more complex mechanism that has that behavior as an emergent property.
When RL is seen as a descriptive model—which I think is the correct way to view it in your example above—that is fine and good as far as it goes.
The big trouble that I have been fighting is the apotheosis from descriptive model to theory of a mechanism. And since we are constructing mechanisms when we do AI, that is an especially huge danger that must be avoided.
I agree that this is an important distinction, and that things that might naively seem like mechanisms are often actually closer to descriptive models.
I’m not convinced that RL necessarily falls into the class of things that should be viewed mainly as descriptive models, however. For one, what’s possibly the most general-purpose AI developed so far seems to have been developed by explicitly having RL as an actual mechanism. That seems to me like a moderate data point towards RL being an actual useful mechanism and not just a description.
Though I do admit that this isn’t necessarily that strong of a data point—after all, SHRDLU was once the most advanced system of its time too, yet basically all of its mechanisms turned out to be useless.
Arrgghh! No. :-)
The DeepMind Atari agent is the “most general-purpose AI developed so far”?
!!!
At this point your reply is “I am not joking. And don’t call me Shirley.”