I disagree. All the nodes in the network should be thought of as grounding out in imagination, in that it’s a world-model, not a world. Maybe I’m not seeing your point.
I would definitely like to see a graphical model that’s more capable of representing the way the world-model itself is recursively involved in decision-making.
One argument for calling an influence diagram a generalization of a Bayes net could be that the conditional probability table for the agent’s policy given observations is not given as part of the influence diagram, and instead must be solved for. But we can still regard this as a special case of a Bayes net rather than a generalization: an influence diagram is then a special sort of Bayes net in which the decision nodes have to have conditional probability tables obeying some optimality notion (the CDT optimality notion, the EDT optimality notion, etc.).
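To make that concrete, here is a minimal sketch of the picture I have in mind: a toy influence diagram treated as a Bayes net in which the decision node’s CPT is not part of the network’s specification but is solved for under an (EDT-flavoured) optimality constraint. All the node names, numbers, and helper functions here are made up for illustration, not any standard formalism or library.

```python
import itertools

# Chance node S (state), observation node O, decision node D, utility U(S, D).
# Everything here is an illustrative toy, not a canonical example.
P_S = {0: 0.5, 1: 0.5}                      # prior over the state
P_O_given_S = {0: {0: 0.8, 1: 0.2},         # P(O | S): a noisy observation of S
               1: {0: 0.2, 1: 0.8}}
U = {(0, 0): 1.0, (0, 1): 0.0,              # utility of (state, action) pairs
     (1, 0): 0.0, (1, 1): 1.0}

def expected_utility(policy):
    """Expected utility of a deterministic policy O -> D under the joint
    distribution the Bayes net defines (an EDT-flavoured evaluation)."""
    return sum(P_S[s] * P_O_given_S[s][o] * U[(s, policy[o])]
               for s in P_S for o in P_O_given_S[s])

# The decision node's CPT is not given as part of the diagram; "solving" the
# influence diagram means picking the CPT (here, a deterministic policy)
# that satisfies the optimality constraint.
policies = [dict(zip((0, 1), acts)) for acts in itertools.product((0, 1), repeat=2)]
best_policy = max(policies, key=expected_utility)
print(best_policy, expected_utility(best_policy))   # {0: 0, 1: 1} 0.8
```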
This optimality constraint is not easily represented within the Bayes net itself, but is instead imposed from outside. It would be nice to have a graphical model in which you could represent that kind of constraint naturally. But simply labelling things as decision nodes doesn’t do much. I would rather have a way of identifying something as agent-like based on the structure of the model for it. (To give a really bad version: suppose you allow directed cycles, rather than requiring DAGs, and you think of the “backwards causality” as agency. This is really bad, and I offer it only to illustrate the kind of thing I mean: allowing you to express the structure which gives rise to agency, rather than taking agency as a new primitive.)
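Here is a deliberately crude sketch of the “imposed from outside” point: the decision node’s CPT has to be a fixed point of a best-response condition that refers to the whole joint distribution (a predictor chance node whose table depends on the policy itself), which is exactly the kind of back-reference the DAG cannot draw. Again, every name and number is hypothetical.

```python
ACTIONS = (0, 1)

def predictor_dist(policy):
    """A chance node whose table depends on the policy itself: it predicts
    the action the policy takes, with 90% accuracy. This dependence is the
    'back edge' the DAG cannot express directly."""
    return {policy[0]: 0.9, 1 - policy[0]: 0.1}

def best_response(current_policy):
    """The externally imposed constraint: holding the predictor's table fixed
    at what the current policy induces, the decision CPT must be optimal."""
    pred = predictor_dist(current_policy)
    def eu(candidate):
        # reward 1 for matching the prediction, 0 otherwise
        return sum(p * (1.0 if a == candidate[0] else 0.0) for a, p in pred.items())
    return max(({0: a} for a in ACTIONS), key=eu)

# Solve for a self-consistent policy by naive fixed-point iteration; nothing
# in the graph itself says this is what "being a decision node" means.
policy = {0: 0}
for _ in range(10):
    new_policy = best_response(policy)
    if new_policy == policy:
        break
    policy = new_policy
print(policy)   # a policy that is a best response to its own prediction
```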
> All the nodes in the network should be thought of as grounding out in imagination, in that it’s a world-model, not a world. Maybe I’m not seeing your point.
My point is that my world model contains both ‘unimaginative things’ and ‘things like world models’, and it makes sense to separate those nodes (because the latter are typically functions of the former). Agreed that all of it is ‘in my head’, but it’s important that the ‘in my head’ realm contain the ‘in X’s head’ toolkit.