Sorry for being slow :) No, I haven’t read anything of Bratman’s. Should I? The synopsis looks like it might have some interesting ideas, but I’m worried he could get bogged down in what human planning “really is” rather than which models are useful.
I’d totally be happy to chat either here or in PMs. Full Bayesian reasoning seems tricky if the environment is complicated enough to make hierarchical planning attractive—or do you mean optimizing a model for posterior probability (the prior being something like MML?) by local search?
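To pin down what I mean by that second reading, here’s a rough toy sketch in Python. Everything in it (the Model interface, propose_neighbors, the parameter-count proxy for description length) is a made-up placeholder, just to show the shape of “score = MML-ish prior + likelihood, then hill-climb”:

```python
# Toy sketch of "optimize a model for posterior probability by local search":
# score each candidate model by an MML-style complexity penalty plus its
# log likelihood, and greedily hill-climb over neighboring models.
# The Model class and its methods are hypothetical scaffolding.

import math
import random

def mml_log_prior(model):
    # Crude stand-in for an MML prior: penalize description length,
    # proxied here by the number of parameters.
    return -model.num_params() * math.log(2)

def log_posterior(model, data):
    return mml_log_prior(model) + model.log_likelihood(data)

def local_search(initial_model, data, steps=1000):
    current = initial_model
    current_score = log_posterior(current, data)
    for _ in range(steps):
        candidate = random.choice(current.propose_neighbors())
        score = log_posterior(candidate, data)
        if score > current_score:  # greedy: keep only improvements
            current, current_score = candidate, score
    return current
```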
I think one interesting question there is whether it can learn human foibles. For example, suppose we’re playing a racing game and I want to win the race, but fail because my driving skills are bad. How diverse a dataset about me do you need to actually be able to infer that a) I am capable of conceptualizing how good my performance is, b) I wanted it to be good, and c) it wasn’t good, from a hierarchical perspective, because of the lower-level planning faculties I have? I think maybe you could learn this from racing game data alone (no need to make an AGI that can ask me about my goals and do top-down inference), so long as you had diverse enough driving data to make the “bottom-up” generalization that my low-level driving skill can be modeled as bad almost no matter the higher-level goal. Then it’s simplest to explain me not winning a race by taking the bad driving I display elsewhere as a given and asking what simple higher-level goal fits on top.
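A minimal toy version of that “bottom-up” story, assuming a softmax action model where a low-level skill parameter is fit across many driving episodes and then held fixed while we score candidate high-level goals. The goal objects, data structures, and parameter values are all invented for illustration, not a real system:

```python
# Step 1: estimate a skill/noise parameter beta from lots of driving data,
#         regardless of what the goal was in each episode.
# Step 2: with beta fixed, ask which high-level goal best explains the
#         observed race, so "wanted to win but drove badly" can beat
#         "didn't want to win".

import math

def softmax_action_probs(q_values, beta):
    # q_values: dict action -> value under some goal.
    # Lower beta = noisier (worse) low-level execution.
    exps = {a: math.exp(beta * q) for a, q in q_values.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

def fit_skill(driving_data, betas=(0.1, 0.5, 1.0, 2.0, 5.0)):
    # driving_data: list of (q_values, chosen_action) pairs pooled
    # across many episodes and goals.
    def loglik(beta):
        return sum(math.log(softmax_action_probs(q, beta)[a])
                   for q, a in driving_data)
    return max(betas, key=loglik)

def infer_goal(race_episode, goals, beta):
    # race_episode: list of (state, chosen_action) pairs.
    # goals: hypothetical objects with .name and .q_values(state).
    scores = {}
    for g in goals:
        scores[g.name] = sum(
            math.log(softmax_action_probs(g.q_values(s), beta)[a])
            for s, a in race_episode)
    return max(scores, key=scores.get)
```

The point of the decomposition is just that beta is shared across contexts while the goal is per-episode, so the data that pins down “bad driving” does most of the work before any goal inference happens.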