Could you describe some of the other motivation systems for AI that are under discussion? I imagine they might be complicated, but is it possible to explain them to someone who is not part of the AI-building community?
AFAIK most people build planning engines that pursue multiple goals, plus what you might call “ad hoc” machinery to check on that engine. In other words, you have one component that comes up with a plan, and then a whole bunch of separate machinery that analyses that plan. A rough sketch of that pattern follows below.
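To make the plan-then-check pattern concrete, here is a minimal sketch in Python. Everything in it (`propose_plan`, `CHECKS`, `plan_with_checks`) is a hypothetical illustration of the general architecture, not any particular system’s API:

```python
import random
from dataclasses import dataclass

# Hypothetical illustration only: a planning engine that emits candidate
# plans, plus separate "ad hoc" machinery that analyses each finished
# plan after the fact and rejects the unacceptable ones.

@dataclass
class Plan:
    steps: list
    cost: float

def propose_plan(goal):
    """Stand-in planning engine: emit a candidate plan for the goal."""
    n = random.randint(1, 5)
    return Plan(steps=[f"{goal} step {i}" for i in range(n)],
                cost=random.uniform(0, 10))

# The checking machinery lives outside the planner entirely.
CHECKS = [
    lambda p: p.cost < 5.0,        # e.g. a budget check
    lambda p: len(p.steps) <= 4,   # e.g. a complexity check
]

def plan_with_checks(goal, max_tries=100):
    for _ in range(max_tries):
        plan = propose_plan(goal)
        if all(check(plan) for check in CHECKS):
            return plan  # plan survived every after-the-fact check
    raise RuntimeError("no acceptable plan found")

print(plan_with_checks("make tea"))
```

Note how the checks know nothing about how the plan was generated; they only inspect the finished product, which is what makes the machinery “ad hoc.”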
My own approach is very different. Coming up with a plan is not a linear process; it involves large numbers of constraints acting in parallel. Think of how a neural net goes from a large array of inputs (e.g. a visual field) through smaller and smaller layers of hidden units that encode increasingly abstract descriptions of the input, until finally some high-level node is activated. Now picture that process running in reverse: a few nodes are highly activated, and they cause more and more low-level nodes to come up. That gives a rough idea of how it works.
In practice, all the above means is that the maximum possible amount of contextual information acts on the evolving plan. And that is critical.
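As a rough sketch of the reverse-activation picture (a toy illustration of the analogy, not an actual implementation; the layer sizes and function names are invented):

```python
import numpy as np

# Toy sketch of the reverse-activation analogy. In a feed-forward pass,
# a wide input layer funnels down to a few abstract units; here the same
# weights are run in reverse, so activating a few abstract "goal" units
# fans out into progressively more detailed low-level activity. Every
# low-level unit receives input from many higher units at once, which is
# the sense in which many constraints act on the emerging plan in parallel.

rng = np.random.default_rng(0)

# Layer sizes, from detailed (64 units) to abstract (4 units).
sizes = [64, 16, 4]
weights = [rng.normal(0, 1 / np.sqrt(m), size=(n, m))
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Ordinary bottom-up pass: detail -> abstraction."""
    for W in weights:
        x = np.tanh(W @ x)
    return x

def reverse(goal):
    """Top-down pass: a few active abstract units -> detailed pattern."""
    x = goal
    for W in reversed(weights):
        # Each lower-level unit sums contributions from all the
        # higher-level units above it: parallel constraints.
        x = np.tanh(W.T @ x)
    return x

goal = np.zeros(sizes[-1])
goal[0] = 1.0                  # activate one high-level "goal" node
detailed = reverse(goal)       # expands into a 64-dimensional pattern
print(detailed.shape)          # (64,)
```

The point of the sketch is the fan-out: each step down the stack brings more units, and hence more context, to bear on the detailed pattern that emerges at the bottom.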