Stuart—Yeah, the line of theoretical research you suggest is worthwhile.…
However, it’s worth noting that the other OpenCog team members and I are pressed for time, and have a lot of concrete OpenCog work to do. It would seem none of us really feels like taking a lot of time, at this stage, to carefully formalize arguments about what the system is likely to do in various situations once it’s finished. We’re too consumed with trying to finish the system, which is a long and difficult task in itself...
I will try to find some time in the near term to sketch a couple of example arguments of the type you request… but it won’t be today...
As a very rough indication for the moment… note that OpenCog has explicit GoalNode objects in its AtomSpace knowledge store, and one can look at the explicit probabilistic ImplicationLinks pointing to these GoalNodes from various combinations of contexts and actions. So one can, in principle, inspect the probabilistic relations between (context, action) pairs and goals that OpenCog uses to choose actions.
Now, for a quite complex OpenCog system, it may be hard to understand what all these probabilistic relations mean. But for a young OpenCog doing simple things, it will be easier. So one would want to validate, for a young OpenCog doing simple things, that the information in the system’s AtomSpace is compatible with 1 rather than 2-4.… One would then want to validate that, as the system gets more mature and does more complex things, there is no trend toward more of 2-4 and less of 1 ….
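To make the idea above a bit more concrete, here is a toy sketch in plain Python (deliberately not the real OpenCog AtomSpace API; the link representation, goal names, and strength values are all invented for illustration) of what "inspecting the probabilistic relations between (context, action) pairs and goals" could look like for a young system doing simple things:

```python
# Toy model of probabilistic ImplicationLinks from (context, action) pairs
# to goals, as described above. In real OpenCog these would be Atoms with
# truth values in the AtomSpace; here they are plain named tuples so the
# inspection step is easy to see. All names and numbers are hypothetical.

from collections import namedtuple

ImplicationLink = namedtuple("ImplicationLink",
                             ["context", "action", "goal", "strength"])

# A tiny stand-in for the AtomSpace's store of implication links.
atomspace = [
    ImplicationLink("ball_nearby",     "kick_ball",   "PleaseTeacher", 0.8),
    ImplicationLink("ball_nearby",     "ignore_ball", "PleaseTeacher", 0.1),
    ImplicationLink("teacher_present", "say_hello",   "PleaseTeacher", 0.7),
]

def links_to_goal(space, goal):
    """List the (context, action) -> goal relations, strongest first --
    i.e., the explicit probabilistic knowledge one would audit."""
    return sorted((l for l in space if l.goal == goal),
                  key=lambda l: l.strength, reverse=True)

def best_action(space, context, goal):
    """Choose the action whose implication link to the goal is
    strongest in the given context (None if no link applies)."""
    candidates = [l for l in space if l.context == context and l.goal == goal]
    return max(candidates, key=lambda l: l.strength).action if candidates else None
```

The point of the sketch is just that, because the links are explicit, an auditor can enumerate `links_to_goal(atomspace, "PleaseTeacher")` and check whether the relations driving action selection look like what the designers intended, rather than having to infer them from behavior alone.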
Interesting line of thinking indeed! …