Hmmm, that’s not quite what I meant. It’s not about stopping at some meta-level, but rather, stopping after some amount of learning in the system. The system should learn not just level-specific information, but also cross-level information (like overall philosophical heuristics), which means that even if you stop teaching the machine at some point, it can still produce new reasoning at higher levels that should be similar to the feedback you might have given.
Interesting. So the point is to learn how to move up the hierarchy? I mean, that makes a lot of sense. It is a sort of fixed-point description, because then the AI can keep moving up the hierarchy as far as it wants, which means the whole hierarchy is encoded by its behavior. It’s just a question of how far up it needs to go to get satisfying answers. Is that correct?
Right. I mean, I would clarify that the whole point isn’t to learn to go up the hierarchy; in some sense, most of the point is learning at a few levels. But yeah.
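To make the picture concrete, here is a minimal toy sketch in Python. It is only an illustration of the idea under discussion, not anyone's actual system; all names (`HierarchicalLearner`, `taught_levels`, `cross_level_heuristic`) are hypothetical. The system gets explicit feedback at a few levels and also learns a cross-level heuristic; at levels where teaching stopped, it generates feedback itself by extrapolating that heuristic upward, which is the fixed-point property described above.

```python
from typing import Callable, Dict

Judgment = str  # a piece of level-n reasoning, represented as text in this toy
Feedback = Callable[[Judgment], float]  # scores reasoning at a single level


class HierarchicalLearner:
    """Toy model: explicit feedback at a few levels, extrapolation above them."""

    def __init__(
        self,
        taught_levels: Dict[int, Feedback],
        cross_level_heuristic: Callable[[int, Judgment], float],
    ):
        # Levels where a teacher actually gave feedback (level-specific info).
        self.taught = taught_levels
        # Learned regularities that hold across levels (the "overall
        # philosophical heuristics" from the dialogue).
        self.heuristic = cross_level_heuristic

    def evaluate(self, level: int, judgment: Judgment) -> float:
        if level in self.taught:
            # Use the explicit feedback we were given at this level.
            return self.taught[level](judgment)
        # Teaching stopped below this level: generate feedback from the
        # cross-level heuristic instead. The fixed-point claim is that this
        # approximates the feedback the teacher would have given, no matter
        # how far up the hierarchy we go.
        return self.heuristic(level, judgment)


# Toy instantiation: the teacher rewarded brevity at levels 0 and 1, and the
# system learned "prefer brevity" as a cross-level heuristic.
learner = HierarchicalLearner(
    taught_levels={0: lambda j: -len(j), 1: lambda j: -len(j)},
    cross_level_heuristic=lambda level, j: -len(j),
)
print(learner.evaluate(0, "short"))                   # explicit level-0 feedback
print(learner.evaluate(7, "a much longer judgment"))  # extrapolated feedback
```

In this sketch, most of the learning really does happen at the few taught levels, as the dialogue says; the cross-level heuristic is what lets the behavior encode the rest of the hierarchy.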