I’ve (tried to) read it several times. While I agree with the basic idea of finding isomorphisms by looking at bisimulations or bijections, and minimizing differences sounds like a good idea inasmuch as it follows Occam’s razor, a lot of it seems unmotivated and unexplained.
Like the use of the Kullback-Leibler divergence. Why that, specifically—is it just that obvious and desirable? It seems to have some not especially useful properties, like not being symmetric (so would an AI using it exhibit non-monotonic behavior when changing ontologies?), which don’t seem to be discussed.
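For what it’s worth, the asymmetry is easy to see numerically. Here’s a quick sketch with two made-up discrete distributions (the numbers are arbitrary, just to illustrate that D(P‖Q) ≠ D(Q‖P) in general):

```python
import numpy as np

def kl(p, q):
    """Discrete Kullback-Leibler divergence D(p || q)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Two arbitrary distributions over three outcomes (illustrative only).
P = [0.7, 0.2, 0.1]
Q = [0.4, 0.4, 0.2]

print(kl(P, Q))  # D(P || Q) ≈ 0.184
print(kl(Q, P))  # D(Q || P) ≈ 0.192  -- different, since KL is not symmetric
```

So which direction you measure in matters, and the paper doesn’t say why one direction is the right one for comparing ontologies.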