One way of framing our disagreement: I’m not convinced that the f operation makes sense as you’ve defined it. That is, I don’t think it can both be invertible and map to a goal with low complexity in the new ontology.
To clarify, I don’t think f is invertible, and that is why I talked about the preimage and not the inverse. I find it very plausible that f is not injective, i.e. that in a more compact ontology coming from a more intelligent agent, ideas/configurations/etc. that were different in the old ontology get mapped to the same thing in the new ontology (because the more intelligent agent realizes that they are somehow the same on a deeper level). I also believe f would not be surjective, as I wrote in response to rif a. sauros:
I suspect one possible counterargument is that, just as more intelligent agents with more compressed models can represent complex goals more compactly, they are also capable of drawing ever-finer distinctions that allow them to identify possible goals that have very short encodings in the new ontology, but which don’t make sense at all as stand-alone, mostly-coherent targets in the old ontology (because it is simply too weak to represent them). So it’s not just that goals get compressed, but also that new possible kinds of goals (many of them really simple) get added to the game.
But this process should also allow new goals to arise with ~any encoding length in the new ontology, because it should be just as easy to draw new, subtle distinctions inside a complex goal (yielding a new medium- or high-complexity goal) as inside a really simple goal (yielding the kind of new, very-low-complexity goal the previous paragraph talks about). So I don’t think this counterargument ultimately works, and I suspect it shouldn’t change our expectations in any meaningful way.
Nonetheless, I still expect f^{-1}[f(G)] (viewed as the preimage of f(G) under the f mapping) and G to only differ very slightly.
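To make the preimage-vs-inverse point concrete, here is a minimal toy sketch (the states, the map f, and the goal G below are all invented for illustration, not meant to model anything above): a non-injective, non-surjective f for which the round trip f^{-1}[f(G)] contains G but is strictly larger.

```python
# Toy sketch: f maps old-ontology states to new-ontology states.
# All names here are made up purely to illustrate preimage vs. inverse.

# The old ontology draws distinctions that the new, more compressed one merges,
# so f is not injective.
f = {
    "ice": "H2O",
    "liquid_water": "H2O",
    "steam": "H2O",
    "snow": "H2O",
}
new_states = {"H2O", "plasma"}  # "plasma" has no preimage, so f is not surjective

def image(subset):
    """f(G): the new-ontology states hit by a set of old-ontology states."""
    return {f[x] for x in subset}

def preimage(subset):
    """f^{-1}[S]: every old-ontology state that f sends into S."""
    return {x for x in f if f[x] in subset}

G = {"ice", "snow"}                # a goal specified in the old ontology
roundtrip = preimage(image(G))     # f^{-1}[f(G)]

print(roundtrip)                   # all four old states, not just "ice" and "snow"
print(roundtrip >= G)              # True: the round trip can only enlarge G
print(new_states - image(set(f)))  # {"plasma"}: unreachable from the old ontology
```

The round trip can only add old-ontology states that f lumps together with something already in G; my expectation above is just that, for the goals we actually care about, those additions stay small.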
Ah, sorry for the carelessness on my end. But this still seems like a substantive disagreement: you expect f^{-1}[f(G)] ≈ G, and I don’t, for the reasons in my comment.
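For what it’s worth, the purely set-theoretic part seems uncontested; the following is just a sketch of the standard fact, to pin down where the disagreement lives: the round trip never removes anything from G, and it adds nothing exactly when G is a union of f’s preimage classes, so the question is how close realistic goals come to that.

```latex
% For any map f : X \to Y and any G \subseteq X:
\[
  G \;\subseteq\; f^{-1}[f(G)],
  \qquad
  f^{-1}[f(G)] = G
  \;\iff\;
  \forall x, x' \in X :\;
  \bigl( f(x) = f(x') \wedge x \in G \bigr) \Rightarrow x' \in G .
\]
```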