I think what you’re saying just makes a lot of sense, honestly.
I suspect one possible counterargument is that, just as more intelligent agents with more compressed world-models can represent complex goals more compactly, they can also draw ever-finer distinctions, which lets them identify possible goals that have very short encodings in the new ontology but which don’t make sense at all as stand-alone, mostly-coherent targets in the old ontology (because it is simply too weak to represent them). So it’s not just that existing goals get compressed; new kinds of goals (many of them really simple) also get added to the game.
But this process should also allow new goals to arise at roughly any encoding length in the new ontology: drawing a new, subtle distinction inside a complex goal (which yields a new medium- or high-complexity goal) should be just as easy as drawing one inside a really simple goal (which yields the kind of new very-low-complexity goal the previous paragraph describes). So I don’t think this counterargument ultimately works, and I suspect it shouldn’t change our expectations in any meaningful way.
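To make that rebuttal concrete, here’s a toy model (entirely my own construction, not anything from your comment, and all the names in it are made up): worlds are described by four fine-grained features, the old ontology only has two coarse predicates over them, and the new ontology is a refinement that sees the features directly. If we measure a goal’s “encoding length” as the size of its shortest boolean formula over the ontology’s primitives, the refinement does add goals with very short encodings that the old ontology can’t state at all, but it just as readily adds newly expressible goals at every longer length we search.

```python
# Toy illustration (my own, hypothetical setup): compare which goals an old,
# coarse ontology and a new, refined ontology can express, and at what
# minimal formula length the newly expressible goals show up.
from itertools import combinations, product
from collections import Counter

def expressible(primitive_exts, worlds):
    """Every goal the ontology can state at all: unions of the 'cells' its
    primitives carve the worlds into (the generated boolean algebra)."""
    cells = {}
    for w in worlds:
        key = tuple(w in ext for ext in primitive_exts)
        cells.setdefault(key, set()).add(w)
    cell_sets = [frozenset(c) for c in cells.values()]
    goals = set()
    for r in range(len(cell_sets) + 1):
        for combo in combinations(cell_sets, r):
            goals.add(frozenset().union(*combo))
    return goals

def shortest_lengths(primitive_exts, worlds, max_len):
    """Minimal formula length (primitive occurrences plus NOT/AND/OR
    connectives) for each goal reachable within max_len symbols."""
    all_worlds = frozenset(worlds)
    by_len = {1: set(primitive_exts)}          # goals whose minimal length is exactly k
    best = {ext: 1 for ext in primitive_exts}
    for k in range(2, max_len + 1):
        cand = set()
        for ext in by_len[k - 1]:              # NOT x costs one extra symbol
            cand.add(all_worlds - ext)
        for i in range(1, k - 1):              # x AND y / x OR y cost one extra symbol
            for a in by_len[i]:
                for b in by_len[k - 1 - i]:
                    cand.add(a & b)
                    cand.add(a | b)
        by_len[k] = {ext for ext in cand if ext not in best}
        for ext in by_len[k]:
            best[ext] = k
    return best

# Worlds: settings of four fine-grained features f0..f3. The OLD ontology only
# has the coarse predicates A = (f0 or f1) and B = (f2 or f3); the NEW ontology
# has f0..f3 directly, i.e. it refines A and B.
worlds = list(product([0, 1], repeat=4))
old_prims = [frozenset(w for w in worlds if w[0] or w[1]),
             frozenset(w for w in worlds if w[2] or w[3])]
new_prims = [frozenset(w for w in worlds if w[i]) for i in range(4)]

old_goals = expressible(old_prims, worlds)                 # only 16 goals in total
new_lengths = shortest_lengths(new_prims, worlds, max_len=7)

# Group the goals that the refinement makes newly expressible by encoding length.
newly = {ext: k for ext, k in new_lengths.items() if ext not in old_goals}
print(Counter(newly.values()))
# At this toy scale, newly expressible goals appear at length 1 (e.g. "f0 is
# true", which the old ontology cannot state) AND at every longer length we
# search -- refinement adds simple goals and complex goals alike.
```

Obviously this only illustrates the combinatorial point that refinement opens up new goals across the whole range of encoding lengths; it says nothing about which of those goals an agent would actually end up caring about.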