FWIW, this point was made by Marcus Hutter when I took his algorithmic information theory class in 2014 (including proving that it differs only by an additive constant); I understood this to be a basic theorem of algorithmic information theory. I’ve always called the alt-complexity of X “the Solomonoff prior on X”.
I agree it’s regrettable that people are used to talking about the K-complexity when they almost always want to use what you call the alt-complexity instead. I personally try to always use “Solomonoff prior” instead of “K-complexity” for this reason, or just say “K-complexity” and wince a little.
Re “algorithmic information theory would be more elegant if they used this concept instead”: in my experience, algorithmic information theorists always do use this concept instead.
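(For reference, the additive-constant result mentioned above is usually stated as Levin's coding theorem: writing $m(x)$ for the universal discrete semimeasure, i.e. the Solomonoff prior on finite strings $x$, and $K(x)$ for the prefix Kolmogorov complexity,

```latex
K(x) = -\log_2 m(x) + O(1),
```

where the $O(1)$ constant depends on the choice of universal machine but not on $x$.)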
Do they know that it does not differ by a constant in the infinite sequence case?
No idea, sorry.