They only depend on the choice of language to within a constant factor. That’s not the problem; the REAL problem is that K-complexity is uncomputable, meaning that you cannot in any way prove that the program you’re proposing is, or is NOT, the shortest possible program to express the law.
I disagree; I think the underspecification is a more serious issue than the uncomputability. There are constant factors that outweigh, by a massive margin, all evidence ever collected by our species. Unless there’s a way for us to get our hands on an infinite amount of CPU time, there are constant factors that outweigh, by a massive margin, all evidence we will ever have a chance to collect.
For any two strings, you can make either one come out with the lower complexity by choosing the description language appropriately. For the complexity metric to make any sense, we need some way to make a good-enough (not necessarily optimal) choice of description language.
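To make that concrete, here’s a toy sketch (my own construction, not a standard result beyond the usual invariance-theorem folklore): two “description languages” that each hard-code a different favored string, so each language assigns complexity 1 to its favored string and roughly full length to the other.

```python
def make_machine(favored: bytes):
    """A toy description language: the one-byte description 0x00 expands
    to the favored string; any other description is taken literally,
    minus a one-byte mode tag."""
    def machine(desc: bytes) -> bytes:
        if desc[:1] == b"\x00":
            return favored
        return desc[1:]
    return machine

def shortest_description_length(machine, target: bytes) -> int:
    # Brute force over the two encoding modes this toy language offers:
    # the literal mode always works; the shortcut works only for the
    # favored string.
    candidates = [b"\x01" + target]
    if machine(b"\x00") == target:
        candidates.append(b"\x00")
    return min(len(c) for c in candidates)

s1 = b"the quick brown fox jumps over the lazy dog"
s2 = b"pack my box with five dozen liquor jugs"
m1 = make_machine(s1)  # a language biased toward s1
m2 = make_machine(s2)  # a language biased toward s2

# Under m1, s1 costs 1 byte but s2 costs len(s2) + 1; under m2 it is
# the reverse. Either string can be made the "simpler" one.
```

The constant factor between two universal languages is the length of an interpreter for one written in the other, and as noted above, that constant can be made to dwarf any finite body of evidence.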
The uncomputability is unfortunate, but hardly fatal. You can just spend some finite effort trying to find the shortest program that produces each string, using the best heuristics available for the job, and use that as an approximation and upper bound.
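One crude but concrete way to get such an upper bound (a sketch of mine, not a method anyone in the thread proposed): fix a general-purpose compressor as your description language, and take the length of the compressed form as an upper bound on the string’s complexity relative to that fixed decompressor.

```python
import random
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Length of the zlib-compressed form of s: an upper bound on the
    description length of s relative to the fixed 'language' whose
    interpreter is the zlib decompressor."""
    return len(zlib.compress(s, 9))

# A highly regular string compresses far below its raw length...
regular = b"ab" * 5000

# ...while seeded pseudo-random noise is nearly incompressible.
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(10000))
```

Any better heuristic (a smarter compressor, a search over short programs) can only lower the bound; the true minimum stays out of reach, but the bounds are still enough to compare and reward candidate theories.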
If you wanted to turn this into a social process, you could reward people for discovering shorter programs than the shortest-currently-known for existing theories (proving that they were simpler than known up to that point), as well as for collecting new evidence to discriminate between them.