So the constant value c is 2, then, at least once we discard the random variable x. I just checked with a calculator: sin(2+2) is roughly −0.76, rounded to two decimal places, and sin(4) gives the same value, so the lowest constant is 2 (unless it’s lower than that), and I suspect it still holds for all values of the random variable x.
Ah true, in this case we can take the maximum constant to be 2.
Let’s try a different example. Consider the functions |x-c| for different values of c. Then if you take any two such functions, |x-a| and |x-b|, their difference is bounded by a constant |a-b| for all x. But it’s not clear how to say which function has the “lowest additive constant”.
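For what it’s worth, the bound here is just the reverse triangle inequality:

$$\bigl|\,|x-a| - |x-b|\,\bigr| \;\le\; \bigl|(x-a)-(x-b)\bigr| \;=\; |a-b| \quad \text{for all } x.$$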
Yeah, I’m really hoping Kolmogorov Complexity isn’t like this. Though when you say it’s not clear that there’s a minimum/maximum constant for a function, does that mean it’s known to lack a minimum or maximum, or that nobody can prove whether it has one?
I’d say the main difference is that it involves adding, not subtracting, and I suspect there’s a trivial minimum equal to 0, because I suspect a programming language can’t subtract from the Kolmogorov Complexity: it’s defined as the length of the shortest program in a Turing-complete language that outputs a given object. The challenge is whether that minimum of 0 is actually attained, or whether the lower bound is higher than that for any programming language that is still Turing-complete.
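For reference, the additive constant under discussion is the one from the invariance theorem: for any two Turing-complete languages $U$ and $V$ there is a constant $c_{U,V}$, independent of the object $x$, such that

$$K_U(x) \;\le\; K_V(x) + c_{U,V} \quad \text{for all } x,$$

and the question in this thread is whether some language minimizes that constant.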
I think K-complexity is actually like that. For any given object you can define a Turing-complete language in which the empty program outputs that object. It could literally be “Python, except the empty program outputs this specific object”.
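To make that concrete, here’s a minimal, purely illustrative sketch of such a wrapper interpreter; the names (SPECIAL_OBJECT, run) are made up for the example, not any real tool.

```python
# A minimal sketch (purely illustrative) of "Python, except the empty
# program outputs this specific object": a wrapper interpreter that
# special-cases the empty program and otherwise defers to ordinary Python.
import sys

SPECIAL_OBJECT = "the specific object we want for free"  # hypothetical choice

def run(source: str) -> None:
    if source == "":
        # In this language the empty program outputs the chosen object,
        # so the object's complexity relative to this language is 0.
        print(SPECIAL_OBJECT)
    else:
        # Every other program behaves exactly as it does in Python,
        # so the language is still Turing-complete.
        exec(source)

if __name__ == "__main__":
    # Treat a missing filename argument as the empty program.
    source = open(sys.argv[1]).read() if len(sys.argv) > 1 else ""
    run(source)
```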
So when you say that K-complexity is like the function you described earlier, does that mean there’s provably no minimum, or that you can’t prove whether the minimal constant for Kolmogorov Complexity exists?
It’s still possible that I’m misunderstanding the question, but if it means what I think it means, then the answer is “provably no minimum”.
I can accept this as an answer, though you’d have to show in more detail than you have in this comment thread why Kolmogorov Complexity’s additive constant lacks a minimum.
However, “Python, except the empty program outputs this specific object” would probably be more complex than “Python” when written in most programming languages. So I wonder whether it would be possible to define objective complexity as an eigenvector (not sure I am using the right word here) of relative complexities. As in: “simple” means “simple, when programmed in a simple language”.
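One way to read that eigenvector idea, as a purely hypothetical sketch: suppose we had a matrix whose entry (i, j) is the length of the shortest interpreter for language j written in language i. Then a “simplicity” score could be a fixed point where each language is weighted by the scores of the languages that describe it cheaply, which is roughly a principal-eigenvector computation. The matrix below is made-up toy data, just to show the shape of the calculation.

```python
# Purely hypothetical sketch of "complexity as an eigenvector of relative
# complexities". The matrix entries are made-up toy numbers, not measurements:
# cost[i][j] = length of the shortest interpreter for language j written in i.
import math

languages = ["Python", "Python+magic-object", "Lisp"]  # hypothetical set
cost = [
    [0.0, 50.0, 30.0],   # interpreters written in Python
    [10.0, 0.0, 30.0],   # interpreters written in Python+magic-object
    [25.0, 75.0, 0.0],   # interpreters written in Lisp
]

# Turn costs into affinities: cheaper to describe => larger weight.
affinity = [[math.exp(-c) for c in row] for row in cost]

def simplicity_scores(matrix, iterations=100):
    """Power iteration on the transpose: a language scores highly when
    languages that already score highly can describe it cheaply."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[i] for i in range(n)) for j in range(n)]
        total = sum(w)
        v = [x / total for x in w]
    return v

for name, score in zip(languages, simplicity_scores(affinity)):
    print(f"{name}: simplicity score {score:.3f}")
```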