For simplicity, let’s assume that the time taken is a sum of products that are all of the same order as one another.
You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That’s a strong assumption.
To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality.
What do you mean by “improve any one factor”? In a complexity function, the variables represent the relevant dimensions of the input. How do you “improve” them?
And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth. They can’t: complexity is bounded from below, which means that for each problem there exists a maximally efficient algorithm. Similarly, once you specify the details of the hardware, there exists a maximally efficient program for that specific hardware. Once you get there, you can’t improve any further.
You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That’s a strong assumption.
Thanks for your thoughts. I’m not assuming that.
In fact, my analysis isn’t asymptotic at all, but rather for fixed (but large) input sizes. In the asymptotic setting, moving to a new algorithm can yield an improvement that is no longer in the same order equivalence class (I could have elaborated more on this, but this was already a PS).
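As a toy illustration of the difference (the step-count formulas below are made up for illustration, not taken from the post): at a fixed input size, switching algorithms looks like a plain constant-factor change, but that “constant” depends on which size you fixed.

```python
import math

n = 10**6  # a fixed, large input size

# Assumed step counts for two hypothetical algorithms for the same task.
steps_old = n * n             # e.g. a quadratic-time algorithm
steps_new = n * math.log2(n)  # e.g. an n log n replacement

# At this fixed n, the switch is just a constant-factor speedup...
print(f"speedup at n={n}: {steps_old / steps_new:.0f}x")

# ...but the “constant” grows with n, which is why the replacement sits
# in a different order equivalence class in the asymptotic setting.
for n in (10**3, 10**6, 10**9):
    print(n, round(n * n / (n * math.log2(n))))
```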
What do you mean by “improve any one factor”? In a complexity function, the variables represent the relevant dimensions of the input. How do you “improve” them?
I meant “factor” in the “part of a multiplication” sense. For instance, say you have an algorithm with an outer loop A that calls an inner loop B on every step. If a and b are their respective step counts, the total time is ab, and reducing either a or b by a given factor reduces the product by that same factor. That reduction could come from identifying steps that turn out to be unnecessary.
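A minimal sketch of that picture (the loop bodies are hypothetical stand-ins):

```python
# Outer loop A runs `a` steps and calls inner loop B, which runs `b`
# steps each time, so the total work is the product a * b.
def run(a, b):
    steps = 0
    for _ in range(a):      # loop A
        for _ in range(b):  # loop B
            steps += 1      # stand-in for one unit of real work
    return steps

assert run(1000, 200) == 200_000

# Discovering that half of B’s steps are unnecessary improves the factor
# b by a constant, and the whole product drops by that same constant:
assert run(1000, 100) == 100_000  # twice as fast overall
```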
And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth.
I am not under this impression. Sorry for the confusion.
Algorithms can’t be improved indefinitely, nor can population or the economy. But we can still talk of exponential growth over a range of time as we chip away at the obvious issues.
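The standard toy example here is logistic growth (all numbers below are made up): the quantity is hard-capped, yet over a long early range it is indistinguishable from an exponential.

```python
import math

K, r, x0 = 1_000_000.0, 0.5, 1.0  # cap, growth rate, starting value

def logistic(t):
    # Bounded above by K for all t.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

def pure_exponential(t):
    return x0 * math.exp(r * t)

for t in (0, 5, 10, 20, 40):
    print(t, round(logistic(t)), round(pure_exponential(t)))
# The two columns track each other closely until the cap starts to bite.
```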
“let’s assume that the time taken is a sum of products”
This is the definition of a polynomial, although you might have intended it to be a polynomial in something other than the input dimensions. In that case, I’m not sure I’ve understood clearly what you are arguing.
Sorry for my lack of clarity. I was making the point that if the run time can be expressed as a sum of products, where the summands are the times taken for some sub-algorithms, then we can attempt to tweak the sub-algorithms to reduce the time taken for the individual tasks.
The time taken for the sub-algorithm may or may not be polynomial in the input dimensions.
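Concretely, here is the shape of decomposition I have in mind (the sub-algorithms and their costs are invented for illustration; note the second summand isn’t polynomial in n):

```python
# Total run time as a sum of products, each summand being the cost of a
# sub-algorithm. All constants are made up.
def total_time(n, m, setup_const=5):
    setup    = setup_const * n * m  # sub-algorithm 1 (polynomial)
    search   = m * 2**n             # sub-algorithm 2 (exponential in n)
    teardown = 3 * n                # sub-algorithm 3
    return setup + search + teardown

# Halving the work in the setup sub-algorithm shrinks that summand
# alone, leaving the other summands untouched:
baseline = total_time(20, 50)
improved = total_time(20, 50, setup_const=2.5)
assert improved < baseline
```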
Ok.