There are a variety of things going on here. One is that those speedups helped. A major part of that is imprecision on my part. This is actually a good example of an interesting (and, from the perspective of what is discussed above, potentially dangerous) phenomenon. Most of the time when we discuss the efficiency of algorithms we are looking at big-O bounds. But those by their nature have constants built into them. In the case of linear programming, it turned out that we could drastically improve the constants. This is related to what I discussed above: even if one has P != NP, one could still have effective, efficient ways of solving all instances of 3-SAT that have fewer than, say, 10^80 terms. That sort of situation could be about as bad from a FOOM perspective. For the immediate purpose being discussed above, the Euclidean algorithm example is probably better, since in that case we actually know that there's not much room for improvement in the constants.
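To make the Euclidean algorithm point concrete, here is a minimal sketch (in Python, not part of the original discussion) that counts division steps on the known worst-case inputs, consecutive Fibonacci numbers. By Lamé's theorem the step count on such inputs is essentially the maximum possible, which is the sense in which the constant hidden in the O(log min(a, b)) bound has little room for improvement.

```python
from math import gcd  # standard-library reference for checking the result

def euclid_gcd(a: int, b: int) -> tuple[int, int]:
    """Euclidean algorithm, returning gcd(a, b) and the number of division steps."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# Worst case: consecutive Fibonacci numbers. By Lamé's theorem the number of
# division steps is at most roughly 5 times the number of decimal digits of the
# smaller argument, and Fibonacci inputs come within one step of that bound,
# so the constant in the O(log min(a, b)) running time is already essentially tight.
if __name__ == "__main__":
    fib_prev, fib_curr = 1, 1
    for _ in range(90):
        fib_prev, fib_curr = fib_curr, fib_prev + fib_curr
    g, steps = euclid_gcd(fib_curr, fib_prev)
    assert g == gcd(fib_curr, fib_prev) == 1  # consecutive Fibonacci numbers are coprime
    print(f"gcd = {g}, division steps = {steps}")
```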