Back in high school I discovered this by accident (yes, I was really bored!). I suppose it’s nothing new, but it turns out that this works for more than simple squares and cubes:
Given any sequence of numbers, keep finding differences of differences until you hit a constant; the number of iterations needed is the maximum exponent in the formula that produced the numbers. That is, this works even if there are other terms, regardless of whether any or all terms have coefficients other than 1.
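If anyone wants to play with this, here’s a quick sketch of the procedure in Python (my own throwaway code; the function name and the example polynomial are just for illustration):

    # Repeatedly take differences of consecutive terms until the sequence
    # is constant; report how many passes it took.
    def passes_until_constant(seq):
        count = 0
        while len(set(seq)) > 1:
            seq = [b - a for a, b in zip(seq, seq[1:])]
            count += 1
        return count

    # f(x) = 3x^3 - 2x + 7 sampled at x = 0..9: three passes, matching the
    # maximum exponent, even with the other terms and coefficients present.
    print(passes_until_constant([3*x**3 - 2*x + 7 for x in range(10)]))  # 3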
So did I! And in general the nth order finite differences of nth powers will be n factorial.
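That constant is easy to check numerically, e.g. for n = 5 (a throwaway snippet of my own, nothing rigorous):

    # Iterate differences of x^5; the constant you land on is 5! = 120.
    from math import factorial

    seq = [x**5 for x in range(10)]
    for _ in range(5):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    print(seq)              # [120, 120, 120, 120, 120]
    print(factorial(5))     # 120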
This is obvious after you learn calculus. The “nth difference” corresponds to nth derivative (a sequence just looks at integer points of a real-valued function), so clearly a polynomial of degree n has constant nth derivative. It would be even more accurate to say that an nth antiderivative of a constant is precisely a degree n polynomial.
Differences and derivatives are not the same, though there is the obvious analogy. If you want to take derivatives and antiderivatives, you want to write in the x^k basis or the x^k/k! basis. If you want to take differences and sums, you want to write in the falling factorial basis or the x choose k basis.
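To make the analogy concrete (my own illustrative snippet; the helper name is not standard): the forward difference acts on falling factorials x(x-1)...(x-k+1) the same way d/dx acts on x^k, dropping the degree by one and multiplying by k.

    # Check that delta(ff(x, k)) = k * ff(x, k - 1) at a few integer points.
    def ff(x, k):
        # falling factorial x(x-1)...(x-k+1)
        out = 1
        for i in range(k):
            out *= (x - i)
        return out

    k = 4
    for x in range(8):
        assert ff(x + 1, k) - ff(x, k) == k * ff(x, k - 1)
    print("difference of the degree-4 falling factorial is 4 times the degree-3 one")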
If you get a non-constant, yes. For a linear function, f(a+1) - f(a) = f’(a). Inductively you can then show that the nth one-step difference of a degree n polynomial f at a point a is f^(n)(a). But this doesn’t hold for any order other than n. Thanks for pointing that out!
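A quick numerical illustration of both halves of that (my own snippet; the specific cubic is arbitrary):

    # For f(x) = 2x^3 + x^2 - 4x + 1, the third one-step difference equals
    # f'''(a) = 3! * 2 = 12 at every a, but the first difference at a is
    # not f'(a) once the degree is above 1.
    def f(x):
        return 2*x**3 + x**2 - 4*x + 1

    def nth_difference(g, a, n):
        if n == 0:
            return g(a)
        return nth_difference(g, a + 1, n - 1) - nth_difference(g, a, n - 1)

    a = 5
    print(nth_difference(f, a, 3))                     # 12, i.e. f'''(a)
    print(nth_difference(f, a, 1), 6*a**2 + 2*a - 4)   # 189 vs f'(a) = 156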
Ah, yes, that’s a good point, because the leading coefficient will be the same whether you use the x^k basis or the falling factorial basis.
Neither finite differences nor calculus is new to me, but I hadn’t picked up the connection between the two until now, and it really is obvious.
This is why I love mathematics—there’s always a trick hidden up the sleeve!
Notice that the result doesn’t hold if the points aren’t evenly spaced, so the solution must use this fact.
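For instance (a quick snippet of my own), sampling x^2 at unevenly spaced points already breaks it:

    # Second differences of x^2 are no longer constant once the sample
    # points are unevenly spaced.
    xs = [0, 1, 3, 4, 7, 8, 10]
    seq = [x**2 for x in xs]                      # [0, 1, 9, 16, 49, 64, 100]
    d1 = [b - a for a, b in zip(seq, seq[1:])]    # [1, 8, 7, 33, 15, 36]
    d2 = [b - a for a, b in zip(d1, d1[1:])]      # [7, -1, 26, -18, 21]
    print(d2)                                     # not constant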
Iterated finite differences correspond to derivatives in some non-obvious way I can’t remember (and can’t be bothered to find out).
Your procedure (though not necessarily your result) breaks for e^x
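Each difference pass just rescales e^x by a factor of e - 1, so you never hit a constant; a quick illustration (my own snippet):

    # Each pass of differences on e^x yields (e - 1) * e^x, so the values
    # keep growing like e^x no matter how many passes you take.
    from math import exp

    seq = [exp(x) for x in range(8)]
    for i in range(4):
        seq = [b - a for a, b in zip(seq, seq[1:])]
        print(i + 1, [round(v, 3) for v in seq[:3]])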
Really for non-polynomials, and I think that was implied by the phrasing.
I agree that it’s implied by working out the logic and finding that it doesn’t apply elsewhere. I disagree that it is implied by the phrasing.
Given any sequence of numbers

doesn’t seem to restrict it, and though I suppose

the number of iterations needed is the maximum exponent in the formula that produced the numbers

implies that there is a “maximum exponent in the formula” and with slightly more reasoning (a number of iterations isn’t going to be fractional) that it must be a formula with a whole number maximum exponent, I don’t see anything that precludes, for instance, x^2 + x^(1/2), which would also never go constant.
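A quick numerical check of my own confirms that example never settles:

    # Differences of x^2 + x^(1/2): the sqrt term keeps every pass from
    # becoming exactly constant, however many times you iterate.
    seq = [x**2 + x**0.5 for x in range(12)]
    for i in range(4):
        seq = [b - a for a, b in zip(seq, seq[1:])]
        print(i + 1, [round(v, 4) for v in seq[:4]])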
Sorry, I was using the weak sense of “implies”, and probably extending too much charity.
And I usually only look at this sort of thing in the context of algorithm analysis, so I’m used to thinking that x^2 is pretty much equal to 5x^2 + 2 log x + x^(1/2) + 37.