As a software developer, I completely expect this; it's just an everyday consequence of limited-precision math. You'll get the same kind of anomalies in almost any programming language, and other kinds of calculations can produce far worse inaccuracies than this (it seems kind of odd to single out standard deviations, which are relatively well-behaved; maybe Excel used to have a worse algorithm). John Gustafson has proposed unums, and more recently posits, to mitigate the problem (preferably in hardware), but it's not practical to eliminate it entirely.
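The canonical one-liner reproduces in any language using IEEE 754 doubles; here it is in a Python session:

```python
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False
```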
I tried a set of 104,857,600 values in Excel 2010 for Windows, plus a second set with 1,000,000 added to each value, and the standard deviations diverge at the 9th decimal:
           Set 1                  Set 2 (each value + 1,000,000)
Count      104,857,600            104,857,600
Sum        5,242,888,545          104,862,842,888,545
Average    50.00008149            1,000,050.00008149
STDEV.P    29.15276094096330000   29.15276094160490000
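You can reproduce the same kind of divergence outside Excel by comparing a naive one-pass variance against a numerically stable one. A minimal sketch in Python (my own illustration with synthetic uniform data, not the original set; Excel's actual internal algorithm is undocumented):

```python
import math
import random

def stdev_naive(xs):
    """Population stdev via the one-pass 'textbook' formula
    sqrt(E[x^2] - E[x]^2), which suffers catastrophic cancellation
    when the mean is large relative to the spread."""
    n = len(xs)
    mean = sum(xs) / n
    mean_sq = sum(x * x for x in xs) / n
    return math.sqrt(max(mean_sq - mean * mean, 0.0))  # clamp tiny negatives

def stdev_welford(xs):
    """Population stdev via Welford's online algorithm, which
    accumulates squared deviations from a running mean and so
    avoids the big-number cancellation above."""
    mean = 0.0
    m2 = 0.0
    for i, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / i
        m2 += delta * (x - mean)
    return math.sqrt(m2 / len(xs))

random.seed(1)                                     # deterministic run
data = [random.uniform(0.0, 100.0) for _ in range(1_000_000)]
shifted = [x + 1_000_000.0 for x in data]          # same spread, huge offset

print(stdev_naive(data), stdev_naive(shifted))     # diverges after a few digits
print(stdev_welford(data), stdev_welford(shifted)) # agrees to ~10 digits
```

The shift doesn't change the true standard deviation at all, but the naive formula subtracts two huge, nearly equal quantities, so most of its significant digits cancel; Welford's method only ever works with deviations from the running mean, which stay small.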