The claim that a given algorithm or circuit really adds two numbers is very precise. Even a single pair of numbers that it adds incorrectly refutes the claim, and quite possibly renders the algorithm/circuit useless.
For almost every arithmetic operation in actual computers, on every type of number, there are many inputs for which that operation returns the wrong result. (Yeah, arbitrary-size integers are an exception, but most programs don’t use those, and even they can fail if you try to make a number that doesn’t fit in memory.) But still, lots of algorithms are useful.
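A minimal C illustration of the kind of case this presumably refers to: floating-point addition routinely returns a rounded result rather than the exact sum (integer overflow, discussed below, is another such case):

```c
#include <stdio.h>

int main(void) {
    /* IEEE-754 doubles cannot represent 0.1, 0.2, or 0.3 exactly, so
       "adding two numbers" returns the nearest representable value,
       not the mathematically correct sum. */
    double sum = 0.1 + 0.2;
    printf("%.17g\n", sum);     /* prints 0.30000000000000004 */
    printf("%d\n", sum == 0.3); /* prints 0: the sum is not 0.3 */
    return 0;
}
```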
Are you referring to overflow? If so, that’s the right result: the function being computed is “adding integers mod N”, not “adding integers” (I agree I said “adding integers”, but addition mod N is a different, and still very precise, claim). Otherwise it’s a hardware bug, and quality assurance is supposed to get rid of those.
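For what it’s worth, C makes this explicit for unsigned types: unsigned arithmetic is defined to wrap modulo 2^N, so the adder’s output is the correct answer to the mod-2^N question. A minimal sketch:

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Unsigned overflow is well-defined in C: the result wraps
       modulo 2^32, i.e. this is exactly "adding integers mod N". */
    uint32_t x = UINT32_MAX;    /* 2^32 - 1 */
    uint32_t y = x + 1u;        /* wraps to 0, by definition */
    printf("%" PRIu32 "\n", y); /* prints 0 */
    return 0;
}
```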
I still don’t think the programming example supports your point.
For example, in C and C++, signed integer overflow is undefined behavior: the compiler is allowed to break your program if it happens. Undefined behavior is useful for optimizations (for example, a compiler can fold x < x + 1 to true, which helps eliminate branches), and there have been popular programs that quietly broke when a new compiler release got better at such optimizations. John Regehr’s blog is a great source on this.
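A minimal sketch of the kind of breakage meant, assuming gcc or clang with optimizations on (the exact behavior depends on the compiler and flags):

```c
#include <stdio.h>
#include <limits.h>

/* Signed overflow is undefined behavior, so an optimizing compiler
   (e.g. gcc -O2) may fold this comparison to 1: it is allowed to
   assume x + 1 never wraps around. */
static int always_less(int x) {
    return x < x + 1;
}

int main(void) {
    /* At INT_MAX, two's-complement wraparound would make x + 1 equal
       to INT_MIN, yet the optimized build can still print 1 here. */
    printf("%d\n", always_less(INT_MAX));
    return 0;
}
```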
Almost nothing in programming is 100% reliable; most things just kinda seem to work. Maybe it would be better to use an example from math.
The claim was that if the arithmetic circuit that is supposed to add numbers fails 0.01% of the time, the computer crashes, which is true.
You did also say that
For almost every arithmetic operation in actual computers, on every type of number, there are many inputs for which that operation returns the wrong result. (Yeah, arbitrary-size integers are an exception, but most programs don’t use those, and even they can fail if you try to make a number that doesn’t fit in memory.) But still, lots of algorithms are useful.