The sum of two numbers should have a precision no higher than the operand with the highest precision. For example, adding 0.1 + 0.2 should yield 0.3, not 0.30000000000000004.
I would argue that the precision should be capped at the lowest precision of the operands. In physics, if you add two lengths, 0.123m + 0.123456m should be rounded to 0.246m.
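A minimal sketch of that capping rule in Python, assuming "precision" just means the number of decimal places, and that the operands arrive as decimal literals rather than binary floats (otherwise the precision information is already gone, as the next paragraph says; the helper name is made up):

```python
from decimal import Decimal

def add_capped(a: str, b: str) -> Decimal:
    """Add two decimal literals, rounding the sum to the coarser
    (fewer decimal places) of the two operands."""
    da, db = Decimal(a), Decimal(b)
    # The exponent is minus the number of decimal places,
    # e.g. Decimal("0.123").as_tuple().exponent == -3.
    # The larger exponent belongs to the less precise operand.
    places = max(da.as_tuple().exponent, db.as_tuple().exponent)
    return (da + db).quantize(Decimal(1).scaleb(places))

print(0.1 + 0.2)                        # 0.30000000000000004 (binary floats)
print(add_capped("0.1", "0.2"))         # 0.3
print(add_capped("0.123", "0.123456"))  # 0.246
```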
Also, IEEE 754 fundamentally does not carry any information about the precision of a number. If you want to track that information correctly, you can use two floating point numbers and do interval arithmetic. There is even an IEEE standard for that nowadays (IEEE 1788).
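A toy version of the two-float idea (this only sketches the enclosure property, not the IEEE 1788 interface; since plain Python gives no access to directed rounding modes, the endpoints are conservatively widened by one ULP with math.nextafter, Python 3.9+):

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Widen each endpoint outward by one ULP so the exact sum
        # is guaranteed to lie inside the result.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other: "Interval") -> "Interval":
        # With possibly negative endpoints, the extremes can be
        # any of the four corner products.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(corners), -math.inf),
                        math.nextafter(max(corners), math.inf))

x = Interval(0.1, 0.1)
y = Interval(0.2, 0.2)
print(x + y)  # a few-ULP-wide interval guaranteed to contain the exact sum
```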
Of course, this comes at a cost. While monotonic functions are easy to lift to interval arithmetic (just evaluate them at the endpoints), the general problem of finding the extremal values of a function over some high-dimensional domain is hard. Still, if you know how the function is composed out of simpler operations, you can at least find some bounds, if not necessarily tight ones, as the example below shows.
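The classic illustration of why those composed bounds can be loose is the dependency problem: each occurrence of a variable gets treated as an independent interval (toy tuple intervals here, outward rounding omitted):

```python
# f(x) = x - x is exactly 0 for any x, but composing interval
# operations cannot see that both operands are the same variable:
def sub(a, b):  # intervals as (lo, hi) pairs
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
print(sub(x, x))  # (-1.0, 1.0): a valid enclosure of 0, just a very loose one
```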
Or you could do what physicists do (at least when they are taking lab courses) and track physical quantities as a value plus an uncertainty, and do uncertainty propagation. (This might not be 100% kosher when you first calculate multiple intermediate quantities from the same measurement, whose errors are thus not independent, and then continue to treat them as if they were. But that might just give you larger error estimates.) Also, this relies on your function being sufficiently well described in the region of interest by its partial derivatives at the central point. If you calculate the uncertainty of f(x,y) = xy for x = 0.1±1, y = 0.1±1 from the partial derivatives, you will not have fun.
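Concretely, first-order propagation for a product (a standalone sketch, assuming independent errors; the helper name is mine) gives sigma_f = sqrt((y*sigma_x)^2 + (x*sigma_y)^2), which for the numbers above is absurdly optimistic:

```python
import math

def propagate_product(x, sx, y, sy):
    """First-order uncertainty propagation for f(x, y) = x * y,
    assuming independent errors: sigma_f^2 = (y*sx)^2 + (x*sy)^2."""
    return x * y, math.hypot(y * sx, x * sy)

f, sf = propagate_product(0.1, 1.0, 0.1, 1.0)
print(f"f = {f:.2f} +/- {sf:.2f}")  # f = 0.01 +/- 0.14

# But x*y over [0.1-1, 0.1+1]^2 actually ranges over roughly [-0.99, 1.21]:
corners = [a * b for a in (-0.9, 1.1) for b in (-0.9, 1.1)]
print(min(corners), max(corners))  # the linear estimate is far too small
```

The linear estimate fails here because the uncertainties are huge compared to the central values, so the neglected higher-order terms dominate.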