And, pre-emptively, I do feel comfortable providing two digits of precision. Not because I have excessive confidence in my ability to quantise my subjective judgements but rather because using significant figures as a method of communicating confidence or accuracy is a terrible idea.
This seems right but I’m not sure why. Can you articulate your reasons?
Let’s see. I need to purge my conclusion cache. (What’s the name for Eliezer’s post on not asking ‘why’ but asking ‘if’? I definitely needed to apply that.)
Yes, approximately what FAWS said. If I know I’m only accurate to plus or minus 0.1 and the value I calculate is 0.75, then it would be silly to round off to 0.8. Compressing the two pieces of information (the number and its precision) into one number is just lossy. It can also become a problem when writing, say, 100, although that can technically be avoided by always using scientific notation.
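The ambiguity with 100 and the scientific-notation fix can be sketched concretely (a minimal illustration, not from the original comment):

```python
# "100" is ambiguous: is it one, two, or three significant figures?
# Scientific notation makes the intended precision explicit.
print(f"{100:.0e}")  # one significant figure
print(f"{100:.1e}")  # two significant figures
print(f"{100:.2e}")  # three significant figures
```

Each format spells out exactly how many digits are claimed to be meaningful, so precision no longer has to be inferred from trailing zeros.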
Not wedrifid, but you needlessly lose some small amount of information. The digits after the last significant one are still your best estimate of the actual value, so by rounding them away you systematically do worse than you could.
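The "systematically do worse" claim can be checked with a quick simulation (a sketch under the assumption that the true value is uniformly distributed within the stated ±0.1 band around the estimate):

```python
import random

# Hypothetical estimate from the thread: 0.75, accurate to +/- 0.1.
estimate = 0.75
uncertainty = 0.1
rounded = round(estimate, 1)  # the significant-figures version: 0.8

# Draw many possible "true" values consistent with the stated precision.
random.seed(0)
true_values = [
    random.uniform(estimate - uncertainty, estimate + uncertainty)
    for _ in range(100_000)
]

# Mean squared error of reporting the full estimate vs. the rounded one.
mse_full = sum((t - estimate) ** 2 for t in true_values) / len(true_values)
mse_rounded = sum((t - rounded) ** 2 for t in true_values) / len(true_values)
```

Under this assumption, `mse_full` comes out smaller than `mse_rounded`: the rounded report carries a systematic 0.05 offset on top of the same underlying uncertainty, which is exactly the avoidable loss the comment describes.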