It’s the same as programmers insisting on introducing abstractions that decouple their code from the frameworks and libraries they’re using: modularity to prevent dependency hell.
I think it is the same as programmers choosing to use languages with built-in support for floating-point calculations and importing standard math and stats libraries as appropriate. This is an alternative to rolling your own math functions to model your calculations based on integers or bytes.
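To make the contrast concrete, here is a minimal sketch (Python is an arbitrary choice of language here, and the sample data is made up): built-in floats plus the standard library on one side, hand-rolled fixed-point on scaled integers on the other.

```python
import math
import statistics

# Built-in floats plus standard libraries: the numerics are someone else's problem.
samples = [1.25, 2.5, 3.75, 5.0]
mean = statistics.mean(samples)
spread = statistics.stdev(samples)
print(mean, spread, math.sqrt(mean**2 + spread**2))

# Rolling your own on scaled integers (fixed-point with 2 decimal digits):
# you now own the scaling, rounding, and overflow behaviour yourself.
SCALE = 100
fixed_samples = [125, 250, 375, 500]      # 1.25, 2.50, ... times SCALE
fixed_mean = sum(fixed_samples) // len(fixed_samples)
print(fixed_mean / SCALE)                 # 3.12: truncation already bites, vs 3.125 above
```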
Are you suggesting that modeling real numbers with floating point is a good practice?
Yes, it is a standard practice, and it may be the best compromise available for a programmer or a team on a limited time budget. But the enormous differences between real numbers and floating-point numbers mean that everything that was true upstream in math-land about real numbers becomes suspect and has to be re-checked, or transformed into corresponding not-quite-the-same theorems and proofs.
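Two tiny examples of the kind of statement that is a theorem about the reals but fails for IEEE doubles:

```python
a, b, c = 0.1, 0.2, 0.3

# Addition is associative over the reals, but not over floats:
print((a + b) + c == a + (b + c))   # False
print((a + b) + c, a + (b + c))     # 0.6000000000000001 vs 0.6

# (x + y) - y == x always holds over the reals; here the 1.0 is absorbed:
x, y = 1.0, 1e16
print((x + y) - y)                  # 0.0
```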
If we (downstream consumers of mathematics) could get mathematicians to use realistic primitives, then we could put a lot of numerical analysts out of work (free up their labor to do something more valuable).
Do you think some constructivist representation of numbers can do better than IEEE floats at removing the need for numerical analysis in most engineering tasks, while still staying fast enough? I’m veeeeery skeptical. It would be a huge breakthrough if it were true.
Yes, that’s my position. In fact, if you had hardware support for an intuitionistic / constructivist representation (intervals, perhaps), my bet would be that the circuits would be simpler than the floating-point hardware implementing the IEEE standard now.
I’m not an expert in the field, but it seems to me that intervals require strictly more complexity than IEEE floats (because you still need to do floating-point arithmetic on the endpoints) and will be unusable in many practical problems because they will get too wide. At least that’s the impression I got from reading a lot of Kahan. Or do you have some more clever scheme in mind?
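For concreteness, here is roughly what the naive interval scheme looks like, as a sketch only: it ignores the directed rounding of endpoints that a real interval library would need, and it already shows both problems, since the endpoint arithmetic is still ordinary floating point and the widths blow up as soon as a variable appears more than once.

```python
class Interval:
    """Closed interval [lo, hi] with naive endpoint arithmetic (no directed rounding)."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.9, 1.1)
print(x - x)      # roughly [-0.2, 0.2], not [0, 0]: the dependency problem
print(x * x - x)  # roughly [-0.29, 0.31], though the true range is about [-0.09, 0.11]
```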
Yes, if you have to embrace the same messy compromises, then I am mistaken. My belief (which is founded on studying logic and mathematics in college, and then software development after college) is that better foundations, with effort, show through to better implementations.
Good, certainly: modeling real numbers with floating point is a good practice, just not a universally optimal one. There are times when unlimited precision is preferable, despite the higher computational overhead. There are libraries for that too.
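For example, Python’s standard library already ships exact rationals and decimals with user-chosen precision; a quick illustration, with the usual caveat that the speed cost can be large:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

print(0.1 + 0.2 == 0.3)                                       # False with binary floats
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True: exact rationals

getcontext().prec = 50              # 50 significant digits, chosen arbitrarily here
print(Decimal(1) / Decimal(7))      # 0.142857142857... carried to 50 significant digits
```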
Your shot.