My point is that the process of mathematics is (to a degree) either invented or discovered, and under the invented hypothesis, where one adheres to strict physicalist-style nominalism, the very act of predicting that the solutions to very real problems depend on abstract insight is literally incompatible with that position, to the point where seeing it done, even once, forces drastic revisions to your own ontological model of the world.
One account is that the particular grouping of features into a definition is “invented”, in the same way that the concept of a “tree” is invented; but there is still a pattern in the world corresponding to “tree”. But from your original post I think we’re in agreement on this point?
The mathematics says: ‘if we assume such and such holds about, say, partial differential equations, then, through a chain of abstract reasoning, we get something like a concrete result within the real world.’ It’s essentially this process that disturbs me. This process can fail in the real world, especially when we try to model specific phenomena, but those models have to obey the PDE conditions (from our example), and this includes both right and wrong models.
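To make that chain tangible, here is a minimal sketch (the heat equation, the parameters, and all the numbers are my own toy example, not anything from the discussion): assume the 1-D heat equation holds, reason abstractly from that assumption to both an analytic solution and a discrete update rule, and you land on a concrete numerical prediction you can check.

```python
# Toy illustration: abstract assumptions about a PDE yield a concrete prediction.
# We assume the 1-D heat equation u_t = alpha * u_xx on [0, 1], with u = 0 at the
# boundaries and u(x, 0) = sin(pi * x). Abstract reasoning (separation of
# variables) predicts u(x, t) = exp(-alpha * pi^2 * t) * sin(pi * x).
import numpy as np

alpha = 1.0                  # diffusivity (arbitrary choice for the example)
nx, dx = 51, 1.0 / 50
dt = 0.4 * dx**2 / alpha     # step size satisfying the explicit stability condition
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)        # initial condition

t = 0.0
while t < 0.1:
    # explicit finite-difference update; boundary values stay fixed at 0
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt

exact = np.exp(-alpha * np.pi**2 * t) * np.sin(np.pi * x)
print("max error vs. analytic prediction:", np.abs(u - exact).max())
```

The point of the toy is only that both the simulation and the analytic formula are downstream of the same abstract assumption, and a model that violated the PDE conditions would diverge from both.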
I believe Pattern’s reasoning above could be summed up by saying that abstraction is a way for us to model the real world, and the process of reasoning abstractly is a way for us to run some sort of efficient simulation with our models. (@Pattern Is that a fair one-line summary?)
In which case, my understanding of your original question is one of the following two: why is it the case that the world can be *efficiently* simulated? Perhaps your question is even one level deeper: why is it the case that the world can be *simulated* at all? After all, it is possible that the only way to predict the outcome of a physical process is to observe the physical process. (Is this a fair summary of what disturbs you about the PDEs example?)
This could be rephrased slightly more concretely as a question about the Church–Turing thesis: how come there is such a thing as a *universal* Turing machine? Made even more concrete, it turns into a deep physics problem: what kind of laws of physics permit the existence of a universal Turing machine? That’s a deep (and technical!) question, which in this particular form was popularized by David Deutsch. This blog post by Michael Nielsen is a good general-audience introduction.
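To make “universal” concrete, here is a minimal sketch (the encoding and the example machine are my own, purely illustrative): a universal machine is one fixed program that takes the description of any other machine as data and simulates it. The interpreter below never changes; only the transition table fed to it does.

```python
# A single fixed interpreter that simulates *any* Turing machine given its
# transition table as data -- the essence of universality.
from collections import defaultdict

def run_tm(delta, start, accept, tape, max_steps=10_000):
    """delta maps (state, symbol) -> (new_state, new_symbol, move in {-1, +1})."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank symbol is "_"
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            break
        state, cells[head], move = delta[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1))

# An example machine, fed to the interpreter as data: flip every bit, then halt.
flip = {
    ("s", "0"): ("s", "1", +1),
    ("s", "1"): ("s", "0", +1),
    ("s", "_"): ("halt", "_", +1),
}
print(run_tm(flip, start="s", accept="halt", tape="0110"))  # -> 1001_
```

The physics question is then why our universe allows any physical system to play the role of `run_tm` for all the others, rather than each physical process being predictable only by running that very process.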