Thanks :) Can you elaborate a bit? Are you saying that I overreached, and that largely there should be some transformed domain where the model turns out to be simple, but that it is not guaranteed to exist for every model?
I’m not sure “overreached” is quite my meaning. Rather, I think I disagree with more or less everything you said, apart from the obvious bits :-).
And that is the reason linear models are mathematically tractable: they form such a small space of possible models.
I don’t think it has anything much to do with the size of the space. Linear things are tractable because vector spaces are nice. The only connection between the niceness of linear models and the fact that they form such a small fraction of all possible models is this: any “niceness” property is a constraint on the models that have it, so being very “nice” means satisfying lots of constraints, and hence “nice” things have to be rare. But “nice, therefore rare” is not at all the same as “rare, therefore nice”.
(We could pick out some other set of models, just as sparse as the linear ones, without the nice properties linear models have. They would form just as small a space of possible models, but they would not be as nice to work with as the linear ones.)
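To make “vector spaces are nice” concrete, here is the generic illustration (a standard fact, nothing specific to your setting): if $L$ is a linear operator and $Lf = 0$ and $Lg = 0$, then for any scalars $a, b$

$$L(af + bg) = aLf + bLg = 0.$$

The solutions form a vector space, so you can build every solution out of a basis of simple ones; that single identity is what Fourier series, Green’s functions and eigenvector decompositions all exploit. Nothing analogous comes for free from merely not being linear.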
Of course nonlinear models don’t have general formulae that always work: they’re just defined as what is NOT linear.
If you mean that being nonlinear doesn’t guarantee anything useful, of course that’s right (and this is the same point about “nonapples” being made by the original article here). Particular classes of nonlinear models might have general formulae, a possibility we’ll come to in a moment.
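For a concrete instance of such a class (a stock textbook example, not one from your comment): the logistic equation

$$\frac{dy}{dt} = y(1-y)$$

is nonlinear, yet has the explicit solution

$$y(t) = \frac{1}{1 + Ce^{-t}}$$

(with $C$ fixed by the initial condition), because it belongs to a special class — separable, indeed Bernoulli — with its own solution formulae. The formulae come from the special structure of that class, not from nonlinearity as such.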
In other words, linear models are severely restricted in the form they can have.
I’m not sure what that is meant to be “other words” for; but yes, being linear is a severe restriction.
When we define another subset of models suitable to the specific thing being modelled, then we will just as easily be able to come up with a set of explicit symbolic formulae.
No. Not unless we cheat by e.g. defining some symbol to mean “a function satisfying this funky nonlinear condition we happen to be working with right now”. (Which mathematicians sometimes do, if the same funky nonlinear condition comes up often enough. But (1) this is a special case and (2) it still doesn’t get you anything as nice and easy to deal with as linearity does.)
In general, having a narrowly specified set of models suitable to a specific physical phenomenon is no guarantee at all of exact explicit symbolic formulae.
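A concrete example of the kind of “cheat” I mean (a standard one, not anything you raised): the equation $xe^x = a$ has no solution in terms of elementary functions, so mathematicians define the Lambert W function by $W(a)\,e^{W(a)} = a$ and then write the “explicit formula” $x = W(a)$. That is genuinely useful once the same condition turns up often enough to deserve a name, but it is nowhere near the across-the-board machinery (superposition, matrices, eigenvalues, transforms) that linearity gives you.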
Then it will be just as “tractable” as linear models, even though it’s nonlinear: simply because it has different special properties.
No. Those different special properties may be much less useful than linearity. Linearity is a big deal because it is so very useful. The space of solutions to, I dunno, let’s say the Navier-Stokes equations in a given region and with given boundary conditions is highly constrained; but it isn’t constrained in ways that (at least so far as mathematicians have been able to figure out) are as useful as linearity.
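To see where the useful constraint goes missing (again a standard observation, nothing specific to your comment): in the incompressible Navier-Stokes equations

$$\partial_t u + (u \cdot \nabla)u = -\nabla p + \nu \nabla^2 u, \qquad \nabla \cdot u = 0,$$

every term is linear in $u$ except the advection term $(u \cdot \nabla)u$, and that one term kills superposition: if $u_1$ and $u_2$ are solutions, $u_1 + u_2$ generally is not, because the cross terms $(u_1 \cdot \nabla)u_2 + (u_2 \cdot \nabla)u_1$ don’t cancel. So the solution set is constrained, but not constrained in the vector-space way that makes linear problems tractable.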
So I don’t agree at all that “largely there should be some transformed domain where the model turns out to be simple”. Sometimes that happens, but usually not.