One reason to doubt chaos theory’s usefulness is that we don’t need fancy theories to tell us something is impossible. Impossibility tends to make itself obvious.
This claim seems really weird to me. Why do you think that's true? A lot of things we have accomplished with technology today might have seemed impossible to someone from 1700. On the other hand, if you didn't know the relevant theory, you could have thought that, e.g., a perpetuum mobile, superluminal motion, deciding whether a graph is 3-colorable in worst-case polynomial time, or transmitting information at a rate above the Shannon-Hartley limit was possible.
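For reference, the Shannon-Hartley limit mentioned above is the capacity of a noisy channel,

$$C = B \log_2\!\left(1 + \frac{S}{N}\right),$$

where $B$ is the bandwidth in hertz and $S/N$ the signal-to-noise power ratio; no coding scheme can reliably transmit above $C$ bits per second, no matter how clever.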
I agree with your examples and larger point. That was a gloss I was never happy with but can’t quite bring myself to remove. I was hoping to articulate the reasons better than this but so far they’re eluding me.
Yeah, I agree with the general point (I don't have a strong opinion about chaos theory at the moment).
First, negative results are really, really important. Mostly because they save you from wasting time trying to do something impossible, and sometimes they actually point you toward an answer. Conservation laws in physics play this role in general. And knowing what is undecidable is really important in formal methods, where the trick is generally to restrict either the property you want to check or the expressive power of your programs in order to sidestep undecidability; see the sketch below.
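Here is a minimal sketch of that restrict-the-language trick in Python (the fragment, its rules, and the function name uses_only_bounded_loops are all hypothetical illustrations, not a real formal-methods tool). Termination is undecidable for arbitrary programs, but if you only accept programs whose loops are for loops over constant range(...) bounds, every accepted program terminates, so the "checker" collapses into a syntactic membership test:

```python
import ast

def uses_only_bounded_loops(source: str) -> bool:
    """Accept only a toy fragment of Python in which every program
    provably terminates, sidestepping the undecidable halting problem."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            return False  # unbounded loops are outside the fragment
        if isinstance(node, ast.For):
            it = node.iter
            ok = (isinstance(it, ast.Call)
                  and isinstance(it.func, ast.Name)
                  and it.func.id == "range"
                  and all(isinstance(a, ast.Constant) for a in it.args))
            if not ok:
                return False  # only range() with literal bounds is allowed
        if isinstance(node, (ast.FunctionDef, ast.Lambda)):
            return False  # no user-defined functions, hence no recursion
    return True

print(uses_only_bounded_loops("for i in range(10):\n    x = i * i"))  # True
print(uses_only_bounded_loops("while True:\n    pass"))               # False
```

The design choice mirrors what real tools do: rather than trying to answer an undecidable question, you shrink the language until the question becomes trivially decidable.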
Then, they are indeed quite hard to prove, at least in non-trivial cases. Conservation laws are the result of literally centuries of reframing and reduction in classical mechanics, leading to the recognition of energy and potential as the concepts unifying everything in physics. Undecidability is the result of some 60 years of metamathematical work to clean up formalisms enough to be able to study these kinds of properties.
The really hot take is that the important boundary is between tractable problems in P and the (conjecturally) intractable NP-complete and harder problems, not the decidability boundary. It is both far more relevant to what you can expect a computer to do in practice, and, at a meta level above, many of the historical results about incompleteness and the inability to define first-order arithmetical truth within first-order arithmetic can be proved from theorems of computability theory.
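To make that boundary concrete, here is a toy Python sketch (the function names are mine, purely illustrative) contrasting the two faces of an NP-complete problem: checking a proposed 3-coloring is a single fast pass over the edges, while the only obvious way to find one enumerates up to 3^n assignments:

```python
from itertools import product

def is_valid_coloring(edges, coloring):
    # Verification is easy: one pass over the edges (polynomial time).
    return all(coloring[u] != coloring[v] for u, v in edges)

def three_color_brute_force(n, edges):
    # Search is the hard part: up to 3**n candidate assignments.
    # No worst-case polynomial algorithm is known; 3-coloring is NP-complete.
    for coloring in product(range(3), repeat=n):
        if is_valid_coloring(edges, coloring):
            return coloring
    return None

# A 4-cycle is 3-colorable (even 2-colorable).
print(three_color_brute_force(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
# K4 (complete graph on 4 vertices) needs 4 colors, so the search fails.
print(three_color_brute_force(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```

The asymmetry is exactly the P vs. NP gap: verifying a certificate is polynomial, but finding one is not known to be.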
As I like to say, computability theory is complexity theory where you allow arbitrary memory and time and want to compare one resource/oracle with another (nondeterminism, time travel, ...): even in a world where you can build arbitrarily large computers and think for arbitrarily long, some resources let you compute strictly more problems than others.
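The standard way to make that comparison precise is relativized computation: $A \le_T B$ ("$A$ is Turing-reducible to $B$") means some machine with an oracle for $B$ decides $A$. The halting problem $\emptyset'$ then sits strictly above every decidable set, $\emptyset <_T \emptyset'$, much as NP-complete problems (conjecturally) sit above P once you reinstate resource bounds.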
But you are correct that intractability/impossibility results derived from a given set of assumptions are really important and useful IRL.