If you’re trying to model a system, and the results of your model are extremely sensitive to minuscule data errors (i.e. the system is chaotic), and there is no practical way to obtain extremely accurate data, then chaos theory limits the usefulness of the model.
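As a rough illustration (using the logistic map at r = 4 as a stand-in for "a system", since it's the standard toy example of chaos, not anything specific to this discussion), a measurement error of one part in a trillion swamps the forecast within a few dozen steps:

```python
# Minimal sketch: two logistic-map trajectories that start 1e-12 apart.
# The logistic map x -> r*x*(1-x) with r = 4 is a standard chaotic system;
# the tiny initial error grows until the trajectories are uncorrelated.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.3, 0.3 + 1e-12  # the "measurement" differs by one part in a trillion
for n in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if n % 10 == 9:
        print(f"step {n + 1:2d}: |difference| = {abs(x_a - x_b):.3e}")

# The gap grows by roughly a factor of 2 per step, so after ~40 steps
# the two forecasts disagree completely.
```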
This seems like a very underpowered sentence that doesn’t actually need chaos theory. How do you know you’re in a system that is chaotic, as opposed to having shitty sensors or a terrible model? What do you get from the theory, as opposed to the empirical result that your predictions only stay accurate for so long?
[For everyone else: Hastings is addressing these questions more directly. But I’m still interested in what Jay or anyone else has to say].
Let’s try again. Chaotic systems usually don’t do exactly what you want them to, and they almost never do the right thing 1000 times in a row. If you model a system using ordinary modeling techniques, chaos theory can tell you whether the system is going to be finicky and unreliable (in a specific way). This saves you the trouble of actually building a system that won’t work reliably. Basically, it marks off certain areas of solution space as not viable.
Also, there’s Lavarand. It turns out that lava lamps are chaotic.
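To make the "marking off solution space" point concrete, here's a rough sketch (again using the logistic map as a stand-in, with illustrative parameter values of my own choosing): estimate the Lyapunov exponent over a range of the parameter r, and flag the values where it comes out positive, i.e. where the model says behaviour will be finicky and unreliable.

```python
import math

def lyapunov_logistic(r, x0=0.1, burn_in=500, steps=2000):
    """Estimate the Lyapunov exponent of the map x -> r*x*(1-x).

    Positive => nearby trajectories diverge exponentially (chaotic);
    negative => small errors shrink and the model stays predictable.
    """
    x = x0
    for _ in range(burn_in):                # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
    return total / steps

# Mark off the parameter values where the design would be unreliable.
for r in [2.8, 3.2, 3.5, 3.7, 3.9, 4.0]:
    lam = lyapunov_logistic(r)
    verdict = "chaotic - not viable" if lam > 0 else "predictable"
    print(f"r = {r:.1f}: lambda ~ {lam:+.3f}  ({verdict})")
```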
Hastings clearly has more experience with chaos theory than I do (I upvoted his comment). I’m hoping that my rather simplistic grasp of the field might result in a simpler-to-understand answer (that’s still basically correct).
Chaos theory is a branch of math; it characterizes models (equations). If your model is terrible, it can’t help you. What the theory tells you is how wildly your model will react to small perturbations.
AFAIK the only invention made possible by chaos theory is random number generators. If a system you’re modeling is extremely chaotic, you can use its output for random numbers with confidence that nobody will ever be able to model or replicate your system with sufficient precision to reproduce its output.
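As a toy sketch of that idea (the logistic map standing in for the lava lamps, with arbitrary parameters, and no claim of cryptographic strength — a real design like Lavarand uses a physical source plus proper post-processing): iterate a chaotic map and harvest one bit per sample. Anyone trying to reproduce the stream would need the seed to absurd precision.

```python
# Toy illustration only: pull bits out of a chaotic iteration.
# This just shows why chaos helps - replaying the stream requires
# knowing the seed to impossible precision. It is NOT a secure RNG.

def chaotic_bits(seed, n_bits, r=3.99):
    x = seed
    bits = []
    for _ in range(n_bits):
        for _ in range(16):            # iterate a few times between samples
            x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

print("".join(map(str, chaotic_bits(0.123456789, 64))))
```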
For what it’s worth, I think you’re getting downvoted in part because what you write seems to indicate that you didn’t read the post.
That wasn’t well phrased. Oops.