The amount of entropy that corresponds to real-world information is at best the same in the predictions as in the starting data, and more likely the predictions contain less.
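A minimal numeric sketch of that point, via the data-processing inequality I(X;Z) ≤ I(X;Y): no algorithm applied to the data Y alone can add information about the true state X. The prior, the channel matrices, and the use of numpy here are all invented for illustration.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p[x, y]."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

# Hypothetical setup: a true state X observed through a noisy channel as data Y.
p_x = np.array([0.4, 0.3, 0.2, 0.1])                    # prior over 4 states
channel_xy = np.full((4, 4), 0.05) + 0.80 * np.eye(4)   # p(y|x), rows sum to 1
joint_xy = p_x[:, None] * channel_xy                    # p(x, y)

# A "prediction" Z computed from Y alone, via any further (deterministic or
# stochastic) processing step p(z|y). The data-processing inequality
# guarantees I(X;Z) <= I(X;Y): post-processing cannot create information.
channel_yz = np.full((4, 4), 0.075) + 0.70 * np.eye(4)  # p(z|y), rows sum to 1
joint_xz = joint_xy @ channel_yz                        # p(x, z)

print(f"I(X; data)       = {mutual_information(joint_xy):.4f} bits")
print(f"I(X; prediction) = {mutual_information(joint_xz):.4f} bits")
```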
Another possibility is that after n years the algorithm smooths out the probabilities of all the possible futures so that they are equally likely... The problem is not only computational: unless there are strong pruning heuristics, the value of predicting the far future decays rapidly, since the probability mass (which is conserved) becomes diluted across more and more branches.
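A toy simulation of the dilution argument, with an invented branching factor and horizon: total mass stays exactly 1, but the single most likely future decays roughly geometrically, and the entropy of the leaf distribution climbs toward the uniform-case maximum.

```python
import numpy as np

rng = np.random.default_rng(42)
BRANCHES, STEPS = 3, 10   # invented branching factor and forecast horizon

probs = np.array([1.0])   # the present: a single branch holding all the mass
for step in range(1, STEPS + 1):
    # Every tracked future splits into BRANCHES children; Dirichlet weights
    # sum to 1, so total probability mass is conserved at every step.
    weights = rng.dirichlet(np.ones(BRANCHES), size=probs.size)
    probs = (probs[:, None] * weights).ravel()
    nz = probs[probs > 0]
    entropy = -(nz * np.log2(nz)).sum()
    print(f"step {step:2d}: leaves={probs.size:6d}  mass={probs.sum():.6f}  "
          f"max leaf={probs.max():.2e}  "
          f"entropy={entropy:.2f}/{np.log2(probs.size):.2f} bits")
```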
That’s called chaining the forecasts. This tends to break down after very few iterations because errors snowball and because tail events do happen.
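A quick sketch of the error-snowball half of that failure mode (tail events are the other half and are not modeled here). The "model" below is the chaotic logistic map with a slightly mis-estimated parameter, an invented stand-in for any forecaster whose one-step error is tiny; feeding its own forecasts back in lets that error compound until the prediction is worthless.

```python
# True dynamics vs. a model with a tiny parameter error (values made up).
R_TRUE, R_MODEL = 3.9, 3.9001

def step_true(x):
    return R_TRUE * x * (1 - x)

def step_model(x):
    return R_MODEL * x * (1 - x)

x_true = x_pred = 0.4
for t in range(1, 26):
    x_true = step_true(x_true)
    x_pred = step_model(x_pred)   # chained: feed the previous forecast back in
    if t % 5 == 0:
        print(f"t={t:2d}  true={x_true:.4f}  forecast={x_pred:.4f}  "
              f"error={abs(x_true - x_pred):.1e}")
```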
The right algorithm doesn't give you good results if the data you have isn't good enough.
What do you mean?
Answered at the top.
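To make the claim that opened this exchange concrete (the right algorithm can't rescue bad data), here is a small invented example: least squares is exactly the right estimator for this linear model, yet the recovered slope degrades with the quality of the measurements, not with the choice of algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N, TRUE_SLOPE = 100_000, 2.0

x = rng.normal(size=N)                              # the real quantity
y = TRUE_SLOPE * x + rng.normal(scale=0.1, size=N)  # clean outcome

# Same correct estimator each time, but increasingly noisy measurements of x.
# The attenuation bias grows with the measurement noise, and collecting more
# rows of equally bad data would not remove it.
for noise in (0.0, 0.5, 1.0, 2.0):
    x_obs = x + rng.normal(scale=noise, size=N)
    slope = (x_obs @ y) / (x_obs @ x_obs)           # least squares through origin
    print(f"measurement noise {noise:.1f} -> estimated slope {slope:.3f}")
```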