Theoretical example.
Somewhere in space there is an ASI flying as a cloud of nanobots that continuously simulates the future. It does this to know all the risks and opportunities of an event in advance, so that it can do research more effectively, for example by avoiding wasted time. Of course, this only makes sense if modeling the future uses fewer resources than it saves. There is only one problem: its sensors indicate that there is no other ASI within thousands of parsecs. But there is a tiny probability (0.0...1%) that another ASI will suddenly appear right next to it, using a teleport the first intellect knows nothing about. The calculation shows a 0.0...1% probability of such an appearance and a 5% probability that the other ASI would destroy the first algorithm. What will the first algorithm choose: wasting resources on a problem with very low probability, or accepting that probability of destruction?
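A minimal sketch of the trade-off in this thought experiment, done as a plain expected-value comparison; every number below (the appearance probability, the destruction probability, and both costs) is a made-up placeholder, since the text only gives "0.0...1%" and 5%:

```python
# Toy expected-value comparison for the thought experiment above.
# Every number here is a hypothetical placeholder, not a value from the text.

p_appear = 1e-9                # chance another ASI appears via the unknown teleport
p_destroy_given_appear = 0.05  # chance it destroys the first ASI, given it appears
cost_of_destruction = 1e12     # utility lost if destroyed (arbitrary units)
cost_of_modeling = 1.0         # resources spent simulating this scenario

expected_loss_if_ignored = p_appear * p_destroy_given_appear * cost_of_destruction

print(f"expected loss if ignored:  {expected_loss_if_ignored:.3f}")
print(f"cost of modeling scenario: {cost_of_modeling:.3f}")

# Spend resources on the scenario only if the expected loss exceeds the cost.
if expected_loss_if_ignored > cost_of_modeling:
    print("worth modeling the low-probability threat")
else:
    print("cheaper to accept the small probability of destruction")
```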
In general, the algorithm can create a lot of markers that it will have to check against the real world, and these markers keep its probabilistic models correct over time.
So you can build a model in which the regions of highest probability density are verified with the densest coverage of markers, selecting those markers with genetic algorithms.
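A rough sketch of that idea, assuming "markers" are just checkable point predictions and that a marker's fitness is the modeled probability density at that point; the density function, mutation scheme, and all parameters here are illustrative choices, not anything specified above:

```python
import random

# Hypothetical probability density over a 1-D "future state" axis:
# two bumps, one much more likely than the other.
def density(x):
    return 0.9 * max(0.0, 1 - abs(x - 2.0)) + 0.1 * max(0.0, 1 - abs(x + 3.0))

def evolve_markers(pop_size=50, generations=100):
    """Genetic-algorithm-style search that concentrates markers
    (checkable predictions) where the modeled density is highest."""
    population = [random.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness = modeled probability density at the marker.
        scored = sorted(population, key=density, reverse=True)
        parents = scored[: pop_size // 2]                        # keep the densest half
        children = [p + random.gauss(0, 0.2) for p in parents]   # mutate to explore nearby
        population = parents + children
    return population

markers = evolve_markers()
print(sum(1 for m in markers if abs(m - 2.0) < 1) / len(markers),
      "fraction of markers near the high-density region")
```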
What does that phrase mean?
That’s called chaining the forecasts. This tends to break down after very few iterations because errors snowball and because tail events do happen.
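A toy illustration of how chained forecasts break down, assuming a simple exponential system and a model whose growth rate is slightly off (both rates and the horizon are made up):

```python
# Iterated (chained) forecasting with a slightly mis-specified model:
# the one-step error is tiny, but it compounds at every step.

true_rate = 1.05       # the real system grows 5% per step
model_rate = 1.06      # our model thinks it grows 6% per step (small error)

state = forecast = 1.0
for step in range(1, 51):
    state *= true_rate        # what actually happens
    forecast *= model_rate    # what the chained forecast predicts
    if step in (1, 10, 25, 50):
        rel_error = abs(forecast - state) / state
        print(f"step {step:2d}: relative error = {rel_error:.1%}")
```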
The right algorithm doesn’t give you good results if the data you have isn’t good enough.
What do you mean?
The amount of entropy corresponding to real-world information in the predictions is at best the same as in the starting data, and most likely the predictions contain less information.
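A minimal sketch of that point, assuming the predictions are a deterministic function of the starting data, so processing can merge world-states but never split them:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Hypothetical "starting data": 8 equally likely world-states.
data = list(range(8)) * 100

# A deterministic prediction can only merge states, never create new ones,
# so its output entropy is at most the input entropy.
predictions = [x // 2 for x in data]   # coarse-grained forecast

print(f"entropy of data:        {entropy(data):.2f} bits")
print(f"entropy of predictions: {entropy(predictions):.2f} bits")
```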
Another possibility is that after n years the algorithm's probability estimates smooth out across all the possible futures until they are roughly equally likely...
The problem is not only computational: unless there are some strong pruning heuristics, the value of predicting the far future decays rapidly, since the probability mass (which is conserved) becomes diluted between more and more branches.
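A quick way to see the dilution, assuming a constant branching factor per time step (the factor of 3 and the horizons are arbitrary):

```python
# Probability mass is conserved, so with a branching factor b the mass
# available to any single branch after n steps is at most b**-n.

b = 3          # hypothetical branching factor per time step
for n in (1, 5, 10, 20):
    per_branch = b ** -n
    print(f"after {n:2d} steps: {b**n:>12} branches, "
          f"<= {per_branch:.2e} probability mass each")
```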
Answered above.