[Question] How should we model complex systems?
By “complex”, I mean a system that would be too computationally costly to model from first principles, e.g. the economy or the climate (my field, by the way). Suppose our goal is to predict the system’s future behaviour with the minimum possible error under some metric (e.g. minimising the mean square error, or maximising the likelihood). This seems like something we would want to do in an optimal way, and also something a superintelligence should have a strategy for, so I thought I’d ask here whether anyone has worked on this problem.
I’ve read quite a bit about how we can optimally try to deduce the truth, e.g. by applying Bayes’ theorem with a prior set following Ockham’s razor (cf. Solomonoff induction). However, it seems difficult to me to apply this to modelling complex systems, even as an idealisation, for two reasons:
1. Since we cannot afford to model the true equations, every member of the set of models available to us is false, so the likelihood of each model, and hence its posterior probability of being true, will typically shrink towards zero given enough observed data. So if we want to use Bayes’ theorem, the probabilities should not mean the probability of each model being true. But it’s not clear to me what they should mean instead; perhaps the probability that each model will give the prediction with the lowest error? But then it’s not clear how to do the updating, if the normal likelihoods will typically be zero. (I’ve put a toy numerical sketch of this after the list.)
2. It’s not clear that Ockham’s razor will be a good guide when assigning prior probabilities to our models. Its use seems motivated by how well it works for deducing fundamental laws of nature. However, for modelling complex systems it seems more reasonable to me to give more weight to models that incorporate what we understand to be the important processes, and past observations can’t necessarily tell us which processes are important to include, because different processes may become important in the future (cf. biological feedbacks that may kick in as the climate warms). This could perhaps be done by having a strategy for deriving approximate, affordable models from the fundamental laws, but is it possible to say anything about how an agent should do this? (The second sketch below illustrates the contrast between the two kinds of prior.)
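To make the difficulty in (1) concrete, here is a minimal toy sketch of the kind of Bayesian updating I have in mind, applied to a deliberately misspecified set of models. Everything in it (the invented system, the candidate models, the parameter values) is made up purely for illustration. Working in log space just stops the raw likelihoods underflowing to zero numerically; the resulting weights then measure relative fit on past data rather than any model’s probability of being true.

```python
import numpy as np

rng = np.random.default_rng(0)

# An invented "complex" system we pretend we cannot afford to model exactly:
# a slowly decaying, noisy oscillation.
T = 200
t = np.arange(T)
truth = np.exp(-0.01 * t) * np.sin(0.3 * t) + 0.05 * rng.standard_normal(T)

# Candidate (misspecified) models: x_{t+1} ~ Normal(a * x_t, sigma^2).
# None of them matches the true dynamics.
candidates = [
    {"name": "persistence",  "a": 1.00, "sigma": 0.10},
    {"name": "weak decay",   "a": 0.95, "sigma": 0.10},
    {"name": "strong decay", "a": 0.70, "sigma": 0.10},
]

# Uniform prior over the candidates, kept in log space throughout.
log_post = np.log(np.full(len(candidates), 1.0 / len(candidates)))

for i in range(T - 1):
    x_now, x_next = truth[i], truth[i + 1]
    for k, m in enumerate(candidates):
        mean = m["a"] * x_now
        # Gaussian log-likelihood of the observed next step under model k.
        log_lik = (-0.5 * ((x_next - mean) / m["sigma"]) ** 2
                   - np.log(m["sigma"] * np.sqrt(2.0 * np.pi)))
        log_post[k] += log_lik
    # Renormalise in log space so the weights stay finite even though every
    # raw likelihood would eventually underflow to zero.
    log_post -= np.max(log_post)
    log_post -= np.log(np.sum(np.exp(log_post)))

for m, w in zip(candidates, np.exp(log_post)):
    print(f"{m['name']:>12}: posterior weight {w:.3f}")
```

The weight piles up on whichever candidate happens to be least wrong on the data seen so far, which is exactly why I’m unsure what these numbers should be taken to mean for predictions of the future.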
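And for (2), the contrast I have in mind between an Ockham-style prior and a “process-informed” prior might look something like the following. Again, the models, the halving-per-parameter penalty and the factor-of-4 bonus are entirely made up; I don’t know of a principled way to choose them, which is really the question.

```python
import numpy as np

models = [
    {"name": "2-parameter, no feedback",   "n_params": 2, "has_feedback": False},
    {"name": "4-parameter, no feedback",   "n_params": 4, "has_feedback": False},
    {"name": "5-parameter, with feedback", "n_params": 5, "has_feedback": True},
]

# Ockham-style prior: weight halves for each extra free parameter
# (a crude stand-in for a description-length penalty).
ockham = np.array([2.0 ** (-m["n_params"]) for m in models])
ockham /= ockham.sum()

# Process-informed prior: boost models that include a process we judge, on
# physical grounds, will matter in future even if past data cannot show it
# (e.g. a feedback that only kicks in outside previously observed conditions).
bonus = np.array([4.0 if m["has_feedback"] else 1.0 for m in models])
informed = ockham * bonus
informed /= informed.sum()

for m, p_o, p_i in zip(models, ockham, informed):
    print(f"{m['name']:>28}: Ockham {p_o:.2f} | process-informed {p_i:.2f}")
```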
I’ve not found anything about rational strategies for approximately modelling complex systems, as opposed to deriving true models. Thank you very much for any thoughts and resources you can share.