Whether it does or does not isn’t important to the main argument here.
Critics might have a role to play for a resource-limited agent: for instance, if they pointed out short explanations that were not yet receiving proper consideration, or if they supplied more data.
If consistent data makes a theory more probable, I might have expected a theory that has survived (non-empirical) criticism to become more probable. Because you are an empiricist, you relegate criticism to a minor role when in fact criticism is a major driving force in science. Most theories don’t get tested empirically; they are refuted by criticism alone. Critical rationalism knows this.
Also some theories can’t be refuted by empirical means, so what does Solomonoff Induction do about those?
It says to prefer the shorter one.
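Concretely, the preference for shorter theories comes from the length-based prior, which weights a hypothesis expressible as an L-bit program by 2^-L. A minimal sketch of what that buys you (the specific bit lengths here are invented for illustration):

```python
def length_prior(length_bits):
    # Solomonoff-style prior: a hypothesis encodable as an
    # L-bit program gets prior weight 2**-L.
    return 2.0 ** -length_bits

# Two hypothetical theories that fit the observed data equally well;
# the 100-bit and 120-bit lengths are made up for this example.
short_theory = length_prior(100)
long_theory = length_prior(120)

# Since both fit the data, their relative posterior follows the prior:
print(short_theory / long_theory)  # 2**20, about a million to one
```

So "prefer the shorter one" is not a tie-break rule bolted on at the end; the preference is quantitative, and every extra bit of description length halves a theory's weight.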
Is that it? And how is the algorithm supposed to work anyway? If the theory is non-empirical, it can’t be a compression of an empirical dataset.
Because you are an empiricist, you relegate criticism to a minor role when in fact criticism is a major driving force in science.
Checking with the definition: that apparently boils down to whether I think there is much innate knowledge. Humans have some innate knowledge, so I figure: probably not an empiricist.
I have no particular beef with criticism. Solomonoff induction is not given as a model of how humans actually do science. It is given as a formalisation of the maths of induction.
And how is the algorithm supposed to work anyway? If the theory is non-empirical, it can’t be a compression of an empirical dataset.
Theories are constructed from datasets. Solomonoff induction is an abstract model of sequence prediction. Given a serial stream of sense data, it maintains models of it, and uses those models to predict future observations. The models embody theories about what is being observed, and smaller models are preferred.
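As a rough sketch of that mechanism: true Solomonoff induction enumerates all programs for a universal machine and is uncomputable, but the idea can be caricatured with a finite hypothesis class. Here the "programs" are just repeating binary patterns (an assumption made purely for illustration), each weighted by 2^-length, and any pattern inconsistent with the observations drops out:

```python
from itertools import product

def predict_next(observed, max_len=8):
    """Toy length-weighted next-bit prediction.

    Hypotheses are binary patterns repeated forever, standing in for
    programs; a pattern of length L gets prior weight 2**-L, and
    hypotheses that contradict the observations contribute nothing.
    """
    weights = {0: 0.0, 1: 0.0}
    n = len(observed)
    for length in range(1, max_len + 1):
        for bits in product([0, 1], repeat=length):
            # The "program": repeat the pattern to cover the data plus
            # one more symbol, which is its prediction.
            stream = [bits[i % length] for i in range(n + 1)]
            if stream[:n] == list(observed):          # consistent so far?
                weights[stream[n]] += 2.0 ** -length  # shorter => heavier
    total = weights[0] + weights[1]
    return {b: w / total for b, w in weights.items()}

print(predict_next([0, 1, 0, 1, 0]))  # puts most probability on 1
```

The real construction replaces "repeating patterns" with all programs for a universal Turing machine, which is exactly what makes it an idealisation rather than a practical algorithm.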
Solomonoff Induction is empiricist because it assumes all knowledge comes from the data. Theories arising from Solomonoff Induction are at most as reliable as the data: it can’t come up with theories that make more precise predictions than the data, or that contain more knowledge than the data. This is complicated by the fact that in real-life applications it will have to deal with noise in the data, and this is going to get deeply subjective very quickly.
Another problem is: how is the dataset itself constructed? You don’t just go out and collect data; you need to know what you are looking for. Among the infinite number of things you can observe, you need to know what is important, and to know this you need a theory. Where does this theory come from? It arises as a conjectural explanation in response to a problem-situation, and specific predictions arising from the explanation guide your observations. So Solomonoff Induction has things backward.
Solomonoff Induction is just about prediction. It models a forecasting agent that observes a stream and emits probabilities for the next symbol. It doesn’t do anything else. Complaining that it can’t create its own experiments seems rather futile. Of course it can’t: it is a forecaster. Real agents do more than just forecast, of course, but that isn’t a criticism of forecasting, or of the idea of a forecaster.
If Solomonoff Induction does not discard theories inconsistent with the data, then this is wrong:
http://wiki.lesswrong.com/wiki/Solomonoff_induction