How can we generalize the theory of machine induction—called Solomonoff induction—so that it can use higher-order logics and reason correctly about observation selection effects?
I don’t really understand. What’s with the higher-order logic? Solomonoff induction already uses a Turing-complete reference machine. There’s nothing “higher” than that.
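For reference, the construction behind that claim: Solomonoff’s prior is defined relative to a universal (Turing-complete) monotone machine U, which is the “reference machine” in question. One standard formulation, writing \ell(p) for the length of program p in bits, is

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where the sum runs over the minimal programs p that make U output some string beginning with x.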
I don’t think observation-selection effects need any special treatment via a dedicated reference machine. The conventional approach would be simply to let the agent see the world. That way it finds out about the laws of physics and about observation selection effects. After it has some data, you can see what kind of interpreter it has built for itself, and go from there.
Yes, you could try to manually wire all of this into the reference machine, but with a sufficiently smart agent that process can be automated: let the agent see the world, then inspect what kind of “compiler” it creates for itself. Essentially, this isn’t really an important problem that needs solving by humans.
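A standard result that makes the “compiler” point precise (my gloss, not something argued above) is the invariance theorem: for any two universal reference machines U and V there is a constant c_{UV}, roughly the length of a program that makes U emulate V, such that

    K_U(x) \le K_V(x) + c_{UV}  \quad\text{and}\quad  M_U(x) \ge 2^{-c_{UV}} \, M_V(x)

for every string x. Whatever gets wired into the reference machine shifts the prior by at most a constant factor, and a bounded factor is eventually swamped by the data.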
IMHO, the important problem in this area involves finding a reference machine that best facilitates self-improvement. We need to find which reference machine languages are most easily understood by mechanical programmers, NOT which ones most accurately represent the real world.
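To make “reference machine language” concrete, here is a toy sketch in Python. The reference language is deliberately trivial and not Turing-complete (a “program” is just a bit pattern that gets repeated forever), so it is only a cartoon of Solomonoff induction under those simplifying assumptions, but it shows where the reference machine and the 2^{-length} prior enter a prediction.

    # Toy illustration of where a "reference machine" enters Solomonoff-style
    # prediction.  The reference language here is deliberately trivial (and NOT
    # Turing-complete): a program is a non-empty bit pattern, and "running" it
    # means repeating that pattern forever.  Real Solomonoff induction uses a
    # prefix-free universal machine; this is only a cartoon of the
    # length-weighted mixture, cut off at a maximum program length.
    from itertools import product


    def run(program: str, n: int) -> str:
        """Execute a program on the toy reference machine: repeat its pattern."""
        reps = -(-n // len(program))  # ceiling division
        return (program * reps)[:n]


    def programs(max_len: int):
        """Enumerate every program (bit pattern) of 1..max_len bits."""
        for length in range(1, max_len + 1):
            for bits in product("01", repeat=length):
                yield "".join(bits)


    def predict_next(observed: str, max_len: int = 12) -> dict:
        """Posterior over the next bit: mix all programs consistent with the
        observed string, each weighted by the length prior 2**-len(program)."""
        weights = {"0": 0.0, "1": 0.0}
        for p in programs(max_len):
            output = run(p, len(observed) + 1)
            if output[: len(observed)] == observed:  # program explains the data
                weights[output[len(observed)]] += 2.0 ** -len(p)
        total = weights["0"] + weights["1"]
        if total == 0.0:  # nothing up to max_len explains the data
            return {"0": 0.5, "1": 0.5}
        return {bit: w / total for bit, w in weights.items()}


    if __name__ == "__main__":
        # Short programs dominate the mixture, so the repeating pattern is
        # expected to continue: this prints roughly {'0': 0.89, '1': 0.11}.
        print(predict_next("010101"))

On the input “010101” roughly 89% of the posterior weight lands on programs that continue the pattern with a 0, most of it coming from the two-bit program “01”. Swapping in a different toy language would change which regularities count as “short”, which is the sense in which the choice of reference language matters here.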
So: I am not too worried about the universe being uncomputable.
In the race to superintelligence there are more pressing things to worry about than such possibilities. Those interested in winning that race should prioritise their efforts, with issues like this at the bottom of the heap; otherwise they are more likely to fail.
I don’t think that Solomonoff induction has a problem in this area, but an uncomputable universe is a plausible reading of what the reference to “higher-order logic” was getting at.
Have you read this thread?