Artificial intelligence and Solomonoff induction: what to read?

Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology, reads some of Marcus Hutter’s work, comes away unimpressed, and asks for recommendations.
One concept that is sometimes claimed to be of central importance in contemporary AGI research is the so-called AIXI formalism. [...] In the presentation, Hutter advises us to consult his book Universal Artificial Intelligence. Before embarking on that, however, I decided to try one of the two papers that he also directs us to in the presentation, namely his A philosophical treatise of universal induction, coauthored with Samuel Rathmanner and published in the journal Entropy in 2011. After reading the paper, I have moved the reading of Hutter’s book far down my list of priorities, because generalizing from the paper leads me to suspect that the book is not so good.
I find the paper bad. There is nothing wrong with the ambition—to sketch various approaches to induction from Epicurus and onwards, and to try to argue how it all culminates in the concept of Solomonoff induction. There is much to agree with in the paper, such as the untenability of relying on uniform priors and the limited interest of the so-called No Free Lunch Theorems (points I’ve actually made myself in a different setting). The authors’ emphasis on the difficulty of defending induction without resorting to circularity (see the well-known anti-induction joke for a drastic illustration) is laudable. And it’s a nice perspective to view Solomonoff’s prior as a kind of compromise between Epicurus and Ockham, but does this particular point need to be made in quite so many words? Judging from the style of the paper, the word “philosophical” in the title seems to mean something like “characterized by lack of rigor and general verbosity”.4 Here are some examples of my more specific complaints [...]
I still consider it plausible to think that Kolmogorov complexity and Solomonoff induction are relavant to AGI7 (as well as to statistical inference and the theory of science), but the experience of reading Uncertainty & Induction in AGI and A philosophical treatise of universal induction strongly suggests that Hutter’s writings are not the place for me to go in order to learn more about this. But where, then? Can the readers of this blog offer any advice?
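For reference, the Solomonoff prior that the paper builds on has a compact standard form (this is the usual textbook presentation, e.g. in Li and Vitanyi, not notation taken from the paper itself): for a universal prefix Turing machine U and a program p of length \ell(p),

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},

where the sum ranges over all programs whose output begins with x. Every such program contributes, which is the Epicurean half of the compromise; the weight 2^{-\ell(p)} decays exponentially in program length, which is the Ockhamite half.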
My current thinking is that Kolmogorov complexity / Solomonoff induction is probably only a small piece of the AGI puzzle. It seems obvious to me that the ideas are relevant to AGI, but it is hard to tell in exactly what way. I think Hutter correctly recognized the relevance of the ideas, but he tends to exaggerate their importance, and as Olle Häggström observed, he can’t really back up his claims about how central these ideas are.
If Olle wanted to become an FAI researcher, I’d suggest getting an overview of the AIT field from Li and Vitanyi’s textbook. But if he is more interested in what I called “Singularity Strategies” (which, judging from Google translations of his other blog entries, he is) and wants to understand just how Solomonoff induction is relevant to AGI, in order to better understand AI risk and figure out how best to influence the Singularity in a positive direction, I’m afraid nobody has the answers at the moment.
(I wonder if we could convince Olle to join LW? I’d comment on some of Olle’s posts but I’m really wary of personal blogs, which tend to disappear and take all of my comments with them.)
Nothing stops you from setting up some program to archive URLs you visit, which will deal with most comments. I also tend to excerpt my best comments into Evernote, to make them easier to refind.
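Concretely, a minimal sketch of such a setup, assuming a plain-text file urls.txt of URLs to preserve (say, exported from browser history) and using the Internet Archive’s public Save Page Now endpoint; the filename and the pause between requests are illustrative assumptions, not part of anyone’s actual workflow:

```python
# archive_urls.py -- submit a list of URLs to the Internet Archive.
# Assumes urls.txt holds one URL per line (e.g. exported from browser
# history). The Save Page Now endpoint is public but rate-limited.
import time
import urllib.request

def archive(url: str) -> int:
    """Ask the Wayback Machine to snapshot `url`; return the HTTP status."""
    req = urllib.request.Request(
        "https://web.archive.org/save/" + url,
        headers={"User-Agent": "personal-archiver/0.1"},  # illustrative UA
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    with open("urls.txt") as f:
        for line in f:
            url = line.strip()
            if not url:
                continue
            try:
                print(url, archive(url))
            except Exception as e:  # network errors, throttling, etc.
                print(url, "failed:", e)
            time.sleep(5)  # be polite; rapid requests get throttled
```

Pointing something like this at the pages one has commented on covers the “archive URLs you visit” part; the Evernote excerpting is a separate manual habit.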
Random question—is AGI7 a typo, or a term?

Open link, control+f “relavant to AGI”. Get directed to “relavant to AGI7”.
Footnote 7 is “7) I am not a computer scientist, so the following should perhaps be taken with a grain of salt. While I do think that computability and concepts derived from it such as Kolmogorov complexity may be relevant to AGI, I have the feeling that the somewhat more down-to-earth issue of computability in polynomial time is even more likely to be of crucial importance.”