My current thinking is that Kolmogorov complexity / Solomonoff induction is probably only a small piece of the AGI puzzle. It seems obvious to me that the ideas are relevant to AGI, but it's hard to tell in exactly what way. I think Hutter correctly recognized the relevance of the ideas, but tends to exaggerate their importance, and, as Olle Häggström recognized, can't really back up his claims about how central these ideas are.
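(For readers who haven't met the formalism, here is a minimal toy sketch of the core idea behind the Solomonoff prior: hypotheses are programs, each program of length ℓ bits gets prior weight 2^-ℓ, and only programs consistent with the observations are kept, so shorter explanations dominate the prediction. This is my own illustration with a few hard-coded stand-in "programs", not anything resembling a real implementation, since the true prior ranges over all programs and is uncomputable.)

```python
# Toy illustration of a Solomonoff-style prior (hypothetical example, not a real
# implementation): treat a few bit strings as "programs", weight each by 2^-length,
# and keep only those whose output is consistent with the observed data.

def toy_programs():
    """Stand-in for a universal machine: maps hard-coded bit strings to their outputs."""
    return {
        "0":    "000000",   # "always 0"
        "01":   "010101",   # "alternate 0 and 1"
        "0110": "011011",   # a longer, more specific hypothesis
    }

def posterior(observed):
    """Weight each program consistent with the observations by 2^-length, then normalize."""
    weights = {p: 2.0 ** -len(p)
               for p, out in toy_programs().items()
               if out.startswith(observed)}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

if __name__ == "__main__":
    # After observing "01", the shorter "alternate" program gets most of the posterior mass.
    print(posterior("01"))   # {'01': 0.8, '0110': 0.2}
```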
If Olle wanted to become an FAI researcher, I'd suggest getting an overview of the AIT field from Li and Vitanyi's textbook. But if he is more interested in what I called "Singularity Strategies" (which, judging from Google translations of his other blog entries, it sounds like he is) and wants to understand just how Solomonoff induction is relevant to AGI, in order to better understand AI risk and figure out how to best influence the Singularity in a positive direction, I'm afraid nobody has the answers at the moment.
(I wonder if we could convince Olle to join LW? I’d comment on some of Olle’s posts but I’m really wary of personal blogs, which tend to disappear and take all of my comments with them.)
I’d comment on some of Olle’s posts but I’m really wary of personal blogs, which tend to disappear and take all of my comments with them.
Nothing stops you from setting up some program to archive the URLs you visit, which will deal with most comments. I also tend to excerpt my best comments into Evernote, to make them easier to refind.
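As a concrete sketch of the archiving suggestion (assuming the Internet Archive's "Save Page Now" endpoint still accepts plain GET requests at https://web.archive.org/save/&lt;url&gt;; the example permalink below is hypothetical), something like this could be run over a browser-history export or a list of comment permalinks:

```python
# Minimal sketch of archiving a list of URLs via the Internet Archive's
# "Save Page Now" endpoint (https://web.archive.org/save/<url>).
# Rate limits and authentication requirements may differ in practice,
# so treat this as illustrative only.

import time
import urllib.request

def archive(urls, delay_seconds=10):
    """Request an archive snapshot for each URL, pausing between requests."""
    for url in urls:
        try:
            req = urllib.request.Request(
                "https://web.archive.org/save/" + url,
                headers={"User-Agent": "comment-archiver-sketch"},
            )
            with urllib.request.urlopen(req, timeout=60) as resp:
                print(resp.status, url)
        except Exception as exc:  # network errors, rate limiting, etc.
            print("failed:", url, exc)
        time.sleep(delay_seconds)  # be polite to the archive

if __name__ == "__main__":
    # e.g. permalinks of comments you want to keep a copy of (hypothetical URL)
    archive(["https://www.example.com/some-blog-post#comment-1"])
```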