One thing I will note is that I’m not sure why they say AGI has its roots in Solomonoff’s induction paper. There is such a huge variety in approaches to AGI… what do they all have to do with that paper?
AIXI is based on Solomonoff, and to the extent that you regard all other AGIs as approximations to AIXI...
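(For reference, a sketch of the AIXI expectimax expression in Hutter's notation, where $U$ is a universal monotone machine and $\ell(q)$ is the length of program $q$:

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \,(r_t + \cdots + r_m) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

i.e., Solomonoff's program-length prior supplies the environment model inside an expectimax planner, which is the sense in which AIXI "is based on" Solomonoff.)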
Gotcha.
Or to look at it another way, Solomonoff induction was the first mathematical specification of a system that could, in principle if not in the physical universe, learn anything learnable by a computable system.
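(For concreteness, a sketch of the standard definition, with $U$ a universal prefix machine: the universal prior weights each program by its length,

$$M(x) = \sum_{p\,:\,U(p) = x*} 2^{-|p|},$$

summing over programs $p$ whose output begins with $x$. Any computable regularity in the data is captured by some program in that sum, which is why nothing learnable by a computable system escapes it.)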
I think the interesting feature of Solomonoff induction is that it does no worse (up to a constant) than any other predictor from the same class (lower-semicomputable semimeasures), not just predictors from a lower class (such as computable humans). I'm currently trying to solve a related problem where it's easy to devise an agent that beats all humans, but difficult to devise one that's optimal in its own class.
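(To state that precisely, sketching the standard dominance result: the universal semimeasure $M$ multiplicatively dominates every lower-semicomputable semimeasure $\nu$,

$$M(x) \ge 2^{-K(\nu)}\,\nu(x) \quad \text{for all } x,$$

so in log-loss terms $-\log M(x) \le -\log \nu(x) + K(\nu)$: $M$'s cumulative loss is within an additive constant of any competitor in its own class, where the constant depends only on $\nu$'s complexity and not on the data.)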
That paragraph is simply wrong.
Well, on the other hand, if AGI is defined as truly universal, Solomonoff seems quite rooty indeed. It's only if you take "general" to mean "general relative to a beaver's brain" that a wide variety of approaches become acceptable.
I estimate brains spend about 80% of their time doing inductive inference (the rest is evaluation, tree-pruning, etc.). Solomonoff's induction is a general theory of inductive inference. Thus the connection.