Or to look at it another way, Solomonoff induction was the first mathematical specification of a system that could, in principle if not in the physical universe, learn anything learnable by a computable system.
I think the interesting feature of Solomonoff induction is that it does no worse (up to a constant) than any other object from the same class (lower-semicomputable semimeasures), not just objects from a lower class (e.g. computable humans). I’m currently trying to solve a related problem where it’s easy to devise an agent that beats all humans, but difficult to devise one that’s optimal in its own class.
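To make "does no worse" concrete: the claim rests on multiplicative dominance. Sketching the standard statement (with $M$ the universal lower-semicomputable semimeasure, $\nu$ any semimeasure in the same class, and $K(\nu)$ the prefix complexity of an index for $\nu$):

$$ M(x) \;\geq\; 2^{-K(\nu) + O(1)} \,\nu(x) \qquad \text{for all finite strings } x, $$

so $M$'s predictions trail $\nu$'s by at most a constant penalty that depends only on $\nu$, not on the data seen so far.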
AIXI is based on Solomonoff induction, and to the extent that you regard all other AGIs as approximations to AIXI...
Gotcha.