The Carnot engine is an abstract description of a maximally efficient heat engine. You can’t make your car engine more efficient by writing thermodynamic equations on the engine casing.
The Solomonoff Inductor is an abstract description of an optimal reasoner. Memorizing the equations doesn’t automagically make your reasoning better. The human brain is a kludge of non-modifiable, special-purpose hardware, with no clear line between entering data and changing software. Humans are capable of taking in a rule of thumb and making somewhat better decisions based on it. Humans can take in Occam’s razor, the advice to “prefer simple, mathematical hypotheses”, and intermittently act on it, sometimes down-weighting a complex hypothesis when a simpler one comes to mind. Humans can sometimes produce these sorts of rules of thumb from an understanding of Solomonoff Induction.
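To make that rule of thumb concrete, here is a minimal sketch of what “down-weighting a complex hypothesis” looks like as a simplicity prior, prior(h) ∝ 2^(−description length of h), combined with ordinary Bayesian updating. The hypotheses, description lengths, and likelihoods below are all made up for illustration:

```python
# Toy illustration: a simplicity prior over hypotheses, the rule of
# thumb that Solomonoff Induction formalizes. All numbers are invented.
hypotheses = {
    # name: (description length in bits, P(observed 10 heads | hypothesis))
    "fair coin":          (4, 0.5 ** 10),
    "always heads":       (3, 1.0),
    "heads iff prime":    (12, 0.0),   # contradicted by the data
    "complex conspiracy": (50, 1.0),   # fits perfectly, but very long
}

def posterior(hypotheses):
    # Unnormalized posterior: 2^(-length) * likelihood, then normalize.
    scores = {h: 2.0 ** -length * lik
              for h, (length, lik) in hypotheses.items()}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

for h, p in sorted(posterior(hypotheses).items(), key=lambda kv: -kv[1]):
    print(f"{h:20s} {p:.6f}")
```

The “complex conspiracy” hypothesis fits the data exactly as well as “always heads”, but its 50-bit description costs it a factor of 2^47 in prior, so it ends up with almost no posterior weight.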
It’s like how reading a book about optics doesn’t automatically make your eyes better, but if you know the optics, you can sometimes work out how your vision is distorted and say “that line looks bent, but it’s actually straight”.
If you want to try making workarounds and patches for the bug-riddled mess of human cognition, knowing Solomonoff Induction is somewhat useful as a target and a source of inspiration.
If you somehow had an infinitely fast computer to run it on, Solomonoff Induction would be incredibly effective, more effective than any other algorithm.
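Concretely, the quantity that infinitely fast computer would be evaluating is Solomonoff’s universal prior, where U is a universal prefix machine, |p| is the length of program p, and the sum runs over every program whose output begins with the observed string x:

```latex
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},
\qquad
P(\text{next bit} = 1 \mid x) = \frac{M(x1)}{M(x)}
```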
I would expect any good AI design to tend to Solomonoff Induction (or something like it?) in the limit of infinite compute (and assuming acausal effects don’t exist?). I would expect a good AI designer to know about Solomonoff Induction, in much the same way I would expect a good car engine designer to know about the Carnot engine.
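As a toy illustration of “tending to Solomonoff Induction in the limit of infinite compute”, here is a sketch in Python. The program class is a stand-in I chose for tractability: each “program” is just a bit pattern that outputs its own infinite repetition, with description length equal to the pattern length. A real inductor would enumerate programs for a universal machine; raising max_len plays the role of throwing more compute at the problem:

```python
from itertools import product

def consistent(pattern, observed):
    # A "program" here is a bit pattern that outputs its own infinite
    # repetition; check that it reproduces the observed prefix.
    return all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed)))

def predict_next(observed, max_len=12):
    # Mix all consistent programs, weighted by 2^(-program length),
    # so shorter (simpler) programs dominate the prediction.
    weight_one = weight_total = 0.0
    for k in range(1, max_len + 1):
        for pattern in product((0, 1), repeat=k):
            if consistent(pattern, observed):
                w = 2.0 ** -k
                weight_total += w
                if pattern[len(observed) % k] == 1:
                    weight_one += w
    return weight_one / weight_total  # P(next bit is 1)

# After seeing 01010101, the 2-bit program "01" carries most of the
# weight, so the mixture predicts 0 with roughly 98% confidence.
print(predict_next([0, 1, 0, 1, 0, 1, 0, 1]))
```

With max_len=12 this runs instantly; the point is only that the same mixture-over-programs shape improves with however much compute you can throw at it.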