AIXI isn’t a practically realisable model due to its incomputability, but it has nice optimality results, and it gives you an ideal model of intelligence that you can approximate (https://arxiv.org/abs/0909.0801). It uses a universal Bayesian mixture over environments with the Solomonoff prior (in some sense the best choice of prior), which lets it learn, in a sense you can make formal, as fast as any agent possibly could. There’s some recent work on building practical approximations using deep learning instead of the CTW mixture (https://arxiv.org/html/2401.14953v1).
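Concretely (writing this out from memory, so take the exact indexing with a grain of salt), the AIXI action choice at step k with horizon m looks roughly like:

a_k = argmax_{a_k} sum_{o_k r_k} ... max_{a_m} sum_{o_m r_m} [r_k + ... + r_m] * sum_{q : U(q, a_1..a_m) = o_1 r_1 ... o_m r_m} 2^{-len(q)}

i.e. expectimax planning where each candidate environment is a program q for a universal machine U, weighted by the Solomonoff prior 2^{-len(q)}, so shorter programs consistent with the history dominate the mixture.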
(Sorry for the lazy formatting, I’m on a phone right now. Maybe now is the time to get around to making a website for people to link to.)