@ Eli:
“Arguably Marcus Hutter’s AIXI should go in this category: for a mind of infinite power, it’s awfully stupid—poor thing can’t even recognize itself in a mirror.”
Have you (or somebody else) mathematically proven this?
(If you have, that’s great and I’d like to see the proof; I’ll pass it on to Hutter, because I’m sure he will be interested. A real proof, I mean. I say this because I see endless intuitions and opinions about Solomonoff induction and AIXI on the internet, and intuitions about models of superintelligent machines like AIXI just don’t cut it. In my experience they very often don’t do what you think they will.)