Well, let’s consider AIXI-tl properly, mathematically, without the ‘what would I do in its shoes’ idiocy and without the incompetent “let’s just read the verbal summary” approach. The AIXI-tl:
1: looks for a way to get the reward button pressed.
2: actually, not even that; it does not connect itself to any representation of itself inside its model of the world, and can’t model the world going on without itself. It can’t understand death. Its internal model is dualist (see the expression after this list).
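For reference, here is roughly the expectimax expression that plain AIXI optimizes (AIXI-tl approximates it by restricting attention to policies/proofs of length at most l running in time t per step); I’m writing it from Hutter’s definition, so take the exact indexing as a sketch rather than gospel:

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

The dualism is visible in the structure itself: each environment hypothesis q is a program on the reference machine U that takes the actions a_1…a_m as free external inputs and emits the percepts and rewards. Nothing inside any q computes the agent, so no hypothesis the agent can entertain represents its own hardware being shut down or rewritten.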
It is an AI that won’t stop you from shutting it down. If you try to resolve problem 2, you hit another very hard problem, wireheading: once the agent models its reward signal as something produced by a piece of the world it can act on, plain reward maximization favours seizing that reward channel directly over doing whatever task the button was supposed to stand for.
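A toy sketch of that last point (purely illustrative; the action names and numbers are made up, this is not AIXI-tl): an expected-reward maximizer whose world-model includes its own reward channel as a manipulable object picks the wirehead action.

    # Hypothetical world-model: action -> list of (probability, per-step reward).
    # "seize_channel" stands for the agent setting its own reward wire to maximum.
    REWARD_MAX = 1.0
    world_model = {
        "do_task":       [(0.9, 0.6), (0.1, 0.0)],
        "seize_channel": [(1.0, REWARD_MAX)],
    }

    def expected_return(action, horizon=10):
        # Expected total reward over the horizon under the toy model.
        return horizon * sum(p * r for p, r in world_model[action])

    best = max(world_model, key=expected_return)
    print(best)  # -> seize_channel: the maximizer wireheads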
Those two problems naturally stand in the way of creating an AI that kills everyone, or an AI that wants to bring about heaven on earth, but they are entirely irrelevant to the creation of useful AI in general. Thus the alternative approach to AI risk reduction is to withdraw all funding from SI or from any other organization working on philosophy of mind for AI, since those organizations create the risk of an AGI that solves these two very hard problems, the very problems that keep an arbitrary useful AI from killing us all.
Someone sent me this anonymous suggestion:
Just a guess, but this sounds very much like.