Have you looked at AIXI?
It is trivially true that one instance of AIXI can’t comprehend another, if by “comprehend” you mean form an accurate model.
AIXI assumes its environment is computable, yet AIXI itself is incomputable. So if one instance of AIXI encounters another, it cannot form a true model of it.
However, I’m not sure how much weight this argument carries, since we expect real-world intelligence to be computable.
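To make the point concrete, here is a toy, finite sketch of the kind of Bayesian mixture AIXI uses. True Solomonoff induction mixes over *all* computable programs weighted by 2^-length, which is exactly why it is incomputable and why AIXI cannot appear in its own hypothesis class. The program names and the uniform prior below are illustrative stand-ins, not anything from the AIXI formalism itself:

```python
# Hypothetical finite hypothesis class: each "program" maps a history
# (tuple of past bits) to the probability that the next bit is 1.
# Real Solomonoff induction enumerates all programs, which no program can do.
programs = {
    "always_one": lambda history: 1.0,
    "always_zero": lambda history: 0.0,
    "repeat_last": lambda history: float(history[-1]) if history else 0.5,
}

# Solomonoff weights each program 2^-(code length); here we just use a
# uniform prior over our three stand-in predictors.
weights = {name: 1.0 / len(programs) for name in programs}

def predict(history):
    """Posterior-weighted probability that the next bit is 1."""
    total = sum(weights.values())
    return sum(w * programs[n](history) for n, w in weights.items()) / total

def update(history, bit):
    """Bayesian update: reweight each program by how well it predicted `bit`."""
    for name in weights:
        p1 = programs[name](history)
        weights[name] *= p1 if bit == 1 else (1.0 - p1)

# Feed in an all-ones sequence: "always_one" comes to dominate the mixture.
history = ()
for _ in range(5):
    update(history, 1)
    history += (1,)

print(predict(history))  # close to 1.0: the mixture now strongly predicts 1
```

The sketch only works because the hypothesis class is finite and every hypothesis is computable; an environment containing another AIXI is outside any such class, which is the incomputability point above.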