It has been observed on Less Wrong that a physical, approximate implementation of AIXI is unable to reason about its own embedding in the universe, and therefore is apt to make certain mistakes: for example, it is likely to destroy itself for spare parts, and is unable to recognize itself in a mirror.
So: the approach taken by nature is to wall off the brain behind a protective barrier and surround it with pain sensors. Not a final solution—but one that could take machines beyond the human level—at which point there will be many more minds available to work on the problem.