A maximizer can care about and optimize the state of the universe rather than just its own perceptions
Well, that would certainly be nice. There’s no proof or working demonstration of this, though—and so nobody really knows if it is true or not.
It seems as though you could build a machine whose religion is to optimize the state of the universe. However, most religions we know about lack long-term stability around very smart agents.