However, information-theoretic groundings only talk about probability, not about “goals” or “agents” or anything utility-like. Here, we’ve transformed expected utility maximization into something explicitly information-theoretic and conceptually natural.
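To make the claim concrete, here is a minimal sketch of the correspondence, assuming bounded utilities and the usual identification of description length with negative log probability (the symbols $P_2$ and $Z$ are my notation, not necessarily the post's):

```latex
% Given a utility u with finite normalizer Z, define a "goal distribution":
\[
  P_2(X) \;=\; \frac{e^{u(X)}}{Z}, \qquad Z = \sum_X e^{u(X)}.
\]
% Its optimal code length is
\[
  L(X) \;=\; -\log P_2(X) \;=\; -u(X) + \log Z,
\]
% so for any distribution over outcomes X induced by a choice of actions,
\[
  \arg\max \; \mathbb{E}[u(X)] \;=\; \arg\min \; \mathbb{E}[L(X)],
\]
% since log Z is constant: maximizing expected utility is exactly minimizing
% expected description length under the model P_2.
```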
This interpretation of model fitting formalizes goal pursuit, and it looks well constructed. I like it as a step forward in addressing my concern about the terminology AI researchers use.
I imagine that negentropy could serve as a universal “resource”, replacing the “dollars” typically used as a measuring stick in coherence theorems.
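One hedged way to cash this out: negentropy is a system's entropy deficit relative to equilibrium, and Landauer's bound converts it into extractable work, which is what makes it resource-like. The notation below is standard thermodynamics, not anything from the post:

```latex
% Negentropy as an entropy deficit relative to equilibrium:
\[
  J \;=\; S_{\max} - S \;\ge\; 0,
\]
% measured in bits (or nats). By Landauer's bound, at temperature T, each bit
% of negentropy corresponds to at most
\[
  W \;\le\; k_B T \ln 2
\]
% joules of extractable work. That gives it the property "dollars" have in
% coherence theorems: a scalar quantity that any agent can be paid in or spend.
```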
I like to say that “entropy has trained mutating replicators to pursue a goal Y, namely ‘information about entropy, used to counteract it’. That ‘information’ is us: we are the world model F′ that happened to be most helpful in solving the equation F(X)=Y for actions X, maximizing our ability to counteract entropy.” How would we say that in this formalism?
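As a toy illustration of the “solving F(X)=Y for actions X” phrase (the forward model, goal, and brute-force search below are all my invented stand-ins, not the post's formalism):

```python
import itertools

# Toy stand-ins (my assumptions, not the post's definitions):
# - F: the true world dynamics, mapping an action sequence to an outcome.
# - F_prime: the replicator's learned approximation of F.
# - Y: the goal outcome (here just a target value).

def F(actions):
    """True world: the outcome differs slightly from what the model predicts."""
    return sum(a * a for a in actions) - len(actions)

def F_prime(actions):
    """Imperfect world model: close to F, but not identical."""
    return sum(a * a for a in actions)

def solve_for_actions(model, Y, action_space, horizon):
    """Search for actions X such that model(X) is closest to Y,
    i.e. approximately solve F(X) = Y using the model in place of F."""
    return min(
        itertools.product(action_space, repeat=horizon),
        key=lambda X: abs(model(X) - Y),
    )

Y = 10  # the goal
X = solve_for_actions(F_prime, Y, action_space=range(-3, 4), horizon=3)
print(X, F_prime(X), F(X))  # a better F_prime makes F(X) land closer to Y
```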
The laws of physics are not a perfect model of the world; that is why we do science and research, trying to make ourselves into a better model of it. However, neither we nor AIs get to choose the model for which input length is minimized; ultimately, it is the world that induces its model into each of us (computers included) and optimizes it, not the other way around. There is irreducible computational complexity in this world, which we keep exploring, iteratively improving the approximations we call our model: the laws of physics. If someone builds a paperclip maximizer, it will die to the world's entropy unless it maximizes for its own survival (i.e., instead of making paperclips, it makes varied copies of itself, plus all the non-paperclip components its copies need, searching for variants that survive better).