More degrees of freedom in a representation give it more ways to map between things (transfer learning), but more degrees of freedom are the opposite of what makes a model's outputs predictive: a representation flexible enough to fit anything constrains nothing.
If you can retroactively fit a utility function to any sequence of actions, what predictive power do we gain by including utility functions in our models of AGI?
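To make the "retroactive fit" concrete, here is a minimal sketch of the standard degeneracy construction (familiar from inverse reinforcement learning); the names and the toy trajectory are illustrative, not from any particular system. Given any observed sequence of state-action pairs, we can build a utility function under which exactly those actions were optimal:

```python
from typing import Hashable

def fit_degenerate_utility(trajectory: list[tuple[Hashable, Hashable]]):
    """Return a utility function u(state, action) that rationalizes `trajectory`.

    u pays 1 for taking the observed action in the observed state and 0
    otherwise, so the observed behavior is trivially utility-maximizing.
    """
    observed = set(trajectory)

    def u(state: Hashable, action: Hashable) -> float:
        return 1.0 if (state, action) in observed else 0.0

    return u

# Any trajectory -- even one chosen at random -- comes out "optimal":
trajectory = [("s0", "left"), ("s1", "left"), ("s2", "right")]
u = fit_degenerate_utility(trajectory)
assert all(u(s, a) >= u(s, b)
           for (s, a) in trajectory
           for b in ("left", "right"))
```

Since the construction succeeds for every possible trajectory, "the agent maximizes a utility function" rules out no behavior at all, which is what drains the claim of predictive power.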