a question came up: how do you formalize this exactly? how do you separate questions about physical state from questions about utility functions? perhaps, audere suggests, you could bound the relative description complexity of the two perspectives: representing the agent by a utility function vs. simulating its physical state?
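one crude way to make that comparison concrete (a sketch, not a real proposal: true Kolmogorov complexity is uncomputable, so this uses compressed size as a stand-in, and both "views" below are made-up toy data):

```python
# Sketch: proxy the description length of two perspectives on the same
# system by the compressed size of a serialized description. Everything
# here is hypothetical illustration, not a method from the discussion.
import json
import zlib

def description_length(obj) -> int:
    """Crude proxy for description complexity: compressed byte length."""
    return len(zlib.compress(json.dumps(obj, sort_keys=True).encode()))

# Hypothetical utility-function perspective: a small preference table.
utility_view = {"goal": "food", "preferences": {"eat": 1.0, "wait": 0.0}}

# Hypothetical simulation perspective: a much larger physical state.
simulation_view = {
    "positions": [[i, i % 7] for i in range(500)],
    "temperatures": [300.0] * 500,
}

ratio = description_length(utility_view) / description_length(simulation_view)
print(f"utility/simulation complexity ratio: {ratio:.3f}")
# A small ratio means the intentional (utility) description compresses the
# system far better than raw simulation -- one way to cash out the claim
# that psychologizing is the right perspective for this system.
```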
also, how do you model smaller boundedly rational agents in an actual formalism? I can recognize that psychologizing is the right perspective for modeling a cat that fails to walk around a glass wall to get the food on the other side and instead meows sadly at the wall, but how do I formalize that? it seems like the discovering agents paper still has a lot to tell us about how to do this: https://arxiv.org/pdf/2208.08345.pdf
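here's a minimal sketch of one standard move, Boltzmann rationality: model the agent as choosing action a with probability proportional to exp(beta * U(a)), so the cat's failure at the glass wall shows up as bounded planning (low beta) rather than as a strange utility function. the action set and utilities below are made up for illustration; this is one common modeling choice, not the discovering-agents paper's formalism:

```python
# Toy formalization of psychologizing a boundedly rational cat via
# Boltzmann rationality: P(a) proportional to exp(beta * U(a)).
import math

def boltzmann(utilities: dict[str, float], beta: float) -> dict[str, float]:
    """Softmax action distribution for a noisily rational agent."""
    z = sum(math.exp(beta * u) for u in utilities.values())
    return {a: math.exp(beta * u) / z for a, u in utilities.items()}

# Hypothetical utilities for the goal "get the food behind the glass":
# walking around is correct but hard to discover; meowing at the visible
# food is locally tempting; wandering off abandons the goal.
utilities = {"walk_around": 1.0, "meow_at_wall": 0.6, "wander_off": 0.0}

for beta in (0.5, 2.0, 10.0):
    probs = boltzmann(utilities, beta)
    print(f"beta={beta}: " +
          ", ".join(f"{a}={p:.2f}" for a, p in probs.items()))
# Low beta (very bounded) makes meowing at the wall nearly as likely as
# walking around, even though the cat "wants" the food: the failure is
# modeled as noise in planning, not as a weird preference.
```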