Great post! Agree with the points raised, but I’d add that restricting expressivity isn’t the only way to make the world model more interpretable by design. There are many ways to decompose a world model into components, and human concepts correspond to some of those components (under a particular decomposition) rather than to the world model as a whole. We can backpropagate desiderata about ontology identification into the way the world model is decomposed.
For instance, suppose we’re trying to identify the concept of a strawberry inside a Solomonoff inductor. Once we’ve identified that concept, it needs to keep working even as the inductor updates to new potential hypotheses about the world (e.g. we want the concept of a strawberry to still be there when the inductor learns about QFT). This means we’re looking for redundant information that is present across a wide variety of likely hypotheses given our observations. So instead of working with all the individual TMs, we can try to capture the redundant information shared across a wide variety of TMs consistent with our existing observations (and we expect the concept of a strawberry to be part of that redundant information, as opposed to information specific to any particular hypothesis).
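To make the “redundant information” idea concrete, here’s a toy sketch (my own illustrative construction, not anything from the post): hypotheses are simple functions from time step to observation standing in for TMs, and the “redundant” part of the world model is whatever every hypothesis consistent with the observations agrees on. Facts that are redundant in this sense survive when new observations prune the hypothesis set.

```python
def consistent(hypotheses, observations):
    """Keep only hypotheses that reproduce every observation seen so far."""
    return [h for h in hypotheses
            if all(h(t) == o for t, o in observations.items())]

def redundant_facts(hypotheses, horizon):
    """Predictions shared by ALL surviving hypotheses -- the part of the
    world model robust to which hypothesis eventually wins."""
    facts = {}
    for t in range(horizon):
        preds = {h(t) for h in hypotheses}
        if len(preds) == 1:          # every hypothesis agrees on this fact
            facts[t] = preds.pop()
    return facts

# Made-up hypothesis class: periodic sequences with different periods.
hypotheses = [
    lambda t: t % 2,        # period-2 world
    lambda t: t % 3,        # period-3 world
    lambda t: (t + 1) % 2,  # period-2 world, shifted
]

# After observing the sequence starts with 0, two hypotheses survive,
# and they agree on the values at t=0 and t=1.
survivors = consistent(hypotheses, {0: 0})
facts = redundant_facts(survivors, horizon=6)
print(facts)  # {0: 0, 1: 1}

# A new observation prunes the set further, but the previously
# redundant facts are preserved -- they didn't depend on which
# particular hypothesis won.
survivors2 = consistent(survivors, {0: 0, 3: 1})
facts2 = redundant_facts(survivors2, horizon=6)
assert all(facts2[t] == v for t, v in facts.items())
```

The analogy is loose (real Solomonoff induction is over all TMs, weighted by length), but the same shape of argument applies: a strawberry-concept should live in the agreed-upon part, not in any one hypothesis’s idiosyncratic details.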
This obviously doesn’t get us all the way there, but I think it’s an existence proof that we can cut down the search space for “human-like concepts” without sacrificing the expressivity of the world model, by reasoning about which parts of the world model could correspond to human-like concepts.