The “simulation” in this case is a black box. When you use your own mental hardware to simulate another person (assuming the simulation theory is correct), you do so unconsciously. You have no idea how the simulation works; you only have access to its output. You have no ability to consciously fiddle with the simulation’s settings or its structure.
A black box that takes input and produces predictive output while being totally impenetrable is not a “model” in any useful sense of the word.
The concept of mental models is very popular in usability design.
It’s quite useful to distinguish a website’s actual features from the features of the model of the website that the user has in his head.
If you want to predict what the user will do, then it makes sense to speak of his model of the world, whether or not you can change that model. You have to work with the model that’s there.
Whether or not the user is conscious of the features of his model doesn’t matter much.
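A minimal sketch of this distinction, with entirely hypothetical feature names: the user acts on the model in his head, while success depends on the site’s actual features, so the two sets have to be tracked separately to predict behavior.

```python
# Hypothetical example: the site's real features vs. the user's mental model.
ACTUAL_FEATURES = {"search", "checkout", "wishlist", "live_chat"}

# The user's model is typically incomplete and partly wrong.
USER_MODEL = {"search", "checkout", "discount_codes"}

def user_will_try(feature: str) -> bool:
    """Behavior is driven by the user's model, accurate or not."""
    return feature in USER_MODEL

def attempt_succeeds(feature: str) -> bool:
    """Outcome is determined by what the site actually supports."""
    return user_will_try(feature) and feature in ACTUAL_FEATURES

for f in ["wishlist", "discount_codes", "search"]:
    print(f, "- tried:", user_will_try(f), "- succeeds:", attempt_succeeds(f))
```

The point of the sketch is only that prediction queries `USER_MODEL`, never `ACTUAL_FEATURES`; whether the user could articulate the contents of his model doesn’t enter into it.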