This concept is not fully formed. It is necessary that it is not fully formed, because once I have finished forming it, it won’t be something I can communicate any longer; it will become, to borrow a turn of phrase from SMBC, rotten with specificity.
I have noticed a shortcoming in my model of reality. It isn’t a problem with the accuracy of the model, but rather that an important feature of the model is missing. It is particularly to do with people, and the shortcoming is this: I have no conceptual texture, no conceptual hook, for attaching nebulous information to people.
To explain what I need a hook for: a professional acquaintance has reciprocated trust. There is a professional relationship there, but also a human interaction; the trust involved means we can proceed professionally without negotiating contractual terms beforehand. However, it would undermine the trust in a very fundamental way to actually treat this as the meaning of the trust. That is to say, modeling the relationship as transactional would undermine the basis of the relationship (but for the purposes of describing things, I’m going to do that anyway, basically because it’s easier to explain that way; no fair description of a relationship of any kind can be put into a short number of words).
I have a pretty good model of this person, as a person. They have an (adult) child who has developed a chronic condition; part of basic social interaction is that, having received this information, I need to ask about said child the next time we interact. This is something that is troubling this person; my responsibility, to phrase it in a misleading way, is to acknowledge them, to make what they have said into something that has been heard, and in a way that lets them know that they have been heard.
So, returning to the shortcoming: I have no conceptual texture to attach this to. I have never built any kind of cognitive architecture that serves this kind of purpose; my interactions with other humans are focused on understanding them, which has basically served me socially thus far. But I have no conceptual hook to attach things like “Ask after this person’s child”. My model is now updated to include the pain of that situation; there is nothing in the model that is designed to prompt me to ask. I have heard; now I need to let this person know that they have been heard, and I reach for a tool I suddenly realize has never existed. I knew this particular tool was necessary, but I had never needed it before.
It’s maybe tempting to build general-purpose mental architecture to deal with this problem, but as I examine it, it looks like this is a problem that actually needs to be resolved on a more individual basis: as I mentally survey the situation, a large part of the problem in the first place is the overuse of general-purpose mental architecture. I should have noticed this missing tool before.
I am not looking for solutions. Mostly it is just interesting to notice; usually, with these sorts of things, I’ve already solved the problem before I’ve had a chance to really notice, observe, and interact with the problem, much less notice the pieces of my mind which actually do the solving. Which itself is an interesting thing to notice; how routine the construction of this kind of conceptual architecture has gotten, that the need for novel mental architecture actually makes me stop for a moment, and pay attention to what is going on.
It can sometimes be hard to notice the things you mentally automate; the point of automating things is to stop noticing them, after all.
That’s a fascinating observation! When I introspect the same process (in my case, it might be “ask how this person’s diabetic cat is doing”), I find that nothing in the model itself is shaped like a specific reminder to ask about the cat. The way I end up asking is that when there’s a lull in the conversation, I scan the model for recent and important things that I’d expect the person might want to talk about, and that scan brings up the cat. My own generalizations, in turn, likely leave gaps which yours would cover, just as the opposite seems to be happening here.