My attempt to summarize the idea. How accurate is it?
Gemini modeling—Agent A models another agent B when A simulates some aspect of B’s cognition by counterfactually inserting an appropriate cognitive element Xp (that is a model of some element X, inferred to be present in B’s mind) into an appropriate slot of A’s mind, such that the relevant connections between X and other elements of B’s mind are reflected by analogous connections between Xp and corresponding elements of A’s mind.
Basically, yeah.
A maybe trivial note: You switched the notation; I used Xp to mean “a part of the whole thing”, X to mean “the whole thing, the whole context of Xp”, and [Xp] to denote the model / twin of Xp. X would be all of B, or enough of B to make Xp the sort of thing that Xp is.
A less trivial note: It’s a bit of a subtle point (I mean, a point I don’t fully understand), but I think it’s important that it’s not just “the relevant connections are reflected by analogous connections”. (I mean, “relevant” is ambiguous and could just mean whatever gemini modeling is supposed to mean.) But anyway, the point is that to be gemini modeling, the criterion isn’t about reflecting any specific connections. Instead, the criterion is providing enough connections that the gemini model [Xp] is rendered “the same sort of thing” as what’s being gemini modeled, Xp. E.g., if Xp is a belief that B has, then [Xp] as an element of A has to be treated by A in a way that makes [Xp] play the role of a belief in A. And further, the Thing that Xp in B “wants to be”—what it would unfold into, in B, if B were to investigate Xp further—is supposed to also be the same Thing that [Xp] in A would unfold into if A were to investigate [Xp] further. In other words, A is supposed to provide the context for [Xp] that makes [Xp] be “the same pointer” for A as Xp is for B.
Sounds close to (similar to but not the same as) a categorical limit (if I understand you and category theory sufficiently correctly).
(Switching back to your notation)
Think of the modeler-mind A and the modeled-mind B as categories where objects are elements (~currently) possibilizable by/within the mind.[1]
Gemini modeling can be represented by a pair of functors:
G:B→A which maps the things/elements in B to the same things/elements in A (their “sameness” determined by identical placement in the arrow-structure).[2] In particular, it maps the actualized Xp in B to the also actualized GXp in A.
Δ[Xp]:B→A, the constant functor, which collapses all elements in B to [Xp] in A.
For every arrow f:GXp→x in A, there is a corresponding arrow f′:[Xp]→x, and the same for arrows going in the other direction. But there is no isomorphism between [Xp] and GXp. They can fill the same roles but they are importantly different (the difference between “A believing the same thing that B believes” and “A modeling B as believing that thing”).
Now, for every other “candidate” [Xp]′ that satisfies the criterion from the previous paragraph, there is a unique arrow m:[Xp]′→[Xp] through which its morphisms factor.
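Restating the above in symbols, just to pin down the shape I have in mind (a sketch of my reading; I may well be misusing the machinery; I write X_p for Xp and Δ_[Xp] for the collapsing functor):
\[
\begin{aligned}
&G : B \to A, \qquad \Delta_{[X_p]} : B \to A \ \text{ with } \ \Delta_{[X_p]}(b) = [X_p] \ \text{ for every object } b \text{ of } B,\\
&\text{for every } f : GX_p \to x \text{ in } A \text{ there is a corresponding } f' : [X_p] \to x \ \text{ (and dually for arrows into } GX_p\text{)},\\
&\text{for every other such candidate } [X_p]' \text{ there is a unique } m : [X_p]' \to [X_p] \ \text{ such that every } g : [X_p]' \to x \ \text{ factors as } g = f' \circ m \ \text{ for some } f' : [X_p] \to x.
\end{aligned}
\]
The last condition is what would make [Xp] universal among the candidates, which is where the “similar to but not the same as a categorical limit” impression comes from.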
On second thought, maybe it’s better to just have two almost identical functors, differing only by what Xp is mapped onto? (I may also be overcomplicating things, pareidolizing, or misunderstanding category theory)
I’m not sure what the arrows are, unfortunately, which is very unsatisfying.
On second thought, for this to work, we probably need to either restrict ourselves to a subset of each mind or represent them in a sufficiently coarse-grained way.
I don’t think I’ve fully processed what you or the OP have said here—my apologies, but this still seemed relevant.
I think the category-theory way I would describe this is: Bob is a category B, and Alice is a category A. A and B are big and complicated, and I have no idea how to describe all the objects or morphisms in them, although there is some structure-preserving morphism between them (your G). But what Bob does is try to find a straw-alice category A' which is small and simple, along with functors from A' to A and from A' to B, which makes Alice predictable (or post-dictable).
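If it helps, the shape I’m picturing is a span of functors with a small apex (the names P and Q are just placeholders I’m introducing here):
\[
A \xleftarrow{\ P\ } A' \xrightarrow{\ Q\ } B, \qquad A' \ \text{ small and simple},
\]
where Bob works with A' (together with P and Q) as his tractable stand-in for Alice.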
Does that make any sense?
Yeah, maybe it makes more sense. B' would be just a subcategory of B that is sufficient for defining (?) Xp (something like a Markov blanket of Xp?). The (endo-)functor from B' to B would be just the identity, and the relationship between Xp and [Xp] would be represented by a natural transformation?
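Something like the following, maybe (very roughly; the inclusion I and the components η_b are my notation, writing X_p for Xp):
\[
I : B' \hookrightarrow B, \qquad \eta : \Delta_{[X_p]} \Rightarrow G \circ I, \quad \eta_b : [X_p] \to G(I(b)) \ \text{ for each object } b \text{ of } B',
\]
i.e. a cone over G restricted to B' with apex [Xp] (or, with the transformation going the other way, a cocone). If [Xp] were the universal such cone, that would also make the earlier “similar to but not the same as a categorical limit” impression literal.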