In a collective consciousness or hivemind, the content is mostly determined statistically. AI mostly operates in this domain, with a heavy focus on temporal context and significance. Such a collective consciousness determines what's applicable when. For example, an AI trained purely on information generated during a specific historical period would have vastly different applicability when set to interact in a different temporal context. The realities of human civilizations differ greatly across time periods, but you can always find universal patterns throughout. Such patterns can only be identified at a meta level, above the details that are temporally dependent.
In simpler words, the things that change over short periods of time are often at the superficial level of content and engagement. The things that change over longer periods of time mostly happen at the meta level, on top of that superficial level.
Reality itself is very different from the constructed realities of individuals and of the collective consciousness. That's why language plays such a significant role in everything we do as humans.
How does someone who takes our reality for granted talk to someone who doesn't? They can only successfully communicate on points where they agree linguistically, even though the underlying context and depth of the subject may be completely different between the two individuals. Since language is a superficial representation of thoughts, and thoughts originate from our own versions of reality, our mental models, those two are ultimately not really communicating at all, even though they are both uttering words from their mouths. So what's the point of talking to someone with whom you can't effectively communicate? Well, at least one person has to change their own linguistic context to match the context of the other person. So the question becomes: how many mental models/realities does an individual typically exercise? Do they context-switch between the models? Do the models overlap? Are some models purely subsets of a bigger context, even though the encompassing model doesn't exist in that person's mind? This is essentially the root of tribalism: people whose mental models differ from those of out-groups but are shared with in-group members. These models may all exist as subsets of some larger abstract model, even though that model doesn't actually exist in any individual at a given point in time. I think AIs essentially are these larger abstract models, or an ensemble of them.
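To make the subset/overlap idea concrete, here is a minimal toy sketch that caricatures a mental model as a set of accepted propositions. Everything in it, the people, the propositions, and the communicable() helper, is an invented illustration rather than anything from the text: communication is reduced to the intersection of two models, and the "larger abstract model" to the union that no single individual holds.

```python
# Toy illustration: mental models as sets of accepted propositions.
# All names and contents here are made up for illustration only.

alice = {"markets self-correct", "hard work pays off", "rain means wet roads"}
bob   = {"markets self-correct", "hard work pays off", "rain means wet roads",
         "the system is rigged"}                      # mostly overlaps Alice (in-group)
carol = {"the system is rigged", "luck dominates outcomes",
         "rain means wet roads"}                      # little overlap with Alice (out-group)

def communicable(a, b):
    """The only ground two people can 'successfully' talk about:
    the points on which their models agree linguistically."""
    return a & b

print(communicable(alice, bob))    # large overlap -> in-group conversation flows
print(communicable(alice, carol))  # tiny overlap  -> talking past each other

# The 'larger abstract model' that no single person holds: the ensemble/union
# of everyone's models. In this caricature, that is what an AI trained on the
# collective output would approximate.
hivemind = alice | bob | carol
print(len(hivemind), "propositions in the abstract model, held by no one individual")
```

In this caricature the tribal boundary is just the size of the intersection, and the AI sits at the union level, which is why no individual's model coincides with it.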