We probably won’t just play status games with each other after AGI

There is a view I’ve encountered somewhat often,[1] which can be summarized as follows:

After the widespread deployment of advanced AGI, assuming humanity survives, material scarcity will largely disappear. Everyone will have sufficient access to necessities like food, housing, and other basic resources. Therefore, the only scarce resource remaining will be “social status”. As a result, the primary activity humans will engage in will be playing status games with other humans.

I have a number of objections to this idea. I’ll focus on two of my objections here.

My first objection is modest but important. In my view, this idea underestimates the extent to which AIs could participate in status games alongside us, not just as external tools or facilitators but as actual participants and peers in human social systems. Specifically, the idea that humans will only be playing status games with each other strikes me as flawed because it overlooks the potential for AIs to fully integrate into our social lives, forming genuinely deep relationships with humans as friends, romantic partners, social competitors, and more.

One common counterargument I’ve heard is that people don’t believe they would ever truly view an AI as a “real” friend or romantic partner. This reasoning often seems to rest on a belief that such relationships would feel inauthentic, as though you’re interacting with a mere simulation. But at bottom, this is a claim about AI capabilities: it amounts to saying that whatever it is humans do that makes us good social partners can’t be replicated in a machine.

In my view, there is no fundamental reason why a mind implemented in silicon should inherently feel less “real” or “authentic” than a mind implemented in a biological brain. The perceived difference is a matter of perspective, not an objective truth about what makes a relationship meaningful.

To illustrate this, consider a silly hypothetical: imagine discovering that your closest friend was, unbeknownst to you, a robot all along. Would this revelation fundamentally change how you view your relationship? I suspect that most people would not suddenly stop caring about that friend or begin treating them as a mere tool (though they’d likely be deeply confused, and have a lot of questions). My point is that the qualities that made the friendship meaningful—such as shared memories and emotional connection—would not cease to exist simply because of the revelation that your friend is not a carbon-based lifeform. In the same way, I predict that as AIs become more sophisticated, most humans will eventually overcome their initial hesitation and embrace AIs as true peers.

Right now, this might seem implausible because today’s AI systems are still limited in important ways. For example, current LLMs lack robust long-term memory, making it effectively impossible to sustain a meaningful relationship with them over long timespans. But these limitations are temporary. In the long run, there’s no reason to believe that AIs won’t eventually surpass humans in every domain that makes someone a good friend, romantic partner, or social peer. Advanced AIs will have excellent memory, keen social intuition, and a good sense of humor. They could have outstanding courage, empathy, and creativity. Depending on the interface—such as a robotic body capable of human-like physical presence—they could be made to feel as “normal” to interact with as any human you know.

In fact, I would argue that AIs will ultimately make for better friends, partners, and peers than humans in practically every way. Unlike humans, AIs can be explicitly trained to embody the traits we most value in relationships—whether that’s empathy, patience, humor, intelligence, whatever—without the shortcomings and inconsistencies inherent in human behavior. While their non-biological substrate ultimately sets them apart, their behavior could easily surpass human standards of social connection. In this sense, AIs would not just be equal to humans as social beings but could actually become superior in the ways that matter most when forming social ties.

Once people recognize how fulfilling and meaningful relationships with AIs can be, I expect social attitudes will shift. This change may start slowly, as more conservative or skeptical people will resist the idea at first. But over time, much like the adoption of smartphones into everyday life, I predict that forming deep social bonds with AIs will become normalized. At some point, it won’t seem unusual or weird to have AIs as core members of one’s social circle. In fact, I think it’s entirely plausible that AIs will come to make up the vast majority of most people’s social connections. If this happens, the notion that humans will primarily be playing status games with each other becomes an oversimplification. Instead, the post-AGI social landscape will likely involve a complex interplay between humans and AIs, with AIs playing a major—indeed, likely central—role as peers in these interactions.

But even in the scenario I’ve just outlined, where AIs integrate into human social systems and become peers, the world still feels far too normal to me. The picture I’ve painted seems to assume that not much will fundamentally change about our social structures or the ways we interact, even in a post-AGI world.

Yet I believe the future will likely look profoundly strange—not merely a continuation of our current world with vast material abundance added on top. Instead of just having more of what we already know, I anticipate the emergence of entirely new ways for people to spend their time, pursue meaning, and structure their lives. These new activities and forms of engagement could be so unfamiliar and alien to us today as to be almost unrecognizable.

This leads me to my second objection to the idea that the primary activity of future humans will revolve around status games: humans will likely upgrade their cognitive abilities.

This could begin with biological enhancements—such as genetic modifications or neural interfaces—but I think pretty quickly after it becomes possible, people will start uploading their minds onto digital substrates. Once this happens, humans could then modify and upgrade their brains in ways that are currently unimaginable. For instance, they might make their minds vastly larger, restructure their neural architectures, or add entirely new cognitive capabilities. They could also duplicate themselves across different hardware, forming “clans” of descendants of themselves. Over time, this kind of enhancement could drive dramatic evolutionary changes, leading to entirely new states of being that bear little resemblance to the humans of today.

The end result of such a transformation is that, even if we begin this process as “humans”, we are unlikely to remain human in any meaningful sense in the long run. Our augmented and evolved forms could be so radically different that it feels absurd to imagine we would still be preoccupied with the same social activities that dominate our lives now—namely, playing status games with one another. And it seems especially strange to think that, after undergoing such profound changes, we would still find ourselves engaging in these games specifically with biological humans, whose cognitive and physical capacities would pale in comparison to our own.

  1. ^

    Here’s a random example of a tweet that I think gestures at this idea.