In this article, you posit that “positive sum networks will out-compete [...] antisocial capitalism [...]”.
If I understand correctly, this is due to cooperative systems of agents (positive-sum networks) producing more utility than purely-competitive systems. You paint a good picture of this phenomenon happening, and I think you are pointing at something similar to what Scott Alexander describes in In Favor of Niceness, Community, and Civilization.
However, the question then becomes “what exactly makes people choose to cooperate, and when?” You cite the Prisoner’s Dilemma as a situation where the outcome Cooperate/Cooperate is better than the outcome Compete/Compete for both players. That is true, but the outcome Compete/Cooperate is better for player 1 than any other. The reverse is true for player 2. That is what makes the Coop/Coop state a fragile one for agents acting under “classical rationality”.
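To make that ordering concrete, here is a minimal sketch with assumed, purely illustrative payoff numbers, showing that whatever the other player does, Compete is the better reply for a one-shot, classically rational player:

```python
# Illustrative Prisoner's Dilemma payoffs for "my" side (assumed numbers); they
# satisfy the standard ordering:
# Compete/Cooperate > Cooperate/Cooperate > Compete/Compete > Cooperate/Compete.
PAYOFF = {
    ("Cooperate", "Cooperate"): 3,  # mutual cooperation
    ("Cooperate", "Compete"):   0,  # I cooperate, they compete: worst outcome for me
    ("Compete",   "Cooperate"): 5,  # I compete against a cooperator: best outcome for me
    ("Compete",   "Compete"):   1,  # mutual competition
}

for their_move in ("Cooperate", "Compete"):
    best = max(("Cooperate", "Compete"), key=lambda my_move: PAYOFF[(my_move, their_move)])
    print(f"If the other player plays {their_move}, my best reply is {best}")
# Compete is the best reply in both cases, which is why the Cooperate/Cooperate
# state is fragile for one-shot, classically rational players.
```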
Cooperation tends to be fragile, not because it is worse than Competition (it’s better in the situations we posit), but because unilaterally defecting is better. So, suppose you have a group of people (thousands? billions?) who follow a norm of “always choose cooperation”. This group would surely be more productive than an external group that constantly chooses to compete, but if you put even one person who chooses to compete inside the “always cooperate” group, that person will likely reap enormous benefits to the detriment of others: they will be player 1 in a Compete/Cooperate dilemma.
If we posit that the cooperating group can learn, they will learn that there is a “traitor” among them, and will become a little more likely to choose Compete instead of Cooperate when they think they might be interacting with the “traitor”. But this means that these people will themselves be choosing Compete, increasing the number of “traitors” in the group, and then the whole thing deteriorates.
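Here is a toy simulation of that erosion, under an assumed and deliberately crude learning rule (the specific numbers are arbitrary): a cooperator who gets exploited by a “traitor” sometimes starts competing themselves.

```python
import random

# Toy model of trust erosion (assumed dynamics, purely to illustrate the argument):
# agents pair up at random each round, and a cooperator who gets exploited by a
# "traitor" switches to competing with probability SWITCH_PROB.
N_AGENTS = 1000
SWITCH_PROB = 0.5  # arbitrary assumption
random.seed(0)

is_defector = [False] * N_AGENTS
is_defector[0] = True  # a single initial "traitor"

for round_number in range(1, 31):
    order = list(range(N_AGENTS))
    random.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        for me, other in ((a, b), (b, a)):
            # an exploited cooperator may start competing too
            if not is_defector[me] and is_defector[other] and random.random() < SWITCH_PROB:
                is_defector[me] = True
    if round_number % 5 == 0:
        print(f"round {round_number:2d}: {sum(is_defector)} defectors out of {N_AGENTS}")
```

With these assumed parameters, defection typically spreads through most of the group within a few dozen rounds, which is exactly the deterioration I am worried about.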
Do you have any ideas on how to prevent this phenomenon? Maybe the cooperating group is acting under a norm that is more complex than just “always cooperate”, that allows a state of Cooperate/Cooperate to become stable?
You cite “communication and trust” as “the two pillars of positive sum economic networks”. Do you think that, given a sufficiently large amount and quality of trust and communication, they become self-reinforcing? What I have described is a deterioration of trust in a group. How can this be prevented?
Hi Giskard,
Yes to your “more utility” point. I am influenced by Robert Wright, who makes a compelling and direct case that communication and trust are what make positive-sum outcomes possible (Nonzero and here). And he points out that societies or organizations that generate those positive-sum effects will outcompete those that devolve in a race to the bottom.
Re your comment “Maybe the cooperating group is acting under a norm that is more complex than just ‘always cooperate’, that allows a state of Cooperate/Cooperate to become stable?”, Yes, that’s exactly it! Civilization is a multipolar game, as Scott Alexander points out in Meditations on Moloch and also in the article you cite (‘...and the general case is called “civilization”’).
In Moloch, Alexander points out all sorts of multipolar traps. Yet on the whole society has moved forward (at least since the 1600s) by developing sufficient complexity in the norms that govern our interactions. Fortunately, we don’t live in a simple PD game played only once or played anonymously (both of which strongly disfavor cooperation). Our personal relationships, reputations, sense of shame, and fear of downstream consequences make real life different from the simplest PD game. They provide enough nuance and complexity that on the whole we benefit from “inheriting a cultural norm and not screwing it up” (the article you cite).
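As one concrete, admittedly simplified illustration of that nuance, consider an iterated PD in which some players follow a reciprocal norm such as tit-for-tat rather than unconditional cooperation. The payoffs below are assumed illustrative numbers, and tit-for-tat is only one example of a “more complex norm”, not a claim about the specific mechanism I have in mind:

```python
# Iterated PD sketch (assumed payoffs): (row score, column score) for each move pair.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # cooperate first, then simply copy whatever the opponent did last round
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print("AlwaysCooperate vs AlwaysDefect:", play(always_cooperate, always_defect))
print("TitForTat vs AlwaysDefect:      ", play(tit_for_tat, always_defect))
print("TitForTat vs TitForTat:         ", play(tit_for_tat, tit_for_tat))
```

Unconditional cooperators get exploited round after round, but reciprocators quickly stop feeding the defector while still sustaining Cooperate/Cooperate with each other, which is roughly what reputation, relationships, and downstream consequences buy us offline.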
Here’s my premise: Up until now in our digital lives we have lacked agency. Our online communications tend to either be centralized (governed by a Zuckerberg) or else anonymous (where reputation, relationships, sense of shame, and fear of downstream consequences don’t apply). With the former we lack agency because the medium is not designed to support our individual interests or even human flourishing (as breaking news today about Facebook reminds us). With the latter, the medium often lacks the requisite complexity that forms our cultural norm inheritance in the offline world.
Online life today is merely an inadequate equilibrium, to use Eliezer Yudkowsky’s term. The purpose of my essay and the re-post here is to ask, “Would the following set of changes (which I attempt to articulate) allow for digital interactions that break free from PD and allow us to solve collective action problems?” How could we design digital interactions so that they represent a multiplayer game with a structure that stacks the deck in favor of positive-sum outcomes? My optimistic conclusion is that the online world (leaning on decentralized identifiers, zero-knowledge proofs, etc.) can offer game design structures that are more advantageous to positive-sum outcomes than anything we’ve yet seen offline (and certainly better than today’s online designs).
Ironically, tipping out of today’s inadequate equilibrium is itself a collective action problem. And as I say, “Until individuals regain agency in their digital social interactions, coordinating for positive sum collective action is hard”. Fortunately, I believe there is now (finally) a fulcrum for a tipping point that does not rely on collective action. Now that advertisers can no longer exploit personal identifiers in the same way, I believe they will be forced to explore models that make media firms more money anyway! (My point re the “Barbados” example).
Scott Alexander, Robert Wright, many of you on this forum, and I have long thought about how to achieve more positive sum outcomes (how to defeat Moloch). Usually we look with hope to morality and rationality, yet we know how powerful a force Moloch really is. That’s where I get excited: I believe that (finally!) there can now be a tipping point via (i) capitalist incentives, plus (ii) the shock to today’s equilibrium from Apple’s and Google’s privacy announcements, plus (iii) (not required, but a bonus!) regulatory and other pressures owing to revelations about Facebook.
Thanks so much for engaging on the essay. I’m optimistic that there really is a path toward a better equilibrium, and it helps to bounce the ideas off smart people. Glad to have this forum!