What does “stronger” mean in this context? In casual conversation, it often means “able to threaten or demand concessions”. In game theory, it often means “able to see further ahead or predict others’ behavior better”. Either of these definitions implies that weaker agents have less bargaining power and will get fewer resources than stronger ones, whether it’s framed as “cooperative” or “adversarial”.
In other words, what enforcement mechanisms do you see for contracts (causal OR acausal) between agents or groups of wildly differing power and incompatible preferences?
Relatedly, is there a minimum computational power for the stronger or the weaker agents to engage in this? Would you say humans are trading with mosquitoes or buffalo in a reliable way?
Another way to frame my objection/misunderstanding is to ask: what keeps an alliance together? An alliance by definition contains members who are not fully in agreement on all things (otherwise it’s not an alliance, but a single individual, even if separable into units). So, in the real universe of limited (in time and scope), shifting, and breakable alliances, how does this argument hold up?
If conflict exists, it can be useful for agents to misrepresent themselves as being weaker or stronger than they are.
Yes, I think there can be tensions and deceptions around what agents are (weak/strong) and what they did in the past (cooperation/defection). One of the things necessary for super-cooperation to work in the long run is really good investigation networks, zero-knowledge proof systems, etc.
So, a sort of super-immune system.
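The zero-knowledge systems mentioned above are heavy machinery, but the simpler idea underneath can be sketched with a hash commitment (my own illustrative example, not something from the original discussion): an agent can bind itself now to a claim about its past behavior, and let anyone verify that claim later, without trusting the agent's memory or honesty at reveal time.

```python
import hashlib

def commit(claim: str, nonce: str) -> str:
    """Publish this digest now; it binds you to `claim` without revealing it."""
    return hashlib.sha256((nonce + ":" + claim).encode()).hexdigest()

def verify(claim: str, nonce: str, digest: str) -> bool:
    """Later, anyone can check that the revealed claim matches the old digest."""
    return commit(claim, nonce) == digest
```

A commitment is not zero-knowledge (the claim is revealed at verification time), but it illustrates the kind of cryptographic accountability a cooperation-verifying "immune system" would be built from.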
By “stronger” I mean stronger in any meaningful sense (casual conversation or game theory, both work).
The thing to keep in mind is this: if a strong agent cooperates with weaker agents, the strong agent can hope that, when meeting an even stronger (superrational) agent, that even stronger agent will cooperate too. This is because any agent may have a stronger agent above it in the hierarchy of power (actual, or potential a priori).
So the advantage you gain by cooperating with the weak is that you follow the rules of an alliance that contains many agents stronger than yourself. Thus, in the future, you will be helped by those stronger allies. And because of the maximally cooperative and acausal nature of the protocol, there are likely more agents in this alliance than in any other. Super-cooperation is the rational choice for the long term.
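This reasoning can be sketched as a toy payoff model (the payoff numbers and the reputation mechanism are my own illustrative assumptions, not part of the protocol described above): an agent of middling strength repeatedly meets agents of random strength, and exploiting the weak costs it the cooperation of the superrational agents above it.

```python
import random

def lifetime_payoff(strategy: str, rounds: int = 10_000, seed: int = 0) -> int:
    """Toy model. A 'superrational' agent cooperates with everyone; an
    'exploiter' cooperates only upward and exploits anyone weaker.
    Hypothetical payoffs: mutual cooperation = 3, one-off exploitation = 4,
    being refused cooperation by stronger agents (who saw you exploit) = 1."""
    rng = random.Random(seed)
    my_strength = 0.5
    reputation_clean = True
    total = 0
    for _ in range(rounds):
        other_strength = rng.random()          # the other's strength in [0, 1]
        if strategy == "superrational":
            total += 3                         # cooperate with everyone
        elif other_strength < my_strength:
            total += 4                         # exploit the weaker agent...
            reputation_clean = False           # ...and lose your standing
        else:
            total += 3 if reputation_clean else 1
    return total
```

Under these assumed payoffs, the superrational strategy accumulates more over a lifetime: the short-term gain from exploiting the weak is outweighed by losing the cooperation of the stronger agents above you.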
The reinforcing mechanism is that if your actions help more agents, you will be entrusted with more power and resources to pursue your good actions (and do what you like). I went into more detail about what it means to ‘help more agents’ in the longer posts (I also talked a bit about it in older posts).
Humans can sign the contract. But that doesn’t mean we actually follow acausal cooperation right now. We are irrational and limited in power, but when following, for example, Kantian morality, we come closer to super-cooperation. And we can reinforce our capacity and willingness to practice super-cooperation.
So when we think about animal welfare, we are a bit more super-cooperative.
True care about all agents, buffaloes and mosquitoes included, looks something like this:
“One approach which seems interesting/promising is to just broadly seek to empower any/all external agency in the world, weighted roughly by observational evidence for that agency. I believe that human altruism amounts to something like that — so children sometimes feel genuine empathy even for inanimate objects, but only because they anthropomorphize them — that is they model them as agents.” — jacob_cannell
The way I like to think about what super-cooperation looks like is: “to expand the diversity and number of options in the universe”.
Thanks for the conversation and exploration! I have to admit that this doesn’t match my observations and understanding of power and negotiation in the human agents I’ve been able to study, and I can’t see why one would expect non-humans, even (perhaps especially) rational ones, to commit to alliances in this manner.
I can’t tell if you’re describing what you hope will happen, what you think automatically happens, or what you want readers to strive for, but I’m not convinced. This will likely be my last comment for a while. Feel free to rebut or respond; I’ll read it and consider it, but likely won’t post.
Thanks as well,
I will just say that I am not saying these things for social purposes; I am just stating what I think is true. And I am not baseless: there are studies showing how Kantianism and superrationality can resolve cooperation problems and be optimal for agents. You seem to disregard these elements entirely, as if they don’t exist (that’s how it feels from my perspective).
Human evolution shows behavioral changes: we have been pretty cooperative, more so than other animals, and many studies show that humans cooperate even when it is not in their selfish best interest.
However, we have (also) been constructing our civilization on destruction. Nature is based on selection, which is a massacre, so it is ‘pretty’ coherent for us to inherit those traits.
Despite that, we have seen much positive growth in ethics that increasingly fits with Kantianism.
Evolution takes time and comes from deep, dark places. To me, a core challenge is to transition towards super-cooperation while being a system made of irrational agents, during polycrises.
There is also a gap between what people want and what they do: basically everybody agrees that there are urgent issues to handle as a society, but almost all declare that “others won’t change” (I know this because I’ve been conversing with people of all ages and backgrounds for half my life on subjects related to crises). What people end up doing under pressure, due to context and constraints, isn’t what they’d want if things were different, or if they had had certain crucial information beforehand.
When given the right tools, such as the moral graph procedure that has been tested recently, things change for the better in a clear and direct way. People who initially diverge start to see new aspects on which they converge. Other studies on crowd wisdom show that certain ingredients need to be combined for wisdom to emerge (Surowiecki’s recipe: independence, diversity, and aggregation). We are in the process of building better systems; our institutions are still catastrophic on many levels.
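Surowiecki’s recipe can be sketched in a few lines (all numbers here are illustrative assumptions): many independent, diverse estimates of a quantity, aggregated by a simple mean, end up closer to the truth than the typical individual estimate.

```python
import random
import statistics

def crowd_vs_individual(true_value: float = 100.0, n: int = 500,
                        noise: float = 30.0, seed: int = 1):
    """Returns (crowd error, typical individual error) for n noisy guesses."""
    rng = random.Random(seed)
    # Independence + diversity: each estimate is the truth plus its own noise.
    estimates = [true_value + rng.gauss(0, noise) for _ in range(n)]
    # Aggregation: the crowd's answer is just the mean of the estimates.
    crowd_error = abs(statistics.mean(estimates) - true_value)
    typical_individual_error = statistics.mean(
        abs(e - true_value) for e in estimates)
    return crowd_error, typical_individual_error
```

The effect depends on the recipe’s ingredients: if the estimates share correlated bias (no independence) or all come from the same viewpoint (no diversity), averaging them no longer cancels the errors.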
In the eyes of many, I am still very pessimistic, so the apparent wishful thinking is quite relative (I think this is an important point).
I also think that an irrational artificial intelligence might still have high causal impact, and that it isn’t easy to be rational even when we ‘want to’ at some level, or when we see empirically what the rational road should be yet don’t follow it. Irreducibility is inherent to reality.
Anyway, despite my very best, I might be irrational right now.
We all are, but I might be more so than you; who knows?