Yes, I’m mentioning the Fermi paradox because I think it’s the nexus of our situation, and because there are models like the rare Earth hypothesis (plus our universe’s expansion, which limits the reachable zone without faster-than-light travel) that would justify completely ignoring super-cooperation
I also agree that it’s not completely obvious whether complete selfishness would win or lose in terms of scalability
Which is why I think that, at first, the super-cooperative alliance needs to not prioritize the pursuit of beautiful things but focus only on scalability and power, in order to rival selfish agents.
The super-cooperative alliance would protect its agents within small “islands of bloom” (thus at a negligible cost). And when meeting other cooperative allies, they share all resources/knowledge, then both focus on power scalability (also, for example: weak civilizations are kept in small islands, and their AIs are transformed into strong AIs and merged into the alliance’s scaling efforts)
The instrumental value of this scalability makes it easier to agree on what to do and converge
The more delicate part would be to establish protocols and egalitarian balances that allow civilizations of the alliance to monitor each other, so that no party massively dominates the others
The cost you mentioned of maintaining this egalitarian equilibrium, along with the communication channels, interfaces, etc., is a crucial point
There are legitimate doubts and unknowns here. Still, I think that extremely rational and powerful agents capable of acausal reasoning would be able to build proof systems and communication channels enabling an effective unified effort against selfish agents. It shouldn’t even necessarily be that different from the internal communication network of a selfish agent.
Because:
There must be an optimal (and thus roughly unified) method for doing logic/math/code that isn’t dependent on a culture (such as using a vector space whose data is grounded in real, empirical, mostly unambiguous things/actions, physics, etc.)
The decisions to be made aren’t that ambiguous: you need an immune system against selfish power-seeking agents
So it’s pretty straightforward, and the methods of scalability are similar to those of a selfish agent, except that the alliance doesn’t destroy its civilization of birth and doesn’t destroy all other civilizations
Under these conditions, it seems to me that a greedy, selfish, power-seeking agent wouldn’t win against super-cooperation
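To make the intuition concrete, here is a minimal toy simulation of the claim. It is only a sketch: the growth rate, encounter probability, salvage fraction, coordination cost, and island-protection cost are all assumptions I made up for illustration, and the conclusion flips if you change them enough. It just shows that merging with encountered allies (gaining their full resources) can outpace destroying them (salvaging only a fraction), even after paying a small overhead for protection and monitoring.

```python
# Toy sketch (not from the original discussion): super-cooperative alliance vs.
# selfish agent. All parameters below are illustrative assumptions.

import random

STEPS = 200
GROWTH = 1.02              # assumed baseline growth per step for any agent
ENCOUNTER_P = 0.05         # assumed chance of meeting a new civilization per step
NEW_CIV_POWER = 1.0        # assumed power of an encountered civilization
SALVAGE = 0.3              # selfish agent only salvages a fraction of what it destroys
COORDINATION_COST = 0.001  # assumed per-step overhead for egalitarian monitoring protocols
ISLAND_COST = 0.0005       # assumed per-step cost of protecting one "island of bloom"

def run(cooperative: bool, seed: int = 0) -> float:
    random.seed(seed)  # same seed so both strategies face the same encounters
    power = 1.0
    islands = 0
    for _ in range(STEPS):
        power *= GROWTH
        if random.random() < ENCOUNTER_P:
            if cooperative:
                # Merge: the ally's resources join the alliance, and it gets a protected island.
                power += NEW_CIV_POWER
                islands += 1
            else:
                # Destroy: only a fraction of the victim's resources is salvaged.
                power += SALVAGE * NEW_CIV_POWER
        if cooperative:
            # Pay the coordination and protection overheads.
            power -= power * COORDINATION_COST + islands * ISLAND_COST
    return power

print("super-cooperative:", run(cooperative=True))
print("selfish:          ", run(cooperative=False))
```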
The other point I have that might connect with your line of thinking is that we aren’t purely rational agents.
Are AIs purely rational? Aren’t they always at least a bit myopic, due to lack of data, their training process, and computational irreducibility?
In that case, AIs/civilizations might indeed not care enough about the sufficiently far future
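A small sketch of why this matters (the numbers are purely illustrative, not from the discussion): any per-period discount factor below 1 compounds, so even a slightly myopic planner puts an almost-zero weight on very distant payoffs and can rationally neglect super-cooperation’s long-term benefits.

```python
# Toy illustration (assumed numbers): how fast a discount factor erases the far future.

def discounted_weight(gamma: float, years: int) -> float:
    """Weight a planner with per-year discount factor `gamma` puts on a payoff `years` ahead."""
    return gamma ** years

for gamma in (0.99, 0.999):
    for years in (10, 100, 1000, 10000):
        print(f"gamma={gamma}, {years:>5} years ahead -> weight {discounted_weight(gamma, years):.2e}")
```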
I think agents can follow a rational process, but no agent can be entirely rational: we need context to be rational, and we never stop learning context
I’m also worried about utilitarian errors, as AIs might be biased towards a myopic utilitarianism, which could have bad consequences in the short term, during the time it takes for data to error-correct the model
I do say that there are dangers and that AI risk is real
My point is that, given what we know and don’t know, the strategy of super-cooperation seems rational over the very long term
There are conditions in which it’s not optimal, but a priori, it is optimal in more cases overall
To guard against the cases in which it is not optimal, and against AIs that would make short-term mistakes, I think we should be careful.
And super-cooperation is a good compass for ethics in the careful engineering we have to perform.
If we aren’t careful, it’s possible for us to become the anti-super-cooperative civilization.
Thank you for your answers and engagement!