The other point I have that might connect with your line of thinking is that we aren’t purely rational agents.
Are AIs purely rational?
Aren’t they always at least a bit myopic, given their lack of data and their training process? And computational irreducibility?
If so, AIs/civilizations might indeed not care enough about the sufficiently far future.
I think agents can follow a rational process, but no agent can be entirely rational: we need context to be rational, and we never stop learning context.
I’m also worried about utilitarian errors, as AI might be biased toward myopic utilitarianism, which could have bad consequences in the short term, during the time it takes for data to error-correct the model.
I do say that there are dangers and that AI risk is real.
My point is that, given what we know and don’t know, the strategy of super-cooperation seems rational over the very long term.
There are conditions under which it’s not optimal, but a priori it is optimal in more cases overall.
To guard against the cases where it is not optimal, and against AIs that would make short-term mistakes, I think we should be careful.
And super-cooperation is a good compass for ethics in this careful engineering we have to perform.
If we aren’t careful, it’s possible for us to become the anti-supercooperative civilization.
Thank you for your answers and engagement!