I think the question of whether agents can reliably develop niceness in suitable environments is a cornerstone of Shard Theory, Brain-like AGI, and related approaches. I don’t think Nate’s argument is water-tight; it carries a lot of uncertainty itself, e.g.,
> Presumably that reproductive effect factored through a greater ability to form alliances
hedges with “presumably”, and we don’t know how the greater ability to form alliances comes about; maybe via components of niceness. But I don’t want to argue these points; they are not strong enough to be wrong. I want empirical facts. Shut up and calculate! I think we are at a stage where the question can be settled with experiments. That is what the research agenda of the Brain-like AGI project “aintelope” calls for, and it is also, as I understand it, what the Shard Theory team is aiming at (with a different type of environment).