I haven’t fully understood all of your points, but they gloss as reasonable and good. Thank you for this high-effort, thoughtful comment!
(If anyone is interested in doing research on the evolution of prosociality vs. antisociality in humans and/or how these things might play out in AI training environments, I know people who would likely be interested in funding such work.)
I encourage applicants to also read Quintin's "Evolution is a bad analogy for AGI" (which I wish more people had read; I think it's quite important). I think that evolution-based analogies can easily go astray, for reasons pointed out in the essay. (It wasn't obvious to me that you went astray in your comment, to be clear—more noting this for other readers.)