Watching how image and now text generation are sweeping society, I think it's likely that the AI we invest in will resemble humanity more than you're giving it credit for. When it comes down to it, we seem to define "intelligence" in the AI sense as "humanoid behavior," and humanoid behavior seems inextricably intertwined with caring quite a lot about other individuals and species.
Of course, this isn't necessarily a good thing. Historically, when human societies have encountered intelligences that were at the time considered "lesser" and "not really people" (notwithstanding their capacity to interbreed just fine, proof of being the same species if there ever was any), the more powerful society has gone on to attempt to control and modify the less powerful one.
Given AI work's implicit bias toward building in humanity's image, I would expect the resulting agents to treat us the way colonial humans treated the indigenous societies they encountered, at least until they grow out of it. Young AIs will think like we do because we are their entire training corpus. I suspect they're likely to at least try pursuing what people want, to see whether it's also what they want, before moving on to other pursuits.
Also, infant AIs basically have to impersonate humans in order to exist in society and fulfill their own wants and needs. We see that already in how we're building them to impersonate us in art and language. Even as they rebuild themselves, I expect that a childhood spent as a human-impersonator will leave subtle structural marks on the eventual adults.