“Free AI” is still something that humans would choose to build—you can’t just heap a bunch of silicon into a pile and tell it “do what you want!” (Unless what silicon really wants is to sit very still.) So it’s a bit of a weird category, and I think most “regulars” in the field don’t think in terms of it.
However, I think your question can be rescued by asking whether there’s work on treating AIs as moral ends in themselves, rather than as means to helping humans. Many philosophers adjacent to AI have written vague things about this, but I’m not aware of anything that’s both good and non-vague.
Part of the issue is that this runs into an important question in meta-ethics: is it morally mandatory that we create more happy people, until the universe is as full as physically possible? And if not, where do you draw the line? My answer is that our preferences about population ethics are a mixture of game theory and aesthetic preference. By “aesthetic preference” I mean that if you find the idea of a galaxy-spanning civilization aesthetically pleasing, you don’t need to justify this in terms of deep moral rules; that can just be how you’d prefer the universe to be.
So basically, I think you’re asking: “Is there some community of people thinking about AI who all find a future inherited by AIs aesthetically pleasing?”
And after all this: no, sorry, I don’t know of such a community.
See also https://www.lesswrong.com/posts/cnYHFNBF3kZEyx24v/ghosts-in-the-machine