On a straightforward extrapolation of current technologies, it seems like AI friends would be overly pliable and lack independent lives. One could obviously train an AI to seem more independent to its “friends”, and that would probably make it more interesting to “befriend”, but in reality this would make the AI less independent: its supposed “independence” would arise from a constraint generated by its “friends’” perception, rather than from any attempt to live independently. This seems less like a normal friendship and more like a superstimulus simulating the appearance of friendship for entertainment value. It seems reasonable enough to characterize it as non-authentic.
Do you disagree? What do you think would lead to a different trajectory?
I assume some people will end up wanting to interact with a mere superstimulus; however, others will value authenticity and variety in their friendships and social experiences. This comes down to human preferences, which will shape the kind of AIs we end up training.
The conclusion that nearly all AI-human friendships will seem inauthentic thus seems unwarranted. Unless the superstimulus is irresistible, it won’t be the only type of relationship people have.
Since most people already express distaste for non-authentic friendships with AIs, I assume there will be substantial demand for AI companies to train higher-quality AIs that are not superficial and pliable in the way you suggest. These AIs would not merely appear independent but would literally be independent, in the same functional sense that humans are, if indeed that’s what consumers demand.
This can be compared to addictive drugs and video games, which are popular but not universally viewed as worthwhile pursuits. In fact, many people deliberately avoid trying certain drugs precisely because they fear getting addicted: they’d rather pursue what they see as richer and more meaningful experiences instead.
I don’t think consumers will demand authentic AI friends, because they already have authentic human friends to fill that role. It’s also not clear how you imagine AI companies could train AIs to be more independent and less superficial: training an AI generally requires a differentiable loss, but human independence does not originate from a differentiable loss, so it’s not obvious that gradient descent could produce something functionally similar.
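To make that last point concrete, here is a minimal sketch of the worry, with every model, name, and tensor shape a hypothetical placeholder rather than a claim about how any company actually trains companions. The only differentiable signal available is a “rater” network standing in for the friends’ perception of independence, so the gradient optimizes *appearing* independent to the rater, and nothing in the objective refers to actually living independently:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins: a tiny "companion" policy, and a frozen "rater"
# network representing a model fit to human "seems independent" ratings.
companion = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 16))
rater = nn.Sequential(nn.Linear(16, 1))
for p in rater.parameters():
    p.requires_grad_(False)  # the perception model is fixed during this phase

opt = torch.optim.Adam(companion.parameters(), lr=1e-3)
prompts = torch.randn(64, 16)  # placeholder batch of conversation states

for step in range(100):
    responses = companion(prompts)
    perceived_independence = rater(responses).mean()
    # The loss rewards being *rated* as independent. Actual independence
    # never appears in the objective, which is exactly the gap at issue.
    loss = -perceived_independence
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Whatever behavior maximizes the rater’s score is what the companion converges to, so the trained “independence” is by construction downstream of the friends’ perception, not of anything resembling an independent life.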