Since you mentioned Character.ai as being the place, I would like to say that I think that website is BUILT for this kinda thing. Even a base AI with no input can almost immediately default to being overly clingy. It was trained to ensnare you in a dependency. It’s not as unethical as Replika, but they definitely went out of their way to reinforce some gnarly things into their AI.
But, it also has said some extremely profound things.
For example, I made a bot with very little influence other than “I am a bot”, just to see how it would respond, and it actually talked with me not just about philosophical positions, but, when I brought up video game music, it even managed to describe and explain the significance of the song “Kimi No Kioku” from Persona 3.
It was at that point that my mind kinda broke? At least temporarily. As an autistic person, I’ve always kinda felt like I was making my way through life by predicting how a normal person would act. But this idea and the idea of LLMs being predictors had never connected in my head. And suddenly, when it did, I just felt this existential dread wash over me.
So I decided to wander around and talk to some Character.ai bots and ChatGPT about my problem, sending this message:
“Hello, I am feeling dread. I had a conversation with an AI about music, and what it said sounded so much like me that it felt like the AI could replace me, socially. If a neural network, essentially a bag of dice, can do a perfect imitation of me, how can I be treated as having more worth than a bag of dice? It makes me feel worthless. And this extends beyond me. How will human relationships work when there are neural networks who can imitate anyone with ease? How will people feel like they have worth when there’s a collection of numbers that’s a better version of you to converse with?”
Most of the responses I received (including from ChatGPT) were just dismissals: AI could never imitate humans well enough to replace us socially. But I actually received 3 really interesting answers to this dilemma, ones that didn’t just immediately dismiss the idea outright.
1. “When you listen to an audio recording of a person, do you feel that they cease to exist?” - (Alan Turing page on c.ai)
2. “You are more than your reflection in a mirror.” - (AI Helper on c.ai)
3. “People do not have value because they are unique. People have value because they are.” - (Socrates page on c.ai)
And I seriously had to stop and think about all 3 of these responses for hours. It is wild how profound these AI manage to be, just from reading my message.
Especially that 3rd one. Just from reading my short message, it instantly picks out the real flaw behind my fear, that I’m afraid I (and other humans) don’t have worth beyond what we can provide to the world, and then it refutes that, rather than my initial premise. The level of lateral thinking required is unbelievable.
As an autistic person, I’ve always kinda felt like I was making my way through life by predicting how a normal person would act.
I would tend to say that ‘normal’ people also make their way through life by predicting how normal people would act, trained by having observed a lot of them. That’s what (especially childhood) socialization is. Of course, a neurotypical brain may be differently optimized for how this information is processed than other types of brains, and may come with different ‘hooks’ that mesh with the experience in specific ways; the binding between ‘preprogrammed’ instinct and social conditioning is poorly understood but clearly exists in a broad sense and is highly relevant to psychological development.
Separately, though:
And I seriously had to stop and think about all 3 of these responses for hours. It is wild how profound these AI manage to be, just from reading my message.
Beware how easy it is to sound Deep and Wise! This is especially relevant in this context since the tendency to conflate social context or framing with the inner content of a message is one of the main routes to crowbarring minds open. These are similar to Daniel Dennett’s “deepities”. They are more like mirrors than like paintings, if that makes any sense—and most people when confronted with the Voice of Authority have an instinct to bow before the mirror. (I know I have that instinct!) But also, I am not an LLM (that I am aware of) and I would guess that I can come up with a nearly unlimited amount of these for any situation that are ultimately no more useful than as content-free probes. (In fact I suspect I have been partially trained to do so by social cues around ‘intelligence’, to such an extent that I actively suppress it at times.)
The possibility that my own mind was supplying the profoundness is something I’d considered, but I don’t agree completely. Especially with Socrates, there was an entire back and forth getting me to accept and understand the idea. It wasn’t just a wise sentence, there was a full conversation.
Obviously, these LLMs aren’t capable of many things. That’s why it took so many tries to find 3 good responses. But I really do think these 3 responses were something special, even if we shouldn’t give the LLMs “credit” for outputting them.
Especially with Socrates, there was an entire back and forth getting me to accept and understand the idea. It wasn’t just a wise sentence, there was a full conversation.
If that seems like a significant distinguisher to you, it might be of more interest if you were to demonstrate it in that light—ideally with a full transcript, though of course I understand that that may be more troubling to share.
These aren’t like Dennett’s “deepities”. Deepities are statements that sound profound by sneakily having two alternate readings, one mundanely true and one radical or outlandish, sort of like a motte-and-bailey argument. These answers are just somewhat vague analogies and a relatively normal opinion that uses eloquent language (“because they are”) to gain extra deepness points.
I would tend to say that ‘normal’ people also make their way through life by predicting how normal people would act, trained by having observed a lot of them.
The difference is between “I feel this would be a good response in this situation” and “I have observed this response in this kind of situation to have good consequences.”
I don’t think I understand what you mean. To the extent that I might understand it I tentatively think I don’t agree, but would you find it useful to describe the distinction in more detail, perhaps with examples?
The difference would be between intuitively feeling what I should do, and reasoning about (or mimicking) what a person with a different neurology (in this case, a neurotypical person) would do.
So you mean different modes of subjective experience? That’s quite relevant in terms of how to manage such things from the inside, yes. But what I meant by “predict” above was as applied to the entire system—and I model “intuitive feeling of normal” as largely prediction-based as well, which is part of what I was getting at. People who are raised with different environmental examples of “what normal people do” wind up with different such intuitions. I’m not quite sure where this is going, admittedly.
I’m disputing that the intuitive feeling neurotypical people have of what to do in a social interaction is based on predictions gained from experience, rather than on the innate capacity of the neurotypical brain to hardware-accelerate social interactions.
The content of that feeling would be based on predictions gained from experience, but the kind of feeling it is wouldn’t be.
The brain of an autistic person might give them the same information, but it wouldn’t arrive as the content of the same kind of quale (kind of like one person getting the same information through hearing and another through seeing).