High level discourse structure in ChatGPT: Part 2 [Quasi-symbolic?]

In my previous post, Of pumpkins, the Falcon Heavy, and Groucho Marx, I examined a simple pattern of alternation in ChatGPT’s text output. I’ve decided to continue that exploration. What interests me is the level of “control” that ChatGPT has over its output.
Here I ask it to produce a simple dialog between Dick and Jane:
Imagine that Dick and Jane are having a conversation about cats and dogs. Let’s have 6 conversational turns.
Dick: Hey Jane, do you like cats or dogs? Jane: I’m more of a dog person myself. Dick: Really? I prefer cats. They’re more independent and low maintenance than dogs. Jane: I guess that’s true, but I love the loyalty and affection that dogs give. Dick: That’s a good point. I do love how affectionate cats can be too, though. Jane: Yeah, it’s hard to choose between the two. I think both cats and dogs make great pets. Dick: Definitely. It really just comes down to personal preference.
[second try.]
Dick: Hey Jane, do you have any pets? Jane: Yes, I have a cat and a dog. Dick: That’s cool. I’ve always wanted a cat, but my parents never let me have one. Jane: Cats are great pets. They’re very independent and low-maintenance. Dick: I’ve heard that cats and dogs don’t always get along though. Jane: That’s true, but if you raise them together from a young age, they can learn to coexist peacefully. Dick: I see. Well, maybe someday I’ll have a cat of my own. Thanks for the advice, Jane.
Note that I did not expect it to begin and end each dialog with the same speaker. I’ll ask about that later. As for the six turns I asked for, it gave me seven. Am I to think of that as “6 + 1”? I don’t know. I decide not to worry about the numbers. We go on:
Good. Let’s continue. Begin the conversation with Jane this time.
Jane: Hey Dick, have you ever seen a cat play with a dog? Dick: No, I haven’t. I didn’t know that was even possible. Jane: Oh yeah, it’s actually pretty common. My cat and dog love to play together. Dick: Really? That’s so cool. I’ve never seen a cat and a dog play together before. Jane: Yeah, it’s really cute. The cat will bat around a toy and the dog will try to chase after it. Dick: That sounds like a lot of fun. I’m glad your pets get along so well. Jane: Me too. It’s nice to have both a cat and a dog in the house. They each have their own unique personalities and they keep each other entertained.
That’s good. Now begin with Jane and end with Dick.
Jane: Hey Dick, do you have any pets? Dick: No, I don’t. My parents never let me have any pets growing up. Jane: That’s too bad. Pets can be great companions. Dick: I know. I’ve always wanted a dog, but it never worked out. Jane: Dogs are great pets. They’re loyal and affectionate, and they can be trained to do all sorts of tricks. Dick: Yeah, I’ve always thought that would be really cool. Maybe someday I’ll get a dog of my own. Jane: I’m sure you will. Just make sure you’re ready to take on the responsibility of owning a pet before you get one. Dick: Thanks for the advice, Jane. I’ll definitely keep that in mind.
Notice that the second one has eight conversational turns, whereas the one immediately before had only seven. While I decided not to explicitly investigate numbers, that doesn’t mean I’m not paying attention. Also note how formulaic these conversations are; stylistically one is much like the other. There’s nothing wrong with that. In a way, I’d expect that as the default. It reminds me a bit of the practices of oral poets, which I discuss briefly in GPT-3, the phrasal lexicon, Parry/Lord, and the Homeric epics.
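As an aside, the turn counting I’m doing by eye is easy to mechanize. Here’s a minimal sketch in Python (the three-turn dialog is just an abbreviated stand-in for ChatGPT’s actual output) that splits a transcript on the speaker labels and reports the number of turns plus the opening and closing speakers:

```python
import re

def turn_stats(transcript, speakers=("Dick", "Jane")):
    """Count turns in a 'Name: utterance' transcript and note who opens and closes."""
    # Splitting on a capturing group keeps the speaker names in the result:
    # [preamble, name, utterance, name, utterance, ...]
    parts = re.split(r"\b(%s):" % "|".join(speakers), transcript)
    turns = [(parts[i], parts[i + 1].strip()) for i in range(1, len(parts) - 1, 2)]
    return len(turns), turns[0][0], turns[-1][0]

# An abbreviated stand-in for the first dialog above.
dialog = ("Dick: Hey Jane, do you like cats or dogs? "
          "Jane: I'm more of a dog person myself. "
          "Dick: Really? I prefer cats.")

n_turns, opener, closer = turn_stats(dialog)
print(n_turns, opener, closer)  # 3 Dick Dick
```

Run on the full transcripts above, the same function confirms the counts I report: seven turns, then eight.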
Back to ChatGPT:
Good. Now Dick is sad and Jane is trying to cheer him up.
Jane: Hey Dick, I’m sorry to hear about your dog. Dick: Thanks, Jane. I’m really sad that he’s gone. Jane: I know it’s hard to lose a pet. They become such an important part of our lives. Dick: Yeah, it feels like a part of me is missing now that he’s gone. Jane: I understand. But remember, your dog is in a better place now. He’s not in pain or suffering anymore. Dick: I know you’re right. It’s just hard to accept. Jane: Maybe it would help to remember all the happy times you had with your dog. Think about all the good memories you shared and how much joy he brought to your life. Dick: You’re right, Jane. Thank you for trying to cheer me up. It means a lot to me. Jane: Of course, Dick. I’m here for you. Let me know if you need anything.
I decide to ask about its conversational practice:
In many of these conversations, you began and ended with the same person. Why?
[second try. I forgot to copy the first.]
I began and ended with the same person in many of these conversations because that is the typical structure of a conversation. In a conversation, one person typically speaks, and then the other person responds. This back-and-forth exchange continues until the conversation comes to an end. Beginning and ending with the same person helps to establish the flow of the conversation and makes it feel more natural.
[third try]
In a conversation, it is common for one person to begin and end a conversation, especially if they initiate the conversation in the first place. This can help to create a natural flow to the conversation and ensure that both participants have an equal opportunity to speak and share their thoughts. Additionally, beginning and ending a conversation with the same person can help to establish a sense of closure and completion to the conversation.
I didn’t have any specific expectations about how it would answer, but I found nothing surprising about this, and, to be honest, I’d have been a bit disappointed if ChatGPT had been nonresponsive. I especially like the phrase, “especially if they initiate the conversation in the first place.” There must have been texts about discourse practices and conventions in the training corpus, and that kind of information would have been in those texts. I suspect, moreover, that when most humans make such assertions, they do so because they learned them somewhere, rather than because they’ve deduced them from their experience as conversation partners. Why should a chatbot be any different?
What’s important is simply that it can “reflect” on its own discourse. Roman Jakobson called this capacity the metalingual function. How sophisticated, if I may inquire, is ChatGPT’s metalingual function?
Now things get interesting:
Now, take the last conversation and embed it within a short story.
Jane walked up to Dick’s front door and knocked softly. She could hear him moving around inside and after a moment, he opened the door. His eyes were red and puffy, and she could tell he had been crying.
“Hey, Dick. I’m so sorry about your dog,” Jane said, giving him a hug.
“Thanks, Jane. I’m really sad that he’s gone,” Dick replied, his voice choked with emotion.
“I know it’s hard to lose a pet. They become such an important part of our lives,” Jane said, trying to offer some comfort.
“Yeah, it feels like a part of me is missing now that he’s gone,” Dick said, his shoulders slumping.
“I understand. But remember, your dog is in a better place now. He’s not in pain or suffering anymore,” Jane said, hoping to provide some solace.
“I know you’re right. It’s just hard to accept,” Dick said, sighing.
“Maybe it would help to remember all the happy times you had with your dog. Think about all the good memories you shared and how much joy he brought to your life,” Jane suggested.
Dick thought for a moment, then nodded. “You’re right, Jane. Thank you for trying to cheer me up. It means a lot to me,” he said, giving her a small smile.
“Of course, Dick. I’m here for you. Let me know if you need anything,” Jane said, giving him another hug before turning to leave.
As she walked back to her car, Jane couldn’t help but feel a sense of sadness for her friend. Losing a pet was never easy, but she was glad she could be there for him during this difficult time.
I was expecting it to add the frame structure, as I’d seen that before (recall the last example in my previous post, Of pumpkins, the Falcon Heavy, and Groucho Marx). But I wasn’t expecting it to be so clever about how it handled the dialog itself. It has eliminated the (awkward) practice of prefacing each statement with the speaker’s name. Instead, each utterance either precedes (paragraphs 1–7) or follows (paragraph 9) the framing assertion. Notice what it did in paragraph 8, where it is Dick’s turn to speak: it makes a simple assertion ending with a period, then gives Dick’s utterance, followed by a qualifying phrase.
As I argued previously, this seems to be clear evidence that ChatGPT has been able to induce high-level discourse structure that allows it to treat blocks of text as “fillers” that go into appropriate “slots.” This is characteristic of symbolic systems. Note that I do not mean to imply that ChatGPT has anything like mastery of symbolic communication, but it does seem to have picked up one aspect of it.
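To make the slot-and-filler idea concrete, here is a toy sketch of my own (an illustration of the concept, not a claim about how ChatGPT actually works): each narrative frame is a template with speaker and utterance slots, and the dialog lines are fillers dropped into those slots, with the framing clause either preceding or following the quoted speech.

```python
# Toy illustration of slots and fillers; not a model of ChatGPT's internals.
# Each frame is a template with {speaker} and {utterance} slots; the
# (speaker, utterance) pairs from a dialog are the fillers.
FRAMES = [
    '"{utterance}" {speaker} said.',    # framing clause follows the utterance
    '{speaker} sighed. "{utterance}"',  # framing clause precedes the utterance
]

def embed(dialog, frames=FRAMES):
    """Drop each (speaker, utterance) filler into an alternating narrative frame."""
    story = []
    for i, (speaker, utterance) in enumerate(dialog):
        frame = frames[i % len(frames)]  # alternate which frame style is used
        story.append(frame.format(speaker=speaker, utterance=utterance))
    return " ".join(story)

dialog = [
    ("Jane", "I'm so sorry about your dog."),
    ("Dick", "Thanks, Jane. It's just hard to accept."),
]
print(embed(dialog))
```

ChatGPT’s version is, of course, far more supple — it varies the framing clauses and adapts them to the emotional content — but the underlying slot-filler relation is the same.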
Finally, I just had to ask it what it knew about Jakobson’s metalingual function. So I logged on and put the question directly: “What did Roman Jakobson mean by the metalingual function?” After pausing for tens of seconds or more, it replied (in red): “Hmm...something seems to have gone wrong. Maybe try me again in a little bit.” I interpret that as an error message meaning that it had timed out. I made another attempt, which also failed. So:
OK. As Arnold said, “I’ll be back.”
Yes, the quote “I’ll be back” is a famous line said by the character played by Arnold Schwarzenegger in the 1984 movie “The Terminator”. In the film, the character utters the line before leaving to carry out a mission, and it has since become a catchphrase associated with Schwarzenegger and the Terminator franchise.
’Til next time.