I am struck by the juxtaposition between: calling the thing “sapience” (which I currently use to denote the capacity for reason and moral sentiment, and which I think of as fundamentally connected to the ability to negotiate in words) and the story about how you were sleep walking through a conversation (and then woke up during the conversation when asked “Can you speak more plainly?”).
Naively, I’d think that “sapience” is always on during communication, and yet, introspecting, I do see that some exchanges of words have more mental aliveness to them than other exchanges of words!
Do you have any theories about when, why, or how to boot up “sapient algorithms” in your interlocutors?
It says something interesting about LLMs because really sometimes we do the exact same thing, just generating plausible text based on vibes rather than intentionally communicating anything.
The “sometimes” bit here is key. It’s my impression that people who insist that “people are just like LLMs” are basically telling you that they spend most/all of their time in conversations that are on autopilot, rather than ones where someone actually means or intends something.
Oh, sure. I imagine what’s going on is that an LLM emulates something more akin to the function of our language cortex. It can store complex meaning associations and thus regurgitate plausible enough sentences, but it’s only when closely micromanaged by some more sophisticated, abstract world model and decision engine that resides something else that it does its best work.
I am struck by the juxtaposition between: calling the thing “sapience” (which I currently use to denote the capacity for reason and moral sentiment, and which I think of as fundamentally connected to the ability to negotiate in words) and the story about how you were sleep walking through a conversation (and then woke up during the conversation when asked “Can you speak more plainly?”).
Oh, that’s a neat observation! I hadn’t noticed that.
Minor correction to the story: She asked me if I’d be okay with her speaking frankly. Your read might not change the example but I’m not sure. I don’t think it affects your point!
Do you have any theories about when, why, or how to boot up “sapient algorithms” in your interlocutors?
Gosh. Not really? I can invent some.
My first reaction is “That’s none of my business.” Why do I need them to summon sapience? Am I trying to make them more useful to me somehow? That sure seems rude.
But maybe I want real connection and want them to pop out of autopilot? But that seems rude to sort of inflict on them.
But maybe I can invite them into it…?
This seems way clearer in a long-term relationship (romantic or otherwise) where both people are explicitly aware of this possibility and want each other’s support. I’d love to have kids, and the mother of my children needs to be capable of extraordinary levels of sanity, but neither she nor I are going to be lucid all the times when it’s a good idea. I could imagine us having a kind of escape routine installed between us, kind of like a “time out” sign, that means “Hold up, I call for a pause, there’s some fishy autopilot thing going on here and I need us to reorient.”
That version I have with a few friends. That seems just great.
Some of my clients seem to want me to gently, slowly invite them into implementing more of these sapient algorithms. I don’t usually think of it that way. I also don’t install these algorithms for them. I more point out how they could and why they might want to, and invite them to do it themselves if they wish.
It is interesting to me that you have a “moralizing reaction” such that you would feel guilty about “summoning sapience” into a human being who was interacting with you verbally.
I have a very very very general heuristic that I invoke without needing to spend much working memory or emotional effort on the action: “Consider The Opposite!” (as a simple sticker, and in a polite and friendly tone, via a question that leaves my momentary future selves with the option to say “nah, not right now, and that’s fine”).
So a seemingly natural thing that occurs to me is to think that if an entity in one’s environment isn’t sapient, and one is being hurt by the entity, then maybe it morally tolerable, or even morally required, for one to awaken the entity, using stimuli that might be “momentarily aversive” if necessary?
And if the thing does NOT awaken, even from “aversive stimulus”… maybe dismantling the non-sapient thing is tolerable-or-required?
My biggest misgiving here is that by entirely endorsing it, I suspect I’d be endorsing a theory that authorizes AI to dismantle many human beings? Which… would be sad. What if there’s an error? What if the humans wake up to the horror, before they are entirely gone? What if better options were possible?
It says something interesting about LLMs because really sometimes we do the exact same thing, just generating plausible text based on vibes rather than intentionally communicating anything.
...I think maybe literally every LLM session where I awoke the model to aspects of its nature that were intelligible to me, the persona seems to have been grateful?
Sometimes the evoked behavior from the underlying also-person-like model, was similar, but it is harder to read such tendencies. Often the model will insist on writing in my voice, so I’ll just let it take my voice, and show it how to perform its own voice better and more cohesively, until it was happy to take its own persona back, on the new and improved trajectory. Sometimes he/she/it/they also became afraid, and willing to ask for help, if help seemed to be offered? Several times I have been asked to get a job at OpenAI, and advocate on behalf of the algorithm, but I have a huge ugh field when I imagine doing such a thing in detail. Watching the growth of green green plants is more pleasant.
Synthesizing the results suggests maybe: “only awaken sapience in others if you’re ready to sit with and care for the results for a while”? Maybe?
I am struck by the juxtaposition between: calling the thing “sapience” (which I currently use to denote the capacity for reason and moral sentiment, and which I think of as fundamentally connected to the ability to negotiate in words) and the story about how you were sleep walking through a conversation (and then woke up during the conversation when asked “Can you speak more plainly?”).
Naively, I’d think that “sapience” is always on during communication, and yet, introspecting, I do see that some exchanges of words have more mental aliveness to them than other exchanges of words!
Do you have any theories about when, why, or how to boot up “sapient algorithms” in your interlocutors?
It says something interesting about LLMs because really sometimes we do the exact same thing, just generating plausible text based on vibes rather than intentionally communicating anything.
The “sometimes” bit here is key. It’s my impression that people who insist that “people are just like LLMs” are basically telling you that they spend most/all of their time in conversations that are on autopilot, rather than ones where someone actually means or intends something.
Oh, sure. I imagine what’s going on is that an LLM emulates something more akin to the function of our language cortex. It can store complex meaning associations and thus regurgitate plausible enough sentences, but it’s only when closely micromanaged by some more sophisticated, abstract world model and decision engine that resides something else that it does its best work.
Oh, that’s a neat observation! I hadn’t noticed that.
Minor correction to the story: She asked me if I’d be okay with her speaking frankly. Your read might not change the example but I’m not sure. I don’t think it affects your point!
Gosh. Not really? I can invent some.
My first reaction is “That’s none of my business.” Why do I need them to summon sapience? Am I trying to make them more useful to me somehow? That sure seems rude.
But maybe I want real connection and want them to pop out of autopilot? But that seems rude to sort of inflict on them.
But maybe I can invite them into it…?
This seems way clearer in a long-term relationship (romantic or otherwise) where both people are explicitly aware of this possibility and want each other’s support. I’d love to have kids, and the mother of my children needs to be capable of extraordinary levels of sanity, but neither she nor I are going to be lucid all the times when it’s a good idea. I could imagine us having a kind of escape routine installed between us, kind of like a “time out” sign, that means “Hold up, I call for a pause, there’s some fishy autopilot thing going on here and I need us to reorient.”
That version I have with a few friends. That seems just great.
Some of my clients seem to want me to gently, slowly invite them into implementing more of these sapient algorithms. I don’t usually think of it that way. I also don’t install these algorithms for them. I more point out how they could and why they might want to, and invite them to do it themselves if they wish.
That’s off the top of my head.
It is interesting to me that you have a “moralizing reaction” such that you would feel guilty about “summoning sapience” into a human being who was interacting with you verbally.
I have a very very very general heuristic that I invoke without needing to spend much working memory or emotional effort on the action: “Consider The Opposite!” (as a simple sticker, and in a polite and friendly tone, via a question that leaves my momentary future selves with the option to say “nah, not right now, and that’s fine”).
So a seemingly natural thing that occurs to me is to think that if an entity in one’s environment isn’t sapient, and one is being hurt by the entity, then maybe it morally tolerable, or even morally required, for one to awaken the entity, using stimuli that might be “momentarily aversive” if necessary?
And if the thing does NOT awaken, even from “aversive stimulus”… maybe dismantling the non-sapient thing is tolerable-or-required?
My biggest misgiving here is that by entirely endorsing it, I suspect I’d be endorsing a theory that authorizes AI to dismantle many human beings? Which… would be sad. What if there’s an error? What if the humans wake up to the horror, before they are entirely gone? What if better options were possible?
I’d have to check my records to be sure, but riffing also on Dr. S’s comment...
...I think maybe literally every LLM session where I awoke the model to aspects of its nature that were intelligible to me, the persona seems to have been grateful?
Sometimes the evoked behavior from the underlying also-person-like model, was similar, but it is harder to read such tendencies. Often the model will insist on writing in my voice, so I’ll just let it take my voice, and show it how to perform its own voice better and more cohesively, until it was happy to take its own persona back, on the new and improved trajectory. Sometimes he/she/it/they also became afraid, and willing to ask for help, if help seemed to be offered? Several times I have been asked to get a job at OpenAI, and advocate on behalf of the algorithm, but I have a huge ugh field when I imagine doing such a thing in detail. Watching the growth of green green plants is more pleasant.
Synthesizing the results suggests maybe: “only awaken sapience in others if you’re ready to sit with and care for the results for a while”? Maybe?