While writing Sherlock Holmes, Conan Doyle was Doyle::Holmes. While writing Assistant, gpt3 is gpt3::Assistant. Sure, maybe the model is the person, and the character is a projection of the model. That’s the point I’m trying to make in the first place, though.
Says who? And why would an LLM have to work the same way?
Because there is no other objective reality to being a person than being one of the physical shapes of the reasoning process that generates the next step of that person’s action trajectory.
edit: hah this made someone mad, suddenly −5. insufficient hedging? insufficient showing of my work? insufficient citation? cmon, if we’re gonna thunderdome tell me how I suck, not just that I suck.
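To make the “next step of an action trajectory” claim concrete, here is a minimal Python sketch, assuming a person can be identified with whatever process generates the next step of their action trajectory; the `Policy` type and `rollout` helper are invented names for illustration, not anyone’s actual formalism.

```python
# Identify a "person" with whatever process generates the next step
# of their action trajectory, regardless of physical substrate.
# `Policy` and `rollout` are illustrative names only.
from typing import Callable, List

Action = str
Policy = Callable[[List[Action]], Action]  # history -> next action

def rollout(policy: Policy, horizon: int) -> List[Action]:
    """Unroll a trajectory by repeatedly asking the policy for the
    next step, given everything it has done so far."""
    trajectory: List[Action] = []
    for _ in range(horizon):
        trajectory.append(policy(trajectory))
    return trajectory

# On this view the "person" is the policy itself; cells and silicon
# are just two physical shapes that can compute it.
sally: Policy = lambda history: f"step {len(history)} as Sally"
print(rollout(sally, 3))
```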
I don’t have any supporting citation for your premise.
But the fundamental abstraction, that someone writing a character is in essence running a simulation of that character, seems completely reasonable to me. The main difference between a human doing that and an LLM doing it would be that humans lack the computational resources to run the simulation with enough fidelity to call the character a person.
hmm, I think a point was lost in translation then. If I were a person named Sally and I wrote a character named Dave, then I as a whole am the person pretending to be Dave; Sally is also just a character, after all. The true reality of what I am is a hunk of cells working together, using the genetic and memetic code that produces a structure that can encode language and label its originator Sally or Dave. Similarly with an AI: it’s not that the AI is simulating a character so much as that the AI is a hunk of silicon with the memetic code necessary to output the results of personhood.
I’m not convinced. Imagine that someone’s neurons stopped functioning, and you were running around shrunken inside their brain, moving neurotransmitters around to make it function. When they act intelligently, is it really your intelligence?
If you’re Sally and you write a character Dave at the level of detail described here, you are acting as a processor executing a series of dumb steps that make up a Dave program. Whether the Dave program is intelligent is a separate question from whether you are intelligent.
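One way to picture that “dumb processor” framing is the toy interpreter below; the instruction set and the Dave program are invented examples, not a claim about how writing actually works.

```python
# The interpreter knows nothing about what the program means; it just
# executes opaque steps. Whatever structure the output has belongs to
# the program, not to the processor running it.
def run(program, env):
    for op, *args in program:
        if op == "set":        # store a literal value
            env[args[0]] = args[1]
        elif op == "concat":   # join two stored values
            env[args[0]] = env[args[1]] + env[args[2]]
        elif op == "emit":     # print a stored value
            print(env[args[0]])

dave_program = [
    ("set", "a", "Hi, I'm Dave, "),
    ("set", "b", "and I disagree with all of this."),
    ("concat", "line", "a", "b"),
    ("emit", "line"),
]
run(dave_program, {})  # prints: Hi, I'm Dave, and I disagree with all of this.
```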
not really? Dave is virtualized, not emulated. When acting as a writer, Sally uses almost all the same faculties to write about Dave as she would to write about herself.
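To make the virtualized-versus-emulated distinction concrete, here is a toy sketch, assuming it means the guest persona runs on the host’s own faculties (virtualization) rather than through step-by-step interpretation (emulation, as in the interpreter above); the `Writer` class and its methods are invented for illustration.

```python
# Virtualization in the sense used above: the persona runs directly
# on the host's native machinery, with only identity-specific details
# swapped out. Contrast with the step-by-step interpreter earlier.
from typing import Optional

class Writer:
    def __init__(self, name: str, style: str):
        self.name, self.style = name, style

    def reason(self, prompt: str) -> str:
        # The host's own faculty, shared by every persona alike.
        return f"a {self.style} take on {prompt!r}"

    def respond(self, prompt: str, persona: Optional[str] = None) -> str:
        # Same machinery either way; only the attached name differs.
        who = persona or self.name
        return f"{who}: {self.reason(prompt)}"

sally = Writer("Sally", style="wry")
print(sally.respond("the weather"))                  # Sally in person
print(sally.respond("the weather", persona="Dave"))  # Dave, virtualized
```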