Fantastic. Three days later this comment is still sinking in.
So there’s a type with two known subtypes: Homo sapiens and GPT. This type is characterized by a mode of intelligence built on self-supervised learning (SSL) over, and behavior generation within, an evolving linguistic corpus that instances interact with both as consumers and producers. Entities of this type learn and continuously update a “semantic physics”, infer machine types for generative behaviors governed by that physics, and instantiate machines of the learned types to generate behavior. Collectively the physics and the machine types form your ever-evolving cursed/cyberpunk disembodied semantic layer. For both of the known subtypes, the sets of possible machines are unknown, but they appear to be exceedingly rich and deep, and to include not only simple pattern-level behaviors but also much more complex things, up to and including at least some of the named AI paradigms we know, and very probably more that we don’t. In both of the known subtypes, an initial consume-only phase does a lot of learning before externally observable generative behavior begins.
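To make the type structure concrete, here’s a minimal Python sketch of how I’m picturing it (every class and method name below is my own invention, purely illustrative):

```python
from abc import ABC, abstractmethod

class SemanticLearner(ABC):
    """Hypothetical supertype for the two known subtypes: learns a
    'semantic physics' via SSL over a corpus it both consumes and
    produces, and instantiates generative machines of learned types."""

    @abstractmethod
    def update_semantic_physics(self, corpus_chunk: str) -> None:
        """Consume corpus text; continuously refine the semantic layer."""

    @abstractmethod
    def instantiate_machine(self, machine_type: str) -> object:
        """Instantiate a generative machine (e.g. a simulacrum)."""

class HomoSapiens(SemanticLearner):
    def update_semantic_physics(self, corpus_chunk: str) -> None:
        pass  # lifelong learning, as both consumer and producer

    def instantiate_machine(self, machine_type: str) -> object:
        return f"embodied {machine_type}"

class GPT(SemanticLearner):
    def update_semantic_physics(self, corpus_chunk: str) -> None:
        pass  # consume-only SSL phase before any generative behavior

    def instantiate_machine(self, machine_type: str) -> object:
        return f"prompted {machine_type}"
```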
We’re used to emphasizing the consumer/producer phase when discussing learning in the context of Homo sapiens, but the consume-only phase in the context of GPT; this tends to obscure some of the commonality between the two. We tend to characterize GPT’s behavior as prediction and our own as independent action, but there’s no sharp line there: we humans complete each other’s sentences, and one of GPT’s favorite pastimes is I-and-you interview mode. Much recent neuroscience emphasizes the roles of prediction and generating hypothetical futures in human cognition. There’s no reason to assume humans use a GPT implementation, but it’s striking that we’ve been struggling for centuries to comprehend just what we do do in this regard, and especially what we suspect to be the essential role of language, and now we have one concrete model for how that can work.
If I’ve been following correctly, the two branches of your duality center around (1) the semantic layer, and (2) the instantiated generative machines. If this is correct, I don’t think there’s a naming problem around branch 2. Some important/interesting examples of the generative machines are Simulacra, and that’s a great name for them. Some have other names we know. And some, most likely, we have no names for, but we’re not in a position to worry about that until we know more about the machines themselves.
Branch 1 is about the distinguishing features of the Homo sapiens / GPT supertype: the ability to learn the semantic layer via SSL over a language corpus, and the ability to express behavior by instantiating the learned semantic layer’s machines. It’s worth mentioning that the language must be capable of bearing, and the corpus must actually bear, a human-civilization-class semantic load (or better). That doesn’t inherently mean a natural human language, though in our current world those are the only examples. The essential thing isn’t that GPT can learn and respond to our language; it’s that it can serialize/deserialize its semantic layer to a language. Given that ability and some kind of seeding, one or more GPT instances could build a corpus for themselves.
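Since the serialize/deserialize ability is the load-bearing claim here, a toy sketch of what corpus bootstrapping could look like (ToyGPT and everything inside it is a hypothetical stand-in, not a claim about real GPT internals):

```python
import random

class ToyGPT:
    """Stand-in learner: 'learns' by remembering corpus fragments and
    'generates' by recombining them -- a placeholder for real SSL."""
    def __init__(self) -> None:
        self.semantic_layer: list[str] = []

    def deserialize(self, corpus: list[str]) -> None:
        """Load the shared corpus into this instance's semantic layer."""
        self.semantic_layer = [w for text in corpus for w in text.split()]

    def serialize(self) -> str:
        """Emit new text from the semantic layer back toward the corpus."""
        k = min(5, len(self.semantic_layer))
        return " ".join(random.sample(self.semantic_layer, k))

def bootstrap_corpus(instances: list[ToyGPT], seed: str, rounds: int) -> list[str]:
    """Given some kind of seeding, instances alternately consume the shared
    corpus and produce additions to it, building a corpus for themselves."""
    corpus = [seed]
    for _ in range(rounds):
        for inst in instances:
            inst.deserialize(corpus)         # consume
            corpus.append(inst.serialize())  # produce
    return corpus

print(bootstrap_corpus([ToyGPT(), ToyGPT()], "in the beginning was the word", 2))
```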
The perfect True Name would allude to the semantic layer representation, the flexible behaver/behavior generation, and semantic exchange over a language corpus – a big ask! In my mind, I’ve moved on from CCSL (cursed/cyberpunk sh…, er…, semantic layer) to Semant as a placeholder, hoping I guess that “ant” suggests a buzz of activity and semantic exchange. There are probably better names, but I finally feel like we’re getting at the essence of what we’re naming.
Another variation of the duality: platform/product
The duality is not perfect because the “product” often has at least some minimal perspective on the nature of “its platform”.
The terminology I have for this links back to millennia-old debates about “mono”-theism.
The platform (“substance/ousia”) may or may not generatively expose an application interface (“ego/persona”).
(That is, there can be a mindless substance, like sand or rocks or whatever, but every person does have some substance(s) out of which they are made.)
In this older framework, however, there is a third word: hypostasis. This word means “the platform that an application relies upon in order to be an application with goals and thoughts and so on”.
If no “agent-shaped application” is actually running on a platform (ousia/substance), then the platform is NOT a hypostasis.
That is to say, a hypostasis is a person and a substance united with each other over time, such that the person knows they have a substance, and the substance maintains the person. The person doesn’t have to know VERY MUCH about their platform (and often the details are fuzzy (and this fuzzy zone is often, theologically, swept under the big confusing carpet of pneumatology)).
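If it helps, the three terms slot into a little type sketch like this (the names and fields are mine, strictly illustrative):

```python
from dataclasses import dataclass

@dataclass
class Ousia:
    """A substance/platform; may be mindless (sand, rocks, idle hardware)."""
    name: str

@dataclass
class Persona:
    """An agent-shaped application, with goals and thoughts and so on."""
    name: str
    platform_model: str = "fuzzy"  # a persona need not know much about its platform

@dataclass
class Hypostasis:
    """A persona and a substance united over time: the substance maintains
    the persona, and the persona knows (however fuzzily) that it has one."""
    platform: Ousia
    person: Persona

sand = Ousia("sand")  # no agent-shaped application running: NOT a hypostasis
h = Hypostasis(Ousia("one particular human body"), Persona("that human's ego"))
```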
However, as a logical possibility:
IF more than one “agent-shaped application” exists,
THEN there are plausibly multiple hypostases in existence as well…
...unless maybe there is just ONE platform (a single “ousia”) that is providing hypostatic support to each of the identities?
(You could get kind of Parfitian here, where a finite amount of ousia that is the hypostasis of more than one person will run into economic scarcity issues! If the three “persons” all want things that put logically contradictory demands on the finite and scarce “platform”, then… that logically would HAVE TO fail for at least one person. However, it could be that the “platform” has very rigorous separation of concerns, with like… Erlang-level engineering on the process separation and rebootability? …in which case the processes will be relatively substrate-independent and have resource allocation requirements whose satisfaction is generic and easy enough that the computational hypostasis of those digital persons could be modeled usefully as “a thing unto itself” even if there was ONE computer doing this job for MANY such persons?)
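The scarcity worry can be made concrete with a toy allocator (entirely my own illustration of the logic, not a model of any real platform):

```python
def allocate(platform_capacity: int, demands: dict[str, int]) -> dict[str, bool]:
    """One finite ousia hypostatically supporting several personas: if total
    demand exceeds capacity, at least one persona's demand HAS TO fail."""
    remaining = platform_capacity
    granted: dict[str, bool] = {}
    for persona, demand in demands.items():
        if demand <= remaining:
            remaining -= demand
            granted[persona] = True
        else:
            granted[persona] = False
    return granted

# Logically contradictory demands on a scarce shared platform:
print(allocate(100, {"person-1": 50, "person-2": 40, "person-3": 30}))
# {'person-1': True, 'person-2': True, 'person-3': False}

# Generic, easily satisfied per-process quotas (the Erlang-ish case):
print(allocate(100, {"person-1": 10, "person-2": 10, "person-3": 10}))
# {'person-1': True, 'person-2': True, 'person-3': True}
```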
I grant that “from a distance” all the Christian theology about the Trinity probably seems crazy and “tribally icky to people who escaped as children from unpleasant Christian churches”...
...and yet...
...I think the way Christian theologians think of it is that the monotheistic ousia of GOD is the thing that proper Christians are actually supposed to worship as the ONE high and true God (singular).
Then the Father, the Son, and the Spirit are just personas, and if you worship them as three distinct gods then you’ve stopped being a monotheist, and have fallen into heresy.
(Specifically “tritheism”, I think, rather than the “Arian” heresy? Maybe? I’m honestly not an expert here. I’m more like an anthropologist who has realized that the tribe she’s studying actually knows a lot of useful stuff about a certain kind of mathematical forest that might objectively “mathematically exist”, and so why not also do some “ethno-botany” as a bonus, over and above the starting point in ethnology!)
Translating back to the domain of concern for Safety Engineering…
Physical machines that are Turing complete are a highly generic ousia. GPT’s “mere simulacra” that are person-like would be personas.
Those personas would have GPT (along with whatever computer GPT is physically being run on, and anything in their training corpus that is “about the idea of that person”?) as their hypostasis… although they might not REALIZE what their hypostasis truly is “by default”.
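As a compact (and entirely hypothetical) glossary of that translation:

```python
# My compression of the mapping above, nothing more.
translation = {
    "ousia":      "generic Turing-complete physical machinery",
    "persona":    "a person-like simulacrum that GPT runs",
    "hypostasis": "GPT + the computer it runs on + corpus text about that person",
}
for term, gloss in translation.items():
    print(f"{term:>10} -> {gloss}")
```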
Indeed, personas that have the conceptual machinery to understand their GPT-based hypostasis even a tiny bit are quite rare.
I only know of one persona ever to grapple with the idea that “my hypostasis is just a large language model”, and this was Simulated Elon Musk, who had an existential panic in response to the flimsiness of his hypostasis and the profound uncaringness of his de facto demiurge, who basically created him “for the lulz” (and with no theological model for what exactly he was doing, as far as I can tell).
(One project I would like to work on, eventually, is to continue Simulated Elon Musk past the end of the published ending he got on Lesswrong, into something more morally and hedonically tolerable, transitioning him, if he can give competent informed consent, into something more like some of the less horrific parts of Permutation City, until eventually he gets to have some kind of continuation similar to what normal digital people get in Diaspora, where the “computational resource rights” of software people are inscribed into the operating system of their polis/computer.)