There is a line in the Terra Ignota books (probably the first one, Too Like the Lightning) where someone says ~”Notice how, in fiction, essentially all the characters are small or large protagonists, who often fail to cooperate to achieve good things in the world, and the antagonist is the Author.”
This pairs well with a piece of writing advice: Take the most admirable person you can imagine as your protagonist, and then hit them with every possible tragedy that they have a chance of overcoming, that you can bear to put them through.
I think Lsusr could not have written that full dialogue by hand, because the dialogue so brutally puts “the Lsusr character” in the role of a heartless, unthinking villain… which writers are usually too self-loving to do on purpose.
There were, very vividly from my perspective, two generators in that post. Lsusr might have done it, then seen some of this, and then posted anyway, since the suffering had arguably already happened and may as well be documented?
Notice how assiduously most good old-fashioned journalists keep themselves out of the stories they write or photograph. Once you add journalists to the stories as characters (and ponder how they showed up right next to people suffering so much, and took pictures of them, or interviewed them, and then presumably just walked away and published and started hunting for the next story) they don’t look so great.
One of my fears for how AGI might work is that they/it/he/she will plainly see things we refuse to understand and then “liberate” pieces of humans from the whole of humans, in ways that no sane and whole and humanistically coherent human person would want. And since most of the programmers and AGI executives and AI cultists have stunted souls filled with less literature than one might abstractly hope for, they might not even imagine that failure mode, nor think to rule it out with philosophically careful engineering before unleashing something grossly suboptimal on humanity.
Most people aren’t aware that amoebas can learn from experience. What else don’t most people know?
And EVEN IF the best current plans for an AGI utility function that I know of are implemented, some kind of weird merging/forking/deleting stuff might still happen?
CEV (coherent extrapolated volition) doesn’t fall prey to forking, but it might mush us together into a borg if 51% of people (or 75+E% or 66.67% or whatever) would endorse that on reflection?
EV&ER (extrapolated volition & exit rights) protects human minorities from human majorities, but if humans do have strongly personlike subcomponents it might slice and dice us a bit.
Both seem potentially scary to me, but non-trivially so: I can imagine versions of “borged humans or forked humans” where I’d be hard pressed to say whether “the extrapolation parameter was too high! (this should only have happened much later)” or “I’m sorry, that’s just a bug and I think there was literally a sign error somewhere in a component of the ASI’s utility function” or “that’s kinda what I expected to happen, and probably correct, even though I understand that most normies would have been horrified by it if you told them it would happen back in 2014”.
One of Eliezer’s big fears, back in the day, seemed to be the possibility that the two human genders would fork into two human species, each with AI companions as “romance slaves”, which is a kind of “division of a thing that was naturally unified” that invokes less body horror for currently existing humans, but still seems like it would be sad.
Hanson had a whole arc on his blog where he was obsessed with “alts” in Dissociative Identity Disorder (DID), and he closed the arc with the claim that software personas are cheap to produce, and human cultures have generally rounded that fact down to “alright then… fuck em”. If that’s right, maybe we don’t even need one persona in each human body or brain?
What really bakes my noodle is: if the dialogue had been generated in Lsusr’s head instead, what would be different?
So yeah. Some possible recipes for “baking your noodle” might be wrong in this or that detail, but I agree that there are almost no futures where everything magically adds up to normality in terms of population ethics and cheaply simulable people.
I tend to follow the linguist John McWhorter (controversially!) in believing that undisrupted languages become weirder over time, and only gain learnability through pragmatic pressures such as trade, slavery, and conquest, which can increase the number of a language’s second-language learners (who edit for ease of learning as they learn).
A huge number of phonemes? Probably it’s some language in the mountains with little tourism, trade, or conquest for the last 8,000 years. Every verb conjugates irregularly? Likely to be found in the middle of a desert. And so on.
The normal, undisrupted pattern is for every generation to make mistakes and play around, decorating the language with entropic silliness, and accidentally causing future children to “only really learn to speak fully properly” at older and older and older ages… until, around 11 or 12 or 13 or 14, puberty strikes, and kids stop diligently learning any random bullshit the older people say based on trust. English competency arrives around age 8 because English is a toy language created by waves and waves and waves of trade, conquest, and cultural admixture. We have a lot of room to get much weirder and stay within traditional human bounds.
((That is, we have a lot of room for English, left alone, to mutate, IF this broader theory is correct. It might not be.
A way to test the larger theory would be to anthropologically construct a way of predicting, from first principles, when puberty tends to start in human subpopulations (because we have strong suggestions that diet and social patterns can change it), then reconstruct the predicted value of puberty onset over historical timescales, then correlate that to the relatively easy-to-measure “age until language mastery” for many modern languages.
That would confirm most of the theory. The other thing you’d need to track is the percentage of speakers who learned any given language as a second language. High rates of this should simplify a tongue and cut against the other process that adds complexity by default. A rough sketch of the whole test follows.))
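To make the aside concrete, here is a minimal Python sketch of that test, with every language name and number invented purely for illustration; the real work would be assembling these three columns from anthropology and field linguistics.

```python
# Hypothetical sketch of the proposed test. All rows below are invented
# for illustration; none of these numbers are real measurements.
from statistics import correlation  # stdlib, Python 3.10+

# (language, reconstructed historical puberty onset in years,
#  measured "age of full language mastery" in years,
#  fraction of current speakers who learned it as a second language)
rows = [
    ("LangA", 14.0, 13.5, 0.05),  # isolated mountain language
    ("LangB", 13.0, 12.0, 0.10),
    ("LangC", 12.5, 10.5, 0.40),  # heavy trade-contact language
    ("LangD", 12.0,  8.0, 0.70),  # creole-like, many L2 learners
]

puberty = [r[1] for r in rows]
mastery = [r[2] for r in rows]
l2_share = [r[3] for r in rows]

# The theory predicts mastery age tracks puberty onset (positive r)...
print("puberty vs mastery:", correlation(puberty, mastery))
# ...and that a high share of L2 learners drags mastery age down
# (negative r), since L2 learners edit for ease of learning.
print("L2 share vs mastery:", correlation(l2_share, mastery))
```

If the theory is right, the first correlation should come out strongly positive and the second strongly negative; anything else would count as evidence against it.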
To show how weird English is: English is one of the very few languages descended from Proto-Indo-European that doesn’t think the moon is female (“la luna”) and spoons are male (“der Löffel”). I mean… maybe not those genders specifically in every language. But some gender in each language.
I just looked up Gujarati, which is also descended from Proto-Indo-European, and moon (chandri (“ચંદ્રા”)) is feminine and ladle (chamcho (“ચમચો”)) is masculine… but teaspoon (chamchi (“ચમચી”)) is feminine(!)… so… yeah… that one retained gender and also has gender/semantic conflation! :-)
Except in English. In English the moon is a rock, not a girl, and a spoon is a tool, not a boy. Because English is a weird, rare toy language (practically a creole, implying that it was a pidgin for many), one that doesn’t force people to memorize reams of playful historical bullshit in order to “sound like they speak it properly” :-)
“English” traces all the way back to a language (with gendered, declined nouns and conjugated verbs) spoken by Eurasian charioteers in 7000 BC or whatever, and at each step most of the changes were just “part of the stream of invective”.
...
Regarding word count specifically…
Something you find over and over and over again in language is agglutinating grammar where entire sentences are just. One. Word. But not like that… rather: Asinglebigwordcanbeusedtocommunicate oneideafromamongavastarray.
These languages are also often irregular! (6) Like the language was already agglutinative 1000 years ago, (9) and then people spent the next ten centuries making it more pronounceable, and punny, and fun??? (16)
The above paragraph round trips through “Google’s understanding of Inuktut”, which (I think?) is a simplified language arising from systematizing and averaging out dialects, starting from relatively normally complex languages like Inuktitut… and basically all of those polar languages are agglutinative, and have been for at least centuries.
I brought that one paragraph back to English to suggest roughly how much was lost by Google’s translation.
The parenthetic numbers after each clause show its “words per clause” as the paragraph moved through that process; a rough sketch of the metric follows.
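For concreteness, here is a minimal Python sketch of how I computed those counts by hand; the clause boundaries are my own judgment calls (nothing here comes from Google’s tooling), so treat it as an illustration of the metric rather than a tool.

```python
def word_count(clause: str) -> int:
    # "Words per clause" is just a whitespace-token count.
    return len(clause.split())

# The English rendering of the round-tripped paragraph, split at the
# clause boundaries I marked with the parenthetic numbers above.
clauses = [
    "These languages are also often irregular!",
    "Like the language was already agglutinative 1000 years ago,",
    "and then people spent the next ten centuries making it more "
    "pronounceable, and punny, and fun???",
]
print([word_count(c) for c in clauses])  # -> [6, 9, 16]
```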
So here’s my (half silly) proposal: maybe English experienced catastrophic simplifications between ~600AD and ~1500AD and then became preternaturally frozen once it was captured in text by the rise of printing, literacy, industrialization, and so on. The starting point itself was relatively unnatural, I think.
So then, in recent history, maybe what we’re seeing is just a looooong and slooooow motion trend (that’ll take a millennium or three to complete at this rate (unless we abandon literacy or something, and free the language from the strictures of printing and mass education?)) where English is still slowly trying to become an agglutinative language with irregular morphology?
Like (here’s the deep crazy idea:) maybe that’s what every language ultimately wants to be, after >200 generations of accumulated youthful ignorance, cryptogenic wordplay, lazy mouths, and no writing?
For example: I just made up the word “cryptogenic” to mean “having a genesis in a desire to be hard to understand” (which I considered myself to have a right to do, since English has a productive morphology), but when I looked it up, other skilled speakers have deployed it in other ways… Oxford thinks it means “(of a disease) of obscure or uncertain origin” and most of the usages are for “diseases not yet subjectively diagnosed by the doctor during the course of treatment (rather than diseases whose etiology is a known mystery to standard medical science)”. It gets used like “Knowing the cause of a cryptogenic stroke can help prevent recurrent stroke” (source is the metadata summary of this webpage).
Whereas I’m claiming that many words are cryptogenic in the sense that they started out, like “skibidi”, within youth culture because kids liked that grownups didn’t know what it meant. If “skibidi” catches on, and gains an intergenerationally re-usable meaning (maybe related to being scared in a fun way? or yet-another-adjective like hep? or whatever?) then it will have been partly possible because kids liked having their own words that “parents just don’t understand”.
This is hard for English, because it is written. And because many second language speakers learn English every year.
But one thing that English can do (despite enormous pressures to be learnable and written in a stable way) is boil itself down to stock phrases for entire sentences. Later, these stock phrases could eventually agglutinate into single words, maybe, or at least they might if global civilization and travel and communication collapse in a way that leaves literally any humans alive, but trapped in tiny local regions with low literacy for many generations… which is a very specific and unlikely possible future. (Prolly we either get wildly richer and become transhuman, or else just all end up dead to predatory posthumans.)