Post-Singularity Worldbuilding Quirks?
If you take it as given that...
Something which could be described as a ‘Singularity’ happened circa 2050;
At least a few people survived outside that Singularity;
Earth still exists, in at least generally recognizable form;
… then what random background details might result that are both...
interesting; and
wouldn’t break your willing suspension of disbelief in a work of fiction?
The reason I ask: I’m writing a story in such a setting, and am hoping to tap into the local hivemind to, possibly, help flesh out some of the background sections—ones not directly relevant to the plot, but which imply a greater depth of worldbuilding—that I wouldn’t have thought of on my own. As possible examples: particular species that a weakly superintelligent post-human AI might have decided to wipe out, such as cruciferous vegetables; new species that similar WSPHAIs might have decided to let loose, such as snake varieties with interesting chemicals to be milked of their venom, or a de novo species resembling sparrow-mouse griffons; cultural quirks in one relatively isolated area or another, such as a seemingly ordinary group decision that masks are terribly comfortable to wear; odd aspects of language development after a few decades…
What comes to your mind?
I’m not sure what this means exactly. Are they returning space explorers who are surprised by recent developments (a la Planet of the Apes)? Luddite survivors who experienced the transition and rejected it? Members of an uncontacted tribe or some primitive culture with ethnographic boundaries respected by the machine? Each of these will interpret a post-singularity world in a very different way, I think.
‘Singularity’ is code for ‘we don’t know’, so as a writer you’re permitted just about anything. But the most fun I’ve had with post-singularity fiction is when there is a dominant singleton with running themes and strong personality quirks: The Optimalverse is the reigning champion here, in my opinion, but there’s also the famous I Have No Mouth, and I Must Scream. Gods are fun to read about when they’re mad in some way, or at least when they seem mad from a human perspective. So it’s worth thinking particularly about the forces (that is, the choices and personalities) that give internal structure to your post-Singularity world. Randomness is not compelling.
Aside from that, my advice would be to avoid anything that is too much a fantasy trope. Try not to get into the habit of thoughts like “It’s a dragon, except [x] is different.” Make sure it’s your world that’s driving these things, and not your genre.
Actual examples, as requested, although these probably suffer from being ‘too random’ since I don’t know anything about your world:
Exactly 1.4 trillion biological humans, cloned with some variations, buried underground in cryogenic stasis near the Mohorovicic Discontinuity, on a timer to wake up automatically in ten million years. One of many contingency plans in case of catastrophe. They have never been conscious, but there is a kind of dream.
A machine intelligence (or the machine intelligence) has begun to redirect comets and asteroids from the Oort Cloud to collide with Venus. Most people assume this is part of a terraforming effort, but that theory fails to explain why the collisions always occur in groups of three, at equidistant points along a great circle.
There is a handful of bipedal, roughly humanoid robots walking across Asia. They walk in a straight line, climbing directly over any terrain features to avoid deviating from the path. Any time they encounter a prepubescent human, they ask her for directions, and will change course to whichever direction she points. Each is separated from the others, and seems incapable of acknowledging their existence.
In Antarctica, there is currently a replica of 17th-century Paris carved entirely out of ice, detailed down to the level of individual ice cobblestones and ice candles with frozen flames in ice chandeliers. Last year it was 20th-century Jakarta, and the year before that Beijing. As the year progresses, the replica changes subtly as if it were lived in; furniture moves, ships-of-the-line are slowly completed, footprints appear. Nobody has yet taken responsibility.
I’m letting myself be inspired by Robin Hanson in a number of respects, and have the intelligence explosion focused in high-population, urban areas, with the human survivors being those who avoided being in a city during the critical period.
I’m not sure I could justify “trillions”, given what I’ve established for the setting so far; but for a more modest number, this is quite possible. (In fact, it’s a variation on an idea my protagonist once had, but never had the resources to attempt; though that version of the idea included staggered release times.)
I’ve had a Kessler Cascade turn the orbitals into a death trap for anything trying to leave Earth, partly on the narrative level, to avoid self-replicating Von Neumann things in space overshadowing everything my planet-bound protagonist could even attempt, and partly in-setting, as a result of the conflicts that arose during the Singularity.
Ah, now these I could use almost without alteration, and, at least as importantly, as springboards for further ideas. :)
Vinge’s Marooned in Realtime comes to mind. The survivors’ tech is close to what the singularity level was, but they “missed” the singularity and aren’t improving their tech over the timescale of the story because of low population and other priorities.
“Missing” an intelligence explosion would be hard, if it drastically optimizes the solar system. In Vinge, this works because the exploding society simply disappears—implying that they’re off in higher dimensions or femtotech or something. Other examples would be if the survivors are being simulated—there’s a great story whose author I forget, about someone waking up after the singularity because he was an early brain scan and they just fixed him up now.
Non-Linnaean wildlife. Built de novo by the superintelligence; made of the same sorts of organic materials as normal species, but not related to them; possibly not nucleic-acid based/non-reproductive. Their inner workings are simpler and more efficient; no symbiotic mitochondria or chloroplasts, but rather purpose-built modules. They are edible, and the survivors know the unique taste of, e.g., their ‘muscle’ tissue, which is not actin/myosin based.
Don’t think we need a superintelligence for that.
Interesting! But while we’re a lot closer than I realized, we probably aren’t going to be thoroughly out-designing evolution from the bottom up on macroscopic animal-like creatures any time soon.
Evolution searches nearby spaces of what already exists with astonishing exhaustiveness. But if there isn’t a chain of viable intermediaries between one form and another, then the second will just not arise, no matter how fit for survival it would be. This isn’t a problem that afflicts a biological engineer, and said engineer also has the example of what evolution has already come up with to work from. So, massively out-designing evolution? Sure. That’s not even a hard trick for a singularity mind.
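A toy sketch of the intermediaries point, if it helps (the one-dimensional ‘genome’ and the fitness landscape here are invented purely for illustration, not meant to model real biology): a mutate-and-select search only ever reaches the peak it can climb to through viable intermediaries, while a designer is free to place a solution on the better peak directly.

```python
# Toy illustration only: a one-dimensional 'genome' x on an invented
# fitness landscape with two peaks separated by a dead zone of zero fitness.
import random

def fitness(x):
    # Modest peak at x=2 (height 3); much better peak at x=8 (height 10).
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 10 - (x - 8) ** 2)

def evolve(x, steps=10_000, step_size=0.1):
    """Mutate in small steps, keeping only changes that don't reduce fitness,
    i.e. every intermediary must stay viable."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

random.seed(0)
evolved = evolve(1.0)    # starts near the modest peak and gets stuck on it
designed = 8.0           # an engineer can simply specify the better form
print(f"evolved:  x={evolved:.2f}, fitness={fitness(evolved):.2f}")
print(f"designed: x={designed:.2f}, fitness={fitness(designed):.2f}")
```

The ‘accept only non-worsening mutations’ rule is doing all the work: it is exactly what forbids crossing the dead zone, no matter how good the far peak is.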
Depends on the criteria of “out-designing”. If they are something evolution never had any reason to optimize for (e.g. lots of tasty-for-humans meat, fast), I don’t see why not.
I think “from the bottom up” is the hard criterion. We can fiddle with the knobs evolution has produced, but it doesn’t sound like we have the insight to replace basic building blocks like mitochondria and [dr]na.
Well, how deep is your bottom? You said “made of the same sorts of organic materials as normal species”, so did you just mean carbon-based chemistry? Something that depends on slow room-temperature reactions in liquids and gels?
You want something different, but not too different (like a metal-based robot), so what’s the Goldilocks distance from plain old regular life?
I think my Goldilocks range is along the lines of ‘probably made of proteins and lipids and such; preferably edible, or at least biodegradable by ordinary bacteria (I don’t know what this requires); a human non-biologist without tools could mistake it for normal’.
But it’s pretty interesting to think about possibilities at other ranges, too.
The whole point of the concept of singularity is that we don’t know what will happen afterwards.
Some things, however, are less plausible than others.
In fiction, you have to make it up, but you can’t make it something implausible.
But any real scenario will seem implausible. That’s what the idea of singularity is about. If you believe that you can predict in any sense how the world will look afterwards, “singularity” is a very poor term to use.
I think it is a poor term.
Still, it can only mean ‘a whole lot less predictable than usual’, not ‘MAX ENTROPY ALL THE TIME’. Physics will still apply. Given that people survived and have lives worth writing stories about, we are at least within ‘critical failure’ distance of friendliness in AI. That narrows things very considerably.
A lot of the unpredictability of the singularity arises from a lack of proof of friendliness. Once you’ve cleared that (or nearly), the range of possibilities isn’t singular in nature.
If there’s nothing I can write that wouldn’t break your Willing Suspension of Disbelief about events after an intelligence explosion, then there’s nothing I can write to do that, and nothing you can suggest to add to my story’s background; and both our times might be spent more productively (by our own standards) if we focus on our respective projects.
If you have a world where you can predict events after an intelligence explosion, then that intelligence explosion by definition isn’t a singularity event.
There are several working definitions for the term ‘singularity’. If the definition you use means that you think a story involving that word is inherently implausible, then one possibility would be to assume that where you see me write that term, I instead write, say, “That weird event where all the weird stuff happened that seemed a lot like what some of those skiffy authors used to call the ‘Singularity’”, or “Blamfoozle”, or anything else which preserves most of my intended meaning without forcing you to get caught up in this particular aspect of this particular word.
With high probability we do, unfortunately.
With high probability there won’t be any humans afterwards, but that doesn’t tell you what the world would look like.
Disagree, since over 99% of what I care about would be the same across all post-singularity states that lack lifeforms I care about. Analogously, if I knew that tomorrow I would be killed and have some randomly selected number written on my chest, I would believe that today I knew everything important about my personal future.
If you want to tell a story about that world, then you need to know something about what the world looks like besides “there are no humans”.
We also don’t know what will have happened by 200 years from now (singularity or no singularity), but that is no obstacle to writing science fiction set 200 years in the future.