The whole point of the concept of singularity is that we don’t know what will happen afterwards.
Some things, however, are less plausible than others.
In fiction, you have to make it up, but you can’t make it something implausible.
But any real scenario will seem implausible. That’s what the idea of singularity is about. If you believe that you can predict in any sense what the world will look like afterwards, “singularity” is a very poor term to use.
I think it is a poor term.
Still, it can only mean ‘a whole lot less predictable than usual’, not ‘MAX ENTROPY ALL THE TIME’. Physics will still apply. Given that people survived and have lives worth writing stories about, we are at least within ‘critical failure’ distance of friendliness in AI. That narrows things very considerably.
A lot of the unpredictability of the singularity arises from a lack of proof of friendliness. Once you’ve cleared that (or nearly), the range of possibilities isn’t singular in nature.
If there’s nothing I can write that wouldn’t break your Willing Suspension of Disbelief about events after an intelligence explosion, then there’s nothing I can write to do that, and nothing you can suggest to add to my story’s background; and both of us might spend our time more productively (by our own standards) if we focus on our respective projects.
If you have a world where you can predict events after an intelligence explosion, that intelligence explosion by definition isn’t a singularity event.
There are several working definitions for the term ‘singularity’. If the definition you use means that you think a story involving that word is inherently implausible, then one possibility would be to assume that where you see me write that term, I instead write, say, “That weird event where all the weird stuff happened that seemed a lot like what some of those skiffy authors used to call the ‘Singularity’”, or “Blamfoozle”, or anything else which preserves most of my intended meaning without forcing you to get caught up in this particular aspect of this particular word.
With high probability we do, unfortunately.
With high probability there won’t be any humans afterwards, but that doesn’t tell you what the world would look like.
Disagree, since over 99% of what I care about would be the same across all post-singularity states that lack lifeforms I care about. Analogously, if I knew that tomorrow I would be killed and have some randomly selected number written on my chest, I would believe that today I knew everything important about my personal future.
If you want to tell a story about that world, then you need to know something about what the world looks like besides “there are no humans”.
We also don’t know what will have happened by 200 years from now (singularity or no singularity), but that is no obstacle to writing science fiction set 200 years in the future.