Discussion: Ideas for a Lesswrongian anticipation Sci-Fi set in 2060
So, the usual bet is that GAI, both Friendly and Unfriendly, will be created at around that time at the latest. I’d like to set a novel, a thriller, right at that critical moment where everything could be lost or won, and humanity is in the balance. But human societies and the way they interact with each other will have changed a lot by then. So, well, I haven’t read thoroughly enough here to understand how far we are anticipating what will happen. Not just the friendliness of AI development, but our own impact on the world, and how the AI will react when it finds out about us and our goals, and takes them seriously.
So I was wondering if you’d help me out here with some brainstorming. I’m looking for some seminal ideas for what the world will look like by then. We don’t need to be 100% precise; keeping the pieces of the setting vague by avoiding Burdensome Details is a way of avoiding glaring mistakes, and gives a Lord Of The Rings, Ruins In The Distance feel of false depth. Don’t hesitate to suggest seemingly weird but actually reasonable ideas: the future I want to build is a Weirdtopia. The point is to frighten, to inspire wonder, and to suck the reader in.
Let’s see, for a start: cryonics and cybernetics are a solved problem, and people’s heads are being resurrected and put on mechanical bodies by default (they could ask for recreated biological bodies, but usually, after the first tantrums… they don’t ^_^). The audience can be given someone to identify with through a Temporal Fish Out Of Water, one of the resurrected Human Popsicles. The funny part is that, even though that person happens to be a transhumanist AND a singularitarian, they hadn’t gotten past the relevant Shock Level (I think that’s what Yudkowsky called it when an idea still enthuses you because you don’t yet think of it as normal?), and they are only marginally less freaked out by the world they find themselves in than the normal sci-fi fan readers (or even the mainstream ones, if this ends up so good as to have any).
Fabbing will probably be commonplace in some form. Can people fab food? Biological weapons? Computing power? Who regulates fabbing? Weaponization of this technology will be a historical fact, and AI regulation is likely to be part of the solution.
This is a technology that will probably have major cultural, political and economic effects worth discussing.
The real problem with making predictions even for the nearest future is that technological breakthroughs are often unpredictable. 1950s science fiction authors promised us colonies on Mars by 2000, but who anticipated the Internet?
Well, computers were anticipated in 1671, and Jules Verne didn’t anticipate submarines: there were already quite a few of those back then, although none as huge and NERV-Titanic (relative to the era) as the Nautilus.
But, IMHO, the ultimate example is:
So, the ultimate badass achievement for science-fiction writers isn’t just to anticipate stuff; it’s to have their anticipations cause the changes and inventions in the first place, as self-fulfilling prophecies.
So, no, I’m not too worried about making stuff up, cuz that stuff might actually end up being made when it otherwise wouldn’t be. We’re free-roaming in Idea Space, man, let’s just enjoy the ride and try to come up with something fun as well as pedagogic. Plus, the fun thing about Less Wrong is that our focus on human biases and systematic errors gives us a footing for writing plots that aren’t all that sensitive to Zeerust, relying on deep-seated human idiosyncrasies instead. Going that route is also the easiest way to appeal to the mainstream and to get the high-status “Literature” qualifier, which is always good publicity, AND it would allow us to slip in our Author Tract in a fairly honest and straightforward way without making it a heavy-handed filibuster...
This misses the main issue: while some writers did correctly anticipate some technologies, the general accuracy of predictions as a whole was very weak. Even those who did make correct predictions often had them buried in a host of other predictions. For example, some of Arthur C. Clarke’s short stories have an Internet-like thing, but the vast majority do not. Similarly, Gibson’s cyberspace bears only a rough similarity to the Internet as we know it. So claiming that these were anticipated seems to be almost a file-drawer effect.
Sounds fun. I made a little video last month about what Hanson calls “Ems” that’s supposed to grow into a bigger discussion on the political and social consequences. I call ’em “Uploads” though.
http://www.youtube.com/watch?v=RXAuglDs95s
That’s more immediate-future rather than fifty years hence, though. The script for later episodes talks more about how they failed to make any kind of AI work properly other than by scanning and uploading, and how learning facts is not the same as understanding, with digs at the Cyc project.
If you’re setting something further in the future, I’d think a lot about exactly how this whole Internet thing is going to affect social change over the next fifty years. Everyone’s presumably connected wirelessly all the time, Google and Wikipedia closer to everyone’s brain than merely “at their fingertips”. How does conversation change when everyone knows everything there is to know about everything?
Having access to information and actually having assimilated it are two entirely different things. Having Wikipedia wired into your brain will allow you to check cursory definitions and article introductions near-instantly, but your interlocutor must still wait a couple of minutes for you to read and understand. That time will vary depending on how enhanced your intelligence is and how acquainted you already are with the topic at hand. It might grow enormously if, say, upon meeting a Lesswrongian for the first time, you’re forced to take a wiki walk through their archives just to be up to date with them (seriously, this habit of peppering articles with links to other articles when they aren’t strictly necessary for understanding the text should stop; it creates an unhealthy in-house feeling and forces new users into month-long hermitages trying to close the exponentially exploding army of tabs!).
Given that History is in constant acceleration and that things are more and more interconnected, I assume there’d be a state-sponsored effort (if not an entire industry) devoted to developing digest history books and other introductory material, not for Dummies, but for Thawees (we should get a better name than that for the Resurrected… we should also get a derogatory one, because racism and privilege: “The Walking Dead”? “They’re History”? “Time-Skippers”? “Dinosaurs”? Cue jokes about blood in amber and the prophesied Dinoday).
Indeed, this is what I was talking about with the Cyc project: just having the information isn’t enough; it needs to be integrated, to have meaning.
Still, it seems many of my pub conversations are already changing with wireless mobile Internet access, as what would have been a long argument about whether or not something was real, or what it did, or when it happened, can now be quickly settled by a source both people agree is better than anyone physically present.
Which also points to ways conversation in general changes. Just coz you aren’t there doesn’t mean you can’t be consulted immediately. In Farscape, the characters would converse with each other even when remote, without having an obvious comms device or thinking to turn it on at all. Just shout at ’em and they hear, wherever they are, whatever they’re doing.
You’re talking to some resurrected dude about his grandson, and suddenly grandson is there in the conversation saying hello from the beach where he’s lazing with a cocktail.
I always thought that looked fun, but given the rants I get from people wondering why I’d bother to log into a website to show ’em pictures of a beach holiday while I “should be off having fun”, perhaps there’d be social pressure to keep conversation local.
Dunno. Look forward to finding out how it’ll all pan out anyway :)
You really should make a dedicated discussion-level post for this.
Um, yeah, you’re probably right. Won’t be around to reply/baby-sit it from now till after the weekend though. Maybe I’ll do it Tuesday.
I really enjoyed this, very watchable. Subscribed!
Watched the video. Loved it. Except for the bit at the end: the positions were made to look too emotional rather than reasoned. Instead of saying an outraged “going to church makes him moral?” he could have said something along the lines of “you know who else went to church every day? Cardinal Richelieu and Grigori Rasputin. You know what they had in common, besides being high-ranking priests? They shanked a number of women on the order of hundreds”, or maybe some shorter but equally strong counterexample in the “him going to church doesn’t prove anything” line, including that exact phrasing, especially if they’ve talked about the topic before.
There’s also a serious audio problem; I really had to strain my ears to hear.
Otherwise, as I said, I loved it, especially the implications of “living in the Metaverse”.
Thanks.
Yeah, you should have heard the sound before Danny cleaned it up ;) I should buy better equipment probably.
I think showing that the Uploads still react emotionally is going to be an important part of any work which features ’em, especially if they’re “smart” people; otherwise it can look like uploading turns you into a Spock-Bot. Mostly I was just trying to keep the dialog tight. My natural writing is way too verbose for a five-minute video; perhaps I overcompensated there a little.
I know that feel, bro. Whenever I write a play I have to compact the dialogues because there is no time-
When I wrote science fiction, I always tried to keep AI out of the equation so I wouldn’t be tempted to use it as a deus ex machina. It might be worthwhile to explore the issues first, like AGI being a real deus or maleficarus ex machina, and then design a world that throws those into sharp relief, rather than trying to be realistic.
Hehe, an FGAI could easily be called a literal Deus Ex Machina… The God we made could be in many ways better and more competent than any God we made up XD This phrase is sure to piss off so many people, I think I’ll make it the slogan of the pro-FAI group. Although I somewhat optimistically anticipate that by 2060 such a thing wouldn’t even be remotely provocative, but instead would be smack dab in the middle of the Overton Window, and would be about as bold as saying “YES, WE CAN” in 2008...
… I need to read up on the Fun Theory sequence before elaborating on how people and the FAI would interact, otherwise I risk getting seriously panned for coming up with messy stuff like this [ITWASABEGINNERSMISTAKEPLEASEDON’TSINKITMORETHANITALREADYISIMBEGGINYA ;_;].
Small note, I don’t think it is at all a consensus that FAI will be developed by 2060.
Trying to write hard science fiction for this setting is not remotely possible for any human-equivalent mind.
Your options:
1. Go ahead and get an embarrassing failure that makes every LWer grit their teeth.
2. Make it clear that it’s not realistic / doesn’t take place in the kind of universe we actually live in.
3. Use a completely different setting.
Not to discourage you by any means. Especially that third option is very flexible and can be done subtly. Just writing what you originally intended and putting a disclaimer in the foreword will do, but having it in mind will save you a lot of headaches over unnecessary realism that is just a pipe dream anyway. (Note that this is not a free ticket to cut corners on DETAIL or CONSISTENCY.)
(Then again, I have zero writing experience, so you probably shouldn’t take what I say too seriously.)
Ghost in the Shell does it pretty well, IMO. Well enough to suspend disbelief, at least.
It does it well enough to provoke a “huh, this is pretty interesting and thoughtful” response, but well enough that we won’t be embarrassed by it in fifty years? I’d give very steep odds against that.
Creating a sci fi setting well enough that the audience can suspend disbelief is very different from creating one that’s accurately predictive.
No it doesn’t. I haven’t seen it, but I know nothing like that exists.
I’d wager it’s the second solution.
Being able to suspend disbelief is not correlated with predictive accuracy; it just means it’s a good story. With a few exceptions (and possible interpersonal variance), disbelief can be suspended just as readily for fantasy as for reports of real-world events, if they are written well enough.
Again, none of this is an attack on the works’ artistic merit, or their entertainment value, or even their usefulness for learning about science or rationality. But if you think sci-fi that can lay the same claims to being prophetic that it did a few decades ago is possible today, you’re wrong.
Ok, that’s a horrible simplification. I suck at trying to explain things. Basically what I’m saying is that when the standards of hard sci-fi were developed, you could predict indefinitely into the future and still stay within the bounds; but with what we know now, ESPECIALLY as someone on LW, that claim can’t be honestly made more than a decade or so into the future, except in highly contrived circumstances. All the really good transhumanist fiction (that comes to mind at the moment) breaks some of the implicit rules (if I had actually watched GitS I would probably have listed the way it does so here), or even plainly admits to following narrative laws.
You will probably not notice it in most cases, at least not consciously. Most of it is subtly implicit.
I don’t think sci-fi could ever “lay claims on being prophetic”. It was always more about exploring interesting ideas and looking at the stories you could tell than about trying to predict anything, AFAIK.
That it’s that way nowadays is my entire point. But there is also the notion of the ideal of sci-fi Hardness. I’m not sure it ever actually existed, but I’m given the impression that at the time they were published, things like Asimov’s robot series, 1984 (or, way back before the ideal even existed, the works of Jules Verne) were considered to be literally possible, and unlikely only by virtue of conjunction, not any fundamental limitation on what the author could imagine.
Maybe you’ve read less old scifi and thus been less exposed to this ideal?
I haven’t read Verne, but I’ve read 1984, as well as a bunch of works by Asimov and Clarke. I’m not sure if the ideal you’re describing existed or not, but it doesn’t seem to be part of the way the term “hard science fiction” is used today. Today, it just means science fiction that’s scientifically rigorous in the sense of being consistent with all currently known science.
Now I’m confused. “Scientifically rigorous in the sense of being consistent with all currently known science” seems equivalent to “considered to be literally possible and unlikely only by virtue of conjunction, not any fundamental limitation on what the author could imagine”, and both seem impossible.
And yeah, I read the links; their definitions are not exactly the same, but close enough. They do seem to rate specific works and authors much higher than I would, though.
I thought that by “considered to be literally possible and unlikely only by virtue of conjunction not any fundamental limitation on what the author could imagine”, you meant two things:
First, the definition of hard sci-fi as given in those links, i.e. nothing in the story must contradict currently known science.
Second, that the consequences of those technologies must be humanly imaginable and in principle predictable by the author. In other words, after-the-Singularity stories cannot be considered hard sci-fi, because it’s fundamentally impossible for us to imagine the consequences of a greater-than-human intelligence.
I was saying that the common usage of the term hard sci-fi only requires that the first of these criteria be met, not necessarily both. Was this not what you meant?
I am unable to distinguish between these, or even clearly comprehend what such a distinction would mean.
Well, using the Internet as an example. There were some pretty good predictions about something like the Internet. But for someone in 1980, say, to write a story set in 2020 and come up with all of the consequences of the Internet would have been impossible. I don’t think anyone predicted Wikipedia or Facebook or 4chan or the impact those would have on our daily life. At least they didn’t predict the combined impact of all three and various other services besides. Heck, even we don’t yet know what all the consequences will be, since there are probably lots of ways of using the ’Net that still remain to be invented.
However, what they could do is write a sci-fi story about the consequences they can imagine. Maybe they predicted online shopping, and e-mail, and working remotely, or maybe they based their story on this eighties study. In any case, their story would have been consistent with what the science of 1980 knew.
If we apply both criteria 1 and 2, this would not have been hard sci-fi, as it couldn’t have predicted all the consequences of the Internet. If we apply only criterion 1, then it would have been hard sci-fi.
Likewise with the Singularity. We have no way of predicting all the things that a superintelligence might do. But we can come up with things that a superintelligence could plausibly do that are consistent with science as we know it. If someone writes a story where a superintelligence escapes into the Internet by hacking a million computers and running as a distributed intelligence, and then launches a brilliant social engineering scheme targeting all of humanity after it has read all the psych, sociology and marketing papers ever published—well, that contradicts no science that we know of. So going only by the first criterion, that’s hard sci-fi.
I don’t think I really have your concept of a surface-level discrete “consequence”. One intuition is telling me I’m thinking too much like reality; I’m not really sure how that’d work, but it probably has something to do with how the simulations authors have in their heads are different from reality. I’m not really in the best condition right now; maybe I’ll get it later.
Now, do we or do we not have flying cars and jetpacks? How about information-based teleports? Or, even better, slave-robot “teleports”? Since people can retain their personality with just their heads, why not go the next step and have them use interchangeable, remote-controlled surrogate bodies for anything they’d need to do in person? I haven’t seen the movie Surrogates, but I heard it has a similar idea that it completely failed to exploit properly.
Also, what about the exploration of dreams? I accept Paprika as a source of inspiration, but not Inception: that movie has so many glaring research errors it’s hilarious. For one thing, you can’t read in a dream, that brain area doesn’t work, and all attempts at reading by lucid dreamers have resulted in either failure or awakening… (I know, I have to find the research paper I read that in, cuz links or it didn’t happen). Actually, Inception would be a more appropriate inspiration for a story about cyberspace.
That is not true. I have read in dreams. The idea that you can’t do so was perpetuated by a Batman: The Animated Series episode, but it has no basis in fact.
All that episode said is that Batman couldn’t read in his dreams. I remember this nightmare I had. I was in front of an exam sheet. I read the exercise. I thought I understood it, but just to make sure, once I reached the bottom, I reviewed it, starting from the top. It had changed. I SPENT ALL NIGHT LIKE THAT. It was the most horrible nightmare I ever had, even worse than right after seeing Jurassic Park, when I dreamed of the T-Rex coming to eat me on the toilet.
Anyhoo, I just checked the Internets for a while. No mention of a study (I must have made that up, somehow, which deeply disturbs me; it’s the second time in my life my memory has made shit up), but plenty of people saying they can read, can’t read, or can read but find that if they check for content the text changes offscreen.
More realistically, it’s the second time in your life that you noticed that your memory made shit up. Memories do that all the time.
Not all memories: some don’t. Except for those two instances, I have never, ever remembered something wrong in my life. Either I know or I don’t. I don’t want to get into a discussion on this: some people’s brains work in subtly different ways from others’. In my case, I never remember stuff wrong. When I’m making shit up or taking a guess, I actually know it; I know there’s a gap, a blank. That’s the way it is, period. Sometimes I’m tempted to fill it, but I systematically shy away from that, like an instinctive feeling of danger.
Sorry for the harsh tone; it’s just that I remember (heh) having a similar discussion on the TV Tropes fora, and it took me, like, pages to convince them of this, and I wouldn’t want to replay it. Anyway, what does the accuracy of my assessment of my own memory have to do with the topic? Plus, you can’t question people’s subjective experiences of their own brains: there’s just no way you can decide the discussion through evidence either way. I could argue that people who thought they read in their books actually heard the words, or thought them without the intermediary of letters on the paper, or something like that. Such an argument won’t lead anyone anywhere.
...
Hey, that’s an interesting topic too: what if brain-scanning or brain interfaces allow for an objective understanding of what happens inside? What if some “gifted” people have brains whose functioning is so far from the norm that it’s indecipherable to the machines (at least until they come up with a better algorithm)?
It’d certainly help improve a lot of stuff, understanding how people think.
There’s a difference between reading something where the text changes, and being wholly unable to read. It would be easy to create a webpage in which the text changed periodically, just using Javascript.
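Something like this minimal sketch would do it (the sample sentences and the three-second interval are just made up for illustration; you could just as well trigger the swap on scroll or mouse movement instead of a timer):

```html
<!DOCTYPE html>
<html>
<body>
  <p id="passage">You think you just read this sentence.</p>
  <script>
    // Hypothetical sample texts; any strings would do.
    var variants = [
      "You think you just read this sentence.",
      "But it said something else a moment ago.",
      "Check again: it changes behind your back."
    ];
    var i = 0;
    // Swap the visible text every three seconds.
    setInterval(function () {
      i = (i + 1) % variants.length;
      document.getElementById("passage").textContent = variants[i];
    }, 3000);
  </script>
</body>
</html>
```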
Also, note that in the Batman episode, he makes the same claim that you did, that the “reading” part of the brain doesn’t work in dreams.
… I am so ashamed...
It must be Kevin Conroy’s Goddamn Batman voice.
Yeah, but the text changing as soon as you look away… that’s gaslighting… Anyhoo, I see how the distinction you draw is relevant. I also see how most people wouldn’t tell the difference until someone pointed it out to them.
I remember reading about an experiment in which they did exactly that: change the text on a computer screen during eye saccades, when the eyes aren’t processing data, i.e. while you’re “not looking”. Which reminded me of trying to read in dreams, certainly.
I once had a lucid dream in which I decided to see how good my latent memory was by picking up a book from my shelf in the dream and reading the first line, to see, when I woke, if I’d got it right. But it was just nonsense babble which, as you point out, kept changing. Oh well.
We have those things, but then we don’t have them.
If you want to make a story that sounds plausible, try and consider not just “would this technology work?” but “if we could make this, would we use it?” Consider the other things we’d be able to make if we could make that particular technology.
Whoops, didn’t mean to comment.
How does this sound for a start?
I woke up to the smell of artificial carrots and a feeling of wetness on my cheek; opening my eyes, I quickly identified the cause. It was my robo-sagan-dog, part of the reconstructed Carl Sagan group mind. Yawning, I rose up in bed and picked up my terminal from the bedside table. I’d been outbid on nootropics on the bitcoin market; I decided to worry about it later. Switching to a news feed, I learned that reconstructed Carl Sagan had been appointed president again, 3rd term running. Ever since the system had realized democracy is stupid, they just had Yudkowsky select our presidents. Reconstructed Carl Sagan was the popular choice anyway: his namesake had been the public face of science, and he was the public face of the state. Some people thought his personality wasn’t true to the original, but you couldn’t argue with success: since he had first been elected 8 years ago, public opinion of SIAI had risen nearly 20%.
To be continued…