I thought that by “considered to be literally possible and unlikely only by virtue of conjunction not any fundamental limitation on what the author could imagine”, you meant two things:
First, the definition of hard sci-fi as given in those links, i.e. nothing in the story must contradict currently known science.
Second, that the consequences of those technologies must be humanly imaginable and in principle predictable by the author. In other words, after-the-Singularity stories cannot be considered hard sci-fi, because it’s fundamentally impossible for us to imagine the consequences of a greater-than-human intelligence.
I was saying that the common usage of the term hard sci-fi only requires that the first of these criteria be met, not necessarily both. Was this not what you meant?
I am unable to distinguish between these, or even clearly comprehend what such a distinction would mean.
Well, take the Internet as an example. There were some pretty good predictions about something like the Internet. But for someone in 1980, say, to write a story set in 2020 and come up with all of the consequences of the Internet would have been impossible. I don't think anyone predicted Wikipedia or Facebook or 4chan, or the impact those would have on our daily life. At least they didn't predict the combined impact of all three, plus various other services besides. Heck, even we don't yet know what all the consequences will be, since there are probably lots of ways of using the 'Net that still remain to be invented.
However, what they could do is write a sci-fi story about the consequences they could imagine. Maybe they predicted online shopping, e-mail, and working remotely, or maybe they based their story on this eighties study. In any case, their story would have been consistent with what the science of 1980 knew.
If we apply both criteria 1 and 2, this would not have been hard sci-fi, as it couldn’t have predicted all the consequences of the Internet. If we apply only criterion 1, then it would have been hard sci-fi.
Likewise with the Singularity. We have no way of predicting all the things that a superintelligence might do. But we can come up with things a superintelligence could plausibly do that are consistent with science as we know it. If someone writes a story where a superintelligence escapes into the Internet by hacking a million computers and running as a distributed intelligence, and then launches a brilliant social engineering scheme targeting all of humanity after it has read every psych, sociology, and marketing paper ever published, that contradicts no science that we know of. So going only by the first criterion, that's hard sci-fi.
I don't think I really have your concept of a surface-level, discrete "consequence". One intuition is telling me I'm thinking too much like reality. I'm not really sure how that would work, but it probably has something to do with how the simulations authors have in their heads differ from reality. I'm not really in the best condition right now; maybe I'll get it later.