Has anyone read The Integral Trees by Larry Niven? Something I always wonder about people supporting cryonics is why do they assume that the future will be a good place to live in? Why do they assume they will have any rights? Or do they figure that if they are revived, FAI has most likely come to pass?
A dystopian society is unlikely to thaw out and revive people in cryostasis. Cryostasis revival makes sense for societies that are benevolent and have a lot of free resources. Also, be careful not to generalize from fictional examples. They are not evidence. That’s all the more the case here because science fiction is in general a highly reactionary genre: even as it uses advanced technology, it either warns about its perils or uses it as an excuse to hearken back to a more romantic era. For example, look at how many science fiction stories and universes have feudal systems of government.
Now that’s a reasonable argument: benevolent, resource rich societies are more likely to thaw people. Thanks.
And yes, that’s true, science fiction does often look at what could go really wrong.
There certainly is a large chunk of science fiction that could be accurately described as medieval fantasy moved to a superficially futuristic setting.
There is also the legitimate question of how fragile our liberal norms and economy are—do they depend on population density? on the ratio between the reach of weapons and the reach of communications? on the dominance of a particular set of subcultures that attained to industrial hegemony through what amounts to chance and might not be repeated?
If egalitarianism is not robust to changes in the sociological environment, then there might simply be many more possible futures with feudal regimes than with capitalist or democratic regimes.
Yes, but how often do they bother to explain this rise other than in some very vague way? And it isn’t just feudalism. Look, for example, at Dune, where not only is there a feudal system, but the technology conveniently makes sword fighting once again a reasonable melee tactic. Additional evidence for the romantic nature is that the stories are almost invariably about people who happen to be nobles, so little thought is given to how unpleasant feudalism is for the lower classes.
The only author I’ve ever seen give a plausible set of explanations for the presence of feudal cultures is Bujold in her Vorkosigan books. But it is important to note that there are many different governmental systems in those books, including dictatorships, anarcho-capitalist worlds, and lots of other things. And she’s very aware that feudalism absolutely sucks for the serfs.
I don’t think that most of these writers are arriving at their societies by probabilistic extrapolation. Rather, they are just writing what they want their societies to have. (Incidentally, I suspect that many of these cultural and political norms are much more fragile than we like to think. There are likely large swaths of the space of political systems that we haven’t even thought about. There might well be very stable systems that we haven’t conceived of yet. Or there might be a Markov chain governing which systems are likely to transition into which others.)
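The Markov-chain idea can be made concrete with a toy calculation. Everything below is invented purely for illustration (the three regime categories and every transition probability are assumptions, not data): the point is just that if egalitarian systems leak probability toward feudal ones faster than the reverse, the long-run distribution over futures ends up dominated by feudal regimes even from a democratic start.

```python
import numpy as np

# Hypothetical transition matrix over three regime types.
# Rows: current system; columns: next system (per "era").
# All probabilities are made up for illustration only.
states = ["democratic", "feudal", "other"]
P = np.array([
    [0.80, 0.15, 0.05],  # democratic: somewhat fragile
    [0.05, 0.90, 0.05],  # feudal: sticky
    [0.30, 0.40, 0.30],  # other: tends to drift toward feudalism
])

# Long-run (stationary) distribution: the left eigenvector of P
# for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()

for s, p in zip(states, stationary):
    print(f"{s}: {p:.2f}")
```

With these made-up numbers the stationary distribution works out to roughly (0.27, 0.67, 0.07): about two thirds of long-run futures are feudal, which is the “many more possible futures with feudal regimes” intuition in quantitative form.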
Those aren’t the only possibilities—much more likely is the Rule of Cool. Wielding a sword is cooler than wielding a gun, and swordfights are more interesting than gunfights.
Granted. Some are, though. Two more counter-examples, besides Bujold:
Asimov’s Foundation, e.g. the planet of Anacreon. Feudalism is portrayed as the result of a security dilemma and the stagnation of science, as reducing the access of ordinary people to effective medicine and nuclear power, and as producing a variety of sham nobles who deserve mockery.
Brave New World. Feudalism is portrayed as a logical outgrowth of an endless drive toward bureaucratic/administrative efficiency in a world where personal freedom has been subordinated to personal pleasure. Regionally-based bureaucrat-lords with concentrically overlapping territories ‘earn’ their authority not by protecting ordinary serfs from the danger of death but from the danger of momentary boredom or discomfort. Huxley doesn’t seem overly fond of this feudalism; the question of whether a romantic would prefer this sort of system is, at worst, left as an exercise for the reader.
Huh. I had not really thought of Brave New World as using a feudal system, but that really is what it is. It might then be more accurate to make the point that the vast majority of the other cases have systems that aren’t just feudal but in which the positions are inherited.
I agree that some of these writers are extrapolating. Since Asimov is explicitly writing in a world where the running theme is the ability to reliably predict social changes, it shouldn’t be that surprising that he’d actually try to do so. (Note also that Asimov avoids the standard trap here of having protagonists who are nobles.)
This is a little too broad for me to be comfortable with. There are certainly subgenres and authors that are reactionary, but there are also those that are quite the opposite. Military SF and space opera (which, frankly, is just fantasy with lasers) are usually quite reactionary. Cyberpunk is cautionary, but not so much about technology as about capitalism. Post-apocalyptic SF is sometimes about technology getting too great for us to handle, but the jewel of the genre, A Canticle for Leibowitz, is about the tragedy of a nationwide book burning. Post-cyberpunk is characterized by its relative optimism. Hard SF varies in its political sensibilities (there seem to be a lot of libertarians), but it’s almost always pro-tech for obvious reasons.
I’m having a hard time coming up with authors that fit the reactionary bill, but that might be because I read the wrong subgenres. And the libertarians are hard to classify. Michael Crichton is the obvious one that occurs to me. Larry Niven, I suppose. Card and Heinlein could be put there, though both are more complicated than that. Herbert. In the other camp: Brin, Kim Stanley Robinson, Le Guin, Dick, Neal Stephenson, Gibson, Vonnegut, Orwell, Doctorow, Bradbury. Asimov and Clarke probably fall in the second camp...
Am I just missing the reactionary stuff?
I think it would be fair to say that the more famous authors are in general less reactionary. But if I had to list reactionaries I’d list Herbert, Crichton, Pournelle, Weber, Anderson, McCaffrey (arguable, but there are definite aspects in Pern), Koontz, Shelley, Lovecraft, and to some extent Niven and Card.
Also, there seems to be a lot more of a general reactionary bent in the less successful sci-fi. The major authors seem to have less of it (possibly because their views are so distinctive that they override anything as simple as being reactionary or not).
The example you give of A Canticle for Leibowitz is more complicated: while the book burning and such is portrayed as bad, it’s still a response to a nuclear apocalypse. Indeed, in that regard, almost any science fiction set after a nuclear war has a reactionary aspect.
If we move outside literature, say into movies and TV, the general pattern is pretty clear. While people often think of Star Trek as optimistic about technology, even in TOS many episodes dealt with the threat of new technologies (androids and intelligent computers both came up). The Outer Limits, in both its original form and its reincarnation, was generally anti-technology. It was a safe bet in any episode of the reincarnation that any new knowledge or technology was going to fail or cause horribly disturbing side effects, summarized with a moralistic voice-over at the end that would make Leon Kass proud. Similarly, Doctor Who has had multiple incarnations of the Doctor lecture about how bad trying to be immortal is. Movies have a similar track record (The Terminator, Hollow Man, and The 6th Day, for just a few examples; many more could be given).
I agree that overall this was likely a hasty generalization. Science fiction has reactionary elements but it is by no means an intrinsically reactionary genre.
Shelley and Lovecraft are good calls; I had forgotten to think about the early stuff. We can put Verne in the progressive camp, I think.
There is an interesting division among the “cautionary tales”. There’s the Crichton/Shelley/Romero-zombie tradition: humans try to play God and get their asses kicked as punishment, unless traditional values/folkways come to the rescue. And then there’s the more leftist tradition: new technology has implications that capitalism or statism isn’t equipped to deal with. Here we can include H.G. Wells, Brave New World and other dystopias, cyberpunk, Gattaca, a lot of post-nuclear-war stuff, etc.
Are both groups reactionary under your definition or just the first?
I totally agree about Hollywood. There is also the whole alien-invasion subgenre, which originally was really about Cold War anxiety. Cloverfield probably counts as a modern-day equivalent.
For anyone who hasn’t already seen it — Caveman Science Fiction!
How do you classify Egan? Pretty pro-tech in his novels, iirc, but a pretty high proportion of his short stories are effectively horror about new tech.
That isn’t how his short stories have struck me. A handful that come to mind about near-future technology, not having the books in front of me, are Axiomatic, Silver Fire, The Moral Virologist, Worthless, and one whose name I forget about artificial nanomcguffins that let you gradually reprogram your own mind just by wishing the change you want. They’re pretty dark, but I wouldn’t classify them as horror. That is, I don’t read them as saying “these are things that man should not know”, but “after such knowledge, these are issues that must be faced”.
I think those are the “Grey Knights” from “Chaff”.
Was this intended to be a reply to Jack’s post?
Yes, sorry.
The original The War of the Worlds by H.G. Wells has many similarities to the era’s “invasion stories” in which a hostile foreign power (usually Germany or France) launches a very successful surprise invasion of Great Britain. Wells just replaced Germany with Martians.
The point about there being different categories is one I had not thought about. I agree that the first is unambiguously in the reactionary mold. I’m not sure that the second is always reactionary: it might depend on the degree to which the technology is caricatured. For example, Brave New World and Gattaca both seem to be such extreme caricatures of what might happen with those technologies that they come across as reactionary. That’s in contrast with, say, A Deepness in the Sky, which takes the same technologies and shows different societal responses to them (careful use, arguable abuse, and outright tyranny). Similarly, a lot of Bujold’s work raises serious ethical and policy issues brought up by specific, plausible technologies, but she’s generally careful to show both use and abuse, not just horrific dystopias.
This sounds a lot like debating definitions. Is “reactionary” really a useful term here? It sounds to me like you’re trying to shoehorn it into a context where it doesn’t really fit. Wouldn’t replacing it with a more precise and narrower term, such as “romantic about traditional societies”, make the discussion clearer?
That’s a valid point. Maybe split it into two forms: 1) romantic attitudes towards traditional societies, and 2) extreme caricatures of the potential negative ramifications of new technologies. The two seem to be highly correlated in science fiction; many of the examples given show aspects of both.
Science fiction has a bias towards things going wrong.
In the particular case of cryonics, if there’s a dystopian future where the majority of people have few or no rights, it’s a disaster all around, but as ata says, you can presumably commit suicide. There’s a chance that even that will be unfeasible—for example if brains are used, while conscious, for their processing power. This doesn’t seem likely, but I don’t know how to evaluate it in detail.
The other case—people in general have rights, but thawed people, or thawed people from before a certain point in time, do not—requires that thawed people do not have a constituency. This doesn’t seem terribly likely, though as I recall, Niven has it that it takes a very long time for thawing to be developed.
Normally, I would expect there to be commercial and legal pressures for thawed people to be treated decently. (I’ve never seen an SF story in which thawed people are a political football, but it’s an interesting premise.)
I think the trend is towards better futures (including richer, with less reason to enslave people), but there’s no guarantee. I think it’s much more likely that frozen people won’t be revived than that they’ll be revived into a bad situation.
All fiction has a bias towards things going wrong. Need some kind of conflict.
(Reality also has a bias towards things going wrong, but if Fun Theory is correct, then unlike with fiction, we can change that condition without reducing the demand for reality.)
Science fiction has a stronger bias towards things going wrong on a grand scale than most fiction does.
Otherwise, the advanced technology would just make everything great. They need extra conflict to counteract that.
Can’t speak for any other cryonics advocates, but I find that to be likely. I see AI either destroying or saving the world once it’s invented, if we haven’t destroyed ourselves some other way first, and one of those could easily happen before the world has a chance to turn dystopian. But in any case, if you wake up and find yourself in a world that you couldn’t possibly bear to live in, you can just kill yourself and be no worse off than if you hadn’t tried cryonics in the first place.
Unless it’s unFriendly AI that revives you and tortures you forever.
Strongly unFriendly AI (the kind that tortures you eternally, rather than kills you and uses your matter to make paperclips) would be about as difficult to create as Friendly AI. And since few people would try to create one, I don’t think it’s a likely future.
“unFriendly” doesn’t mean “evil”, just “not explicitly Friendly”. Assuming you already have an AI capable of recursive self-improvement, it’s easy to give it a goal system that will result in the world being destroyed (not because it hates us, but because it can think of better things to do with all this matter), but creating one that’s actually evil or that hates humans (or has some other reason that torturing us would make sense in its goal system) would probably be nearly as hard as the problem of Friendliness itself, as gregconen pointed out.
Actually, it’s quite possible to deny physical means of suicide to prisoners, and sufficiently good longevity tech could make torture for a very long time possible.
I think something like that (say, for actions which are not currently considered crimes) is possible, considering the observable cruelty of some fraction of the human race, but not very likely. On the other hand, I don’t know how to begin to quantify how unlikely it is.