Rationalist Fiction
Followup to: Lawrence Watt-Evans’s Fiction
Reply to: On Juvenile Fiction
MBlume asked us to remember what childhood stories might have influenced us toward rationality; and this was given such excellent answers as Norton Juster’s The Phantom Tollbooth. So now I’d like to ask a related question, expanding the purview to all novels (adult or child, SF&F or literary): Where can we find explicitly rationalist fiction?
Now of course there are a great many characters who claim to be using logic. The whole genre of mystery stories with seemingly logical detectives, starting from Sherlock Holmes, would stand in witness of that.
But when you look at what Sherlock Holmes does—you can’t go out and do it at home. Sherlock Holmes is not really operating by any sort of reproducible method. He is operating by magically finding the right clues and carrying out magically correct complicated chains of deduction. Maybe it’s just me, but it seems to me that reading Sherlock Holmes does not inspire you to go and do likewise. Holmes is a mutant superhero. And even if you did try to imitate him, it would never work in real life.
Contrast to A. E. van Vogt’s Null-A novels, starting with The World of Null-A. Now let it first be admitted that Van Vogt had a number of flaws as an author. With that said, it is probably a historical fact about my causal origins, that the Null-A books had an impact on my mind that I didn’t even realize until years later. It’s not the sort of book that I read over and over again, I read it and then put it down, but -
- but this is where I was first exposed to such concepts as “The map is not the territory” and “rose1 is not rose2”.
Null-A stands for “Non-Aristotelian”, and the premise of the ficton is that studying Korzybski’s General Semantics makes you a superhero. Let’s not really go into that part. But in the Null-A ficton:
1) The protagonist, Gilbert Gosseyn, is not a mutant. He has studied rationality techniques that have been systematized and are used by other members of his society, not just him.
2) Van Vogt tells us what (some of) these principles are, rather than leaving them mysteriously blank—we can’t be Gilbert Gosseyn, but we can at least use some of this stuff.
3) Van Vogt conveys the experience, shows Gosseyn in action using the principles, rather than leaving them to triumphant explanation afterward. We are put into Gosseyn’s shoes at the moment of his e.g. making a conscious distinction between two different things referred to by the same name.
This is a high standard to meet.
But Marc Stiegler’s David’s Sling (quoted in e.g. this post) meets this same standard: The Zetetics derive their abilities from training in a systematized tradition; we get to see the actual principles the Zetetics are using, and they’re ones we could try to apply in real life; and we’re put into their shoes at the moments of their use.
I mention this to show that it isn’t only van Vogt who’s ever done this.
However...
...those two examples actually exhaust my knowledge of the science fiction and fantasy literature, so far as I can remember.
It really is a very high standard we’re setting here. To realistically show your characters using an interesting technique of rationality, you have to know an interesting technique of rationality. Van Vogt was inspired by Korzybski, who—I discovered when I looked this up, just now—actually invented the phrase “The map is not the territory”. Marc Stiegler was inspired by, among other sources, Eric Drexler and Robin Hanson. (Stiegler has another novel called Earthweb about using prediction markets to defend the Earth from invading aliens, which was my introduction to the concept of prediction markets.)
If I relax the standard to focus mainly on item (3), fiction that transmits a powerful experience of using rationality, then I could add in Greg Egan’s Distress, some of Lawrence Watt-Evans’s strange little novels, the travails of Salvor Hardin in the first Foundation novel, and probably any number of others.
But what I’m really interested in is whether there’s any full-blown Rationalist Fiction that I’ve missed—or maybe just haven’t remembered. Failing that, I’m interested in stories that merely do a good job of conveying a rationalist experience. (Please specify which of these cases is true, if you make a recommendation.)
How about Scooby Doo? It’s elementary, but I spent a lot of time on it back when I was 3-4 and would have continued watching for somewhat longer if they hadn’t started introducing stories where the magic WAS real.
The moral “it’s ALWAYS natural” and the extremely repetitive plots (repetition is, I suspect, very good for kids) are basic but definitely positive.
Only saw one or two episodes, but I think Kimba the White Lion may also have had positive but elementary rationalist messages.
Especially because Scooby Doo always featured a villain who was taking advantage of peoples’ superstition and irrationality.
Scooby Doo, absolutely. The mystery was always solved; the reason was always given.
How about The Bloodhound Gang on that PBS show Electric Company? Same formula as Scooby Doo.
Although admittedly this is not fiction, exactly.
I’m...assuming this isn’t the same Bloodhound Gang which went on to record The Bad Touch and Foxtrot Uniform Charlie Kilo?
I believe the segment on the Electric Company is where that group derived its name. Although I’m not sure. No taste for that sort of thing.
checks wikipedia
you’re quite right =)
It’s odd I don’t remember the original Bloodhound Gang then; I do remember 3-2-1 Contact...did the Bloodhound Gang perhaps either replace or predate Mathnet? Because that’s what I remember—two faux-FBI agents solving crimes by triangulation and the Fibonacci sequence and so forth.
The Bloodhound Gang predated MathNet. They were a subshow of 3-2-1 Contact; MathNet was a subshow of Square One, which followed 3-2-1 by many years.
Thank you! I was completely mixing up Contact and Square One—it’s been too long.
Ah, the Angle Dance. So many memories...
Mathman, no! 7 minus 4 is not 4!
Aaargh! How could you do this? You almost NEVER win a level!
I loved Mathnet! ^_^ 1 1 2 3 5 -- eureka!
3-2-1 Contact—that was the name of that show—not the Electric Company. That’s the bad 80s hit, isn’t it …?
I don’t remember seeing anything called Mathnet. My 3-2-1 Contact memories are roughly 1980-1984, somewhere thereabouts. Yours?
ah, about...1991-1994, so that explains it nicely. Did you get Reading Rainbow and Where in the World is Carmen San Diego before and after, by any chance?
Did see Reading Rainbow, although I think this was later… late 80s? We had Where In The World Is Carmen San Diego as a computer game, late 80s also, I believe. The game was boring as sin.
Truth. :P
I liked Terry Pratchett’s book “The Wee Free Men”. It’s a fantasy novel where the main character saves herself by having “First Sight” (original seeing; the ability to notice what her eyes are telling her even when she wouldn’t have expected it) and “Second Thoughts” (cognitive reflectiveness; the ability to look over her own thinking for biases and distortions).
In defense of Conan Doyle, Wikipedia goes so far as to claim that later detective fiction actually became less realistic, as writers shifted attention to psychology rather than forensics.
The Prince of Nothing. The Prince of Nothing. The Prince of Nothing. I’ll say it as many times as I have to to get people on this blog to read it. The Prince of Nothing Trilogy. The Darkness That Comes Before. R. Scott Bakker.
True, Anasurimbor Kellhus is one of the “mutants”, even to the point of the author explicitly stating the Dunyain spent a few centuries running a eugenics program to get an intellect of that stature. But the later books of the trilogy also go into some detail about the rationalist training Kellhus undergoes with the Dunyain, the methods he uses, and even a little of the social structure of the Ishual monastery. He’s one of the perspective characters, and we see him using his techniques; there’s always a strong sense of “I could do that”, which remains right up until I actually try. And there’s no better work to demonstrate the “sense that more is possible”, or the ways in which a real rationalist would be the polar opposite of the “Spock” prototype, or a bunch of other things (disclaimer: many don’t become fully clear until The Thousandfold Thought, the last book in the trilogy).
The Dunyain conception of rationality isn’t exactly like our own, and rereading it recently there were a few things that bothered me, but overall it’s basically the story of a fantasy hero who is as good at probability theory as Aragorn is at swordfighting, with similar results.
I just finished the first book of the trilogy and it disappointed me. Kellhus is actually much better than Aragorn at swordfighting, which saves his ass all the time when he really should have thought in advance. His other (mental) superpower mostly manifests itself in charming people with NLP-like techniques, not probability theory: some fragments read almost like PUA sequences.
I’m still open to something that would fit your description, though :-)
Started reading the first one—from the prologue alone, Kellhus seems absurdly strong/skilled/fast. He reads people’s minds by looking at the patterns of their facial muscles, catches arrows out of the air, kills large groups of enemies by himself in hand-to-hand combat, etc. I’m not sure what lessons could really be derived from this, since these actions are far beyond the realm of normal human ability. Does the series/book get any better, or am I missing something here?
I’ve heard this complaint from others, and it’s valid. Where the series really starts coming into its own, in my opinion, is around the end of the first book/ start of the second where Kellhus gets involved in politics and persuasion. This is the part that gives me a better understanding of “superintelligences” and what they might do.
I’ll agree that Scott Bakker’s stuff is great rationality fiction.
I love the protag heading into the wilderness with his rationality training, encountering evidence that indicates error, and updating his beliefs. I think it’s awesome when he integrates the new evidence into his model of the world, investigates the confusing things about the world, and resolves them. Then he exploits his greater knowledge of the world’s structure to achieve amazing things.
I searched the site to see whether anyone else here had read this series, and specifically whether anyone else had put quotes in the quotes thread. There’s some great dialogue in book one that I think would fit well. (There’s less in book two, and I’ve just started book three.) Glad to see people have heard of it!
I agree some aspects of Kellhus’s abilities are a little cheesy (the probability trance and the NLP-style memory hacking come to mind), but he is still essentially a rationalist character, though his lack of morality means I can’t really class him a hero.
The author blurb seems to indicate he’s a professional philosopher—I’d be curious to read some of his writing.
It’s interesting that “Watchmen” (in theaters now) is the Hamlet of a genre that is strongly anti-rational, yet has numerous rational elements. The most important, I think, is that the Earth is saved only by inhumanly rational people making rational decisions—rational decisions which the typical viewer cannot condone even after the fact, even knowing that they saved the Earth. In so doing, it proves to these viewers, not just consciously but deep in their gut, that they themselves would doom Earth by their irrationality.
On the other hand, Dr. Manhattan embodies the popular culture’s prejudice against rationality perfectly when he explains why he isn’t interested in life by saying, “A living body and a dead one contain the same number of atoms”. People try to imagine why scientists are interested in little things under microscopes, but not in gossip or football games. They can’t imagine that the little things under the microscope are actually more interesting, so they conclude that scientists are cold and boring, and thus unable to see how interesting gossip and football really are.
You can’t do what the character does, and in the comic it’s strongly implied that it didn’t work anyway.
Are you responding to the first paragraph, or the second?
First.
Second!
No, seriously: EY was responding to a question that I asked and then deleted a few seconds later because I figured it out myself. He’s fast.
I see Dr. Manhattan’s problem as not being rationality, but A) seeing things at a subatomic level and having to expend effort to see them at a human level, B) seeing all parts of time, so that ‘to the left of something alive’ and ‘after being alive’ are basically the same idea, and C) being so struck by fatalism due to B that he’s basically given up.
I think you’re steelmanning too much.
Dr. Manhattan shows interest in scientific experiments, even though scientific experiments should be prone to all of those problems as well. You never see him say he doesn’t care about doing an experiment because the number of atoms before and after the experiment is the same.
Furthermore, Dr. Manhattan “changes his mind” when he sees how Laurie is worthy of respect despite her background. That’s not a very close fit to overcoming the problems you describe, but it is a close fit for overcoming the “problems” of stereotypical “rationality”.
I think that fits as part of point C. He has become Jacques the Fatalist minus the sense of humor.
In defense of Sherlock Holmes:
The typical Sherlock Holmes story has Holmes perform twice. First he impresses his client with a seemingly impossible deduction; then he uses another deduction to solve the mystery. Watson or the client convince Holmes to explain the first deduction, which gives the reader the template Holmes will use for the second (likely inferences from small details). The data that Holmes uses to make the second deduction are in the text and available to the reader—the reader’s challenge is to make Holmes’s inference in advance.
Holmes himself attributes his success to observation, not rationality. (There’s a startling passage in A Study In Scarlet where Holmes tells Watson that he can’t be bothered to remember that the earth orbits the sun! Visit the link and search for ‘Copernican Theory’ in the full text for the passage.) The Sherlock Holmes stories are intended to be exercises in attention to detail, which is surely a useful skill for a rationalist.
When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.
Beautiful comment, but I’d add that whatever remains of the hypotheses you considered is often more improbable than your having missed an unconsidered alternative.
I just stumbled across this and felt this comment and the one above it were worth reminding everyone of in light of the Knox case discussion. Way too many of our discussions have involved trying to come up with accounts of the crime that make sense of all the evidence. In retrospect I would label such discussions as fun, but unhelpful.
-- Dirk Gently
I view Dirk Gently as a kind of wonderfully effective strawman, and his stories were a great aid to realizing I was an atheist, because at first he seems correct: surely, rather than a “localized meteorological phenomenon”, it makes more sense that the guy who’s been rained on for 14 straight years is some kind of rain god.
And then you think about what would happen in the real world, and realize that no, even if someone had been rained on for 14 years straight, I would not believe that they were a rain god. Because rain gods are actually impossible.
That part hit me like a punch in the gut.
In the real world, these are mostly just games we play with words.
Someone who has been rained on for 14 years straight has an extremely surprising property.
The label we assign that property matters a little, since it affects our subsequent behavior with respect to it. If I call it “rain god” I may be more inclined to worship it; if I label it a “localized meteorological phenomenon” I might be more inclined to study it using the techniques of meteorology; if I label it an extremely unlikely coincidence I might be more inclined not to study it at all; if I label it the work of pranksters with advanced technology I might be more inclined to look for pranksters, etc.
Etc.
But other things matter far more.
Do they have any other equally unlikely observable attributes, for example?
Did anything equally unlikely occur 14 years ago?
Etc.
Worrying overmuch about labels can distract us from actually observing what’s in front of us.
So it wouldn’t be possible to convince you that 2+2=3? No matter the evidence?
If someone claimed to be a rain god, or was credibly claimed to be a rain god based on previous evidence, and tested this by going through an EMP, stripping, generally removing any plausible way technological means could be associated with them, then being transported while in a medically-induced coma to a series of destinations not disclosed to them in advance in large deserts, and at all times was directly under, in, or above rainclouds, defying all meteorological patterns predicted by the best models just in advance of the trip, I find it hard to see how you could reasonably fail to assign significant probability to a model which made the same predictions as “this person is a rain god”.
Where does personal insanity become a factor in your probability estimates?
In some sense, basically everywhere there is a very-low or very-high probability belief, since obviously I can’t be more confident in any belief than I can be in the reliableness of my system of reasoning. I definitely consider this when I’m evaluating the proper strength of nearly-certain beliefs. In another sense, almost nowhere.
I don’t know exactly how confident I should be in my sanity, except that the probability of insanity is small. Also, I’m not confident there would be any evidence distinguishing ‘sane and rational’ from ‘insane but apparently rational’. I model a logical-insane VAuroch as being like the anti-inductors; following different rules which, according to their own standards, are self-consistent.
Since I can’t determine how to quantify it, my response has been to treat all other beliefs as conditioned on “my reasoning process is basically sound”, which leaves a fair number of my beliefs with tacit probability 1; if I find reason to question any of these beliefs, I will have to rederive every belief from the original evidence as much as possible, because it will have exposed a significant flaw in the means by which I determine what beliefs to hold. Largely this consists of mathematical proofs, but also things like “there is not currently a flying green elephant in this room” and “an extant rain god is mutually incompatible with reductionism”.
This is an amazingly apt description of the mind-state that Robert Anton Wilson called “Chapel Perilous”.
It is interesting that you think so, but I can’t make head or tail of his description of the state, and other descriptions don’t bear any particular resemblance to the state of mind I describe.
My position on the matter boils down to “All my beliefs may be unjustified, but until I have evidence suggesting they are, I should provisionally assume the opposite, because worrying about it is counterproductive.”
It’d be possible, but it would take more evidence than someone having been rained on for 14 years.
If you’re talking about models and predictions you’ve already made the relevant leap, IMO. Even if you’re calling the person a “god”, you’re still taking a fundamentally naturalistic approach; you’re not assuming basic mental entities, you’re not worshiping.
Calling someone a rain god is making the prediction “If I worship this person, rain will occur at the times I need it more often than it would if I did not worship this person.” Worship doesn’t stop being worship just because it works.
This reminds me of a bit in the Illuminatus! trilogy—there was a man who had filing cabinets full of information about the Kennedy assassination. [1]
He kept hoping that he’d find one more piece of information which would make sense of everything he’d accumulated, little realizing that most of what he had was people getting things wrong and covering their asses.
[1] Once upon a time, it was normal to store information in filing cabinets, and there was only one Kennedy assassination.
Sounds like me and my PhD project.
If you don’t mind, what was the subject?
The thing is, it’s usually much easier to solve the mystery by getting a feel for Doyle’s tells than by trying to piece together whatever abstruse chain of deductions Holmes is going to use. Examples:
Watson is an incredibly good judge of character. If he thinks someone seems cold, that person is heartless. If he says someone seems shifty, they are guilty of something (although maybe not the crime under investigation).
The woman never did it. The only two exceptions to this are a story in which he clears one woman to implicate another (who is the only other possible suspect), and one in which an innocent woman is corrupted and manipulated by an evil man.
Just from those two rules you can usually figure out whodunit, at which point you can occupy yourself by figuring out how, a task made relatively simple by conservation of detail.
“Holmes himself attributes his success to observation, not rationality. (There’s a startling passage in A Study In Scarlet where Holmes tells Watson that he can’t be bothered to remember that the earth orbits the sun!)”
He then states that such knowledge can have no influence on the things he’s concerned about, and so he doesn’t bother learning it.
That seems like a starkly rational position.
I appreciate your defense of Holmes. As I mentioned in another comment, I haven’t read much of him, but I do remember one particular passage which annoyed me: I felt I could have figured out the deduction had I been there personally, but because of the way Dr Watson narrates, it eluded me.
Basically, Watson describes the client as wearing some sort of “odd circular jewelry with square holes through which a thin string was passed” (paraphrased from memory). From this, Sherlock deduces that the client has recently been on vacation to China. How? Well, that jewelry consists of Chinese coins, of course!
I know what Chinese coins look like, but was completely misled by Watson’s description. Furthermore, “recent vacation to China” is somewhat of a lucky guess. Perhaps it was a friend who went to China, and brought these coins back as a souvenir gift.
Fortunately you can rely on Conan Doyle and writers in general being parsimonious: unlike reality, stories don’t contain odd details unless they’re important to the plot.
I don’t think that most Holmes stories should even be read as we do modern mystery stories. They are adventure stories, and Conan Doyle is more like an intellectual Raymond Chandler than a precursor to Agatha Christie (although I suppose, in fact, that he is both). It’s simply impossible to solve most of the early ones (including A Study in Scarlet), although the later stories (which postdate Christie’s first stories) were more honest mysteries. (At one point he even has Watson apologise for having been unfair in the past.)
Hello Less Wrong,
My first comment ever. I have been lurking on Less Wrong for several years already (and on Overcoming Bias before there was even a Less Wrong site), and have been mostly cyber-stalking EY ever since I caught wind of his AI-Box exploits.
This year, 2012, on a whim, I joined NaNoWriMo (National Novel Writing Month) last November and started writing a novel I had been randomly thinking of making, “Judge on a Boat”. The setting is a world in which humanity manages to grow up a little without blowing itself up; rationality techniques are taught regularly (a certain minimum level of knowledge of these techniques is required of all citizens); practical mind simulations and artificial intelligence are still far off (but being actively worked on, somewhere way, way off in the background of the novel); and experts in morality and ethical systems, called “Judges”, are given the proper respect they deserve.
The premise is that a trainee Judge, Nicole Angel, visiting Earth for her final examinations (she’s from Mars Lagrange Point 1), gets marooned on a lifeboat with a small group of people. She is then forced to act as a full Judge (despite not actually passing the exams yet) for the people in the boat.
The other premise is that a new Judge, Emmanuel Verrens, is reading about Nicole Angel’s adventures in novel form, under the guidance of high-ranking Judge David Adams. Emmanuel’s thinking is remarkably similar to hers, despite her being a fictional character -
The novel was intended to be more about moral philosophy than strictly rationality, but as I was using Less Wrong as an ideas pump, it ended up being more about rationality, really. (^^)v
Anyway, if anyone is interested in the early draft text, see this.
My name is Dave, this is my first post.
I consider myself an aspiring newbie epistemic rationalist, having been turned on to it by HPMOR. I’ve been studying it for a couple of months now and feel I have already greatly benefited from learning even the most basic concepts. I have read “Judge on a Boat” and found it quite as satisfying as HPMOR, and would highly recommend it to anyone who is looking for another highly engaging, thought-provoking piece of rational fiction.
Jimmy, the main character of Fleep is, at least, a very good empiricist. I’m undecided on how artificial the evidence he has to work from is, but it’s an entertaining story, and you can probably read it in about ten minutes.
Admittedly, given the actual answer, there may be an additional reason why he was able to figure it out...
How about The Hardy Boys? I read dozens of these as a young kid, and the thing that stands out in my mind now is that there was always an answer to the mystery, one that could be arrived at via clues and deduction. Looking back now, I think they had a major impact on my manner of thinking, reading them as young as I did (kindergarten and 1st grade, I’m talking), such that years later I was inclined to look favorably upon a ‘rationality technique’ when I encountered the idea of one on OB.
I also read lots of The Hardy Boys (original series) as a kid and loved them. I don’t know if they nudged me towards rationality or I liked them because I already felt the pull of rationality, but they were probably a strong influence now that I think about it.
“If You Give a Mouse a Cookie”
A good primer on chaos theory for youth?
“Rendezvous with Rama”
Why have a plot when gradual discovery with expository dialogue will do.
“Contact” -Sagan
More scientific method
“The Diamond Age” -Stephenson
This book even has long discussions of computing
“Sideways Stories from Wayside School”
Anything with jokes is going to be about logic at some point.
“Of Human Bondage” -Maugham
This book has a famous scene where Philip goes to Paris to study art; you get the impression that he isn’t very good at painting, and as time goes on he starts to recognize that his fellow students are not great painters either. After two years, he gradually builds up the courage to have one of his instructors look at all of his work and let him know whether or not he can achieve his goal of becoming a great painter. After he receives a negative verdict he commits to a new life plan.
I can’t believe I forgot Wayside School—those stories were brilliant. I wouldn’t say they taught me any particular technique of rationality, more a general lightness of thought. And they were, of course, a joy to read.
To some extent, I don’t believe that “realism”, to the point of realistic rationalist techniques, is important.
Encyclopedia Brown, Nancy Drew, The Boxcar Children, Sherlock Holmes may fail to convey rationalist techniques, but they certainly convey rationalist values and beliefs, like “intelligence is useful”, “observation is important”, “there is a right answer”. Enthusiasm for the subject is far more important to learning than a head start in knowing the subject.
On the subject of Encyclopedia Brown, a spoof.
Awesome. :)
Encyclopedia Brown is an especially bad example. Most of the mysteries he solves, he solves by knowing some piece of minor trivia which contradicts some off-hand statement of the criminal. This promotes “rationality” as “knowing a lot of facts”, which is absolutely not what we’re trying to promote here, and provides the wrong model of problem solving. Encyclopedia Brown is based on formal logic, not Bayesian probability.
Knowing lots of facts absolutely matters in the real world, though. Having a good theoretical framework to organize them, and logic (probabilistic or otherwise) to manipulate them, helps too—but not without real facts.
But just “intelligence is useful” takes people farther than many intelligent people get. Seriously.
Empirical data point: Read the Encyclopedia Brown books and liked them, was probably influenced to some degree or other.
“Encyclopedia Brown is based on formal logic, not Bayesian probability.”
1) Formal logic isn’t the wrong model. 2) Encyclopedia Brown doesn’t rely on logic except in a very trivial sense. 3) Encyclopedia Brown relies on an extensive knowledge of trivia which happen to become relevant; rather than being especially intelligent, he merely has an excellent memory and a rudimentary capacity for reason.
What is wrong with formal logic? Would the average fiction reader be harmed by becoming marginally better at formal logic?
I agree with your descriptions of the books. My point was that fiction celebrates various community values, and that some books celebrate rationalist values more than others.
Compare Encyclopedia Brown to Harry Potter. Both solve mysteries, but Harry Potter is explicitly skilled at sports and personal defense and explicitly incompetent at schoolwork.
If you require such specific rationality techniques as “Bayesian probability but not formal logic”, your kids will not have many books to read.
My favorite Encyclopedia Brown stories were the ones where he wasn’t solving mysteries, but where he was fooling the other children.
For example (and this is from memory so the details may be off), Brown wanted to make some money off the other children by running a gambling game, but he knew that he’d get in major trouble if he were caught doing such a thing. He asked an authority figure if he could just run a game where children paid money to win a toy randomly chosen by a spinner, and got (grudging) approval.
Then came his nifty idea: he’d only buy a few toys before the game started, so that he’d quickly run out. Once one of the children won a toy he had run out of, he’d (after some hesitation for show) give them the amount of money it would take to buy that toy at the store, then make them “promise” that they’d go and spend the money on the toy. He knew that most of the children would stay and put the money right back into the game (thus turning it into real gambling), but he had established a veneer of plausible deniability; how could he know whether they were spending from their own initial pocket money or from money they had won back?
I don’t know how much of an example of rationalism that is, but I still think it’s valuable for children to learn to think in terms of someone trying to game a system, as a third option beyond following the system strictly or breaking it outright. It’s useful later on when they find themselves needing to game systems, or to build systems that are hard to game.
I think you may be thinking of The Great Brain, not Encyclopedia Brown, there. Encyclopedia Brown was a boy of upstanding moral character, which meant The Great Brain was more fun to read.
Perhaps you’re right, it’s been a while since I read those books.
Or non-facts.
I haven’t read them, but I think it a bad sign that the tvtropes article “Conviction by Counterfactual Clue” is also known as “Encyclopedia Browned”.
Whenever EB catches somebody this way, I always read it as that he’s bluffing. After all, the perp always confesses when confronted with the alleged proof, so it really doesn’t matter how EB knows (psychological analysis, another clue that would be harder to explain, the knowledge that Bugs Meany always lies); he just has to wait around until he can find something that he can claim proves his case.
Greg Egan’s Orthogonal series, for the weaker criterion. Most of the characters are scientists; you get to see them struggling with problems and playing with new ideas that turn out not to work. That’s not the only rationalist aspect, though:

[about the future inhabitants of a generation ship] “They won’t feel as though they’re falling, they’ll feel the way they always feel. Only the old books will tell them there was something called ‘falling’ that felt the same.”

[while having to fast] “She… began [working through] the maze of obstructions that made it impossible to [open the cupboard] unless she was fully awake. Halfway through, she paused. Breaking the pattern [3 days eating, 1 day fasting] would set a precedent, inviting her to treat every fast day as a potential exception. Once the behaviour that she was trying to make routine and automatic had to be questioned over and over… [it would be] a dozen times harder.”

When launching their rocket, they make plans for every survivable catastrophe they can imagine, and practice them. They explain their ideas to each other: “Why aren’t we testing this on voles?” “We would need smaller needles, and we don’t have those.” One person talks about the difficulty of feeling urgency for a danger when evidence of it is not immediate, despite civilization being at stake when time is limited—perhaps, she says, she’s expecting too much of her animal brain.

The rationality is not constant, but it is there and noticeable. The characters really do think about things, and we get to see it.
(Postscript: This is my first comment. I wish I could have described more abstractly what the characters were doing, and why I think it counts as rationality. Even if my evidence fails to convince you that the books should be called “rationalist fiction”, you should still consider reading them; I think they are good books in their own right. I have not yet read the third in the trilogy, but would be surprised if it were much worse than the other two.)
The manga/anime series “Death Note”
It’s a long mental battle between two clever people, not much for rationality techniques, but characters think rationally, and the magical parts have well defined rules, similar to Lawrence Watt-Evans’ fiction.
I would be terribly thankful to anybody who could recommend me some more stories involving these sorts of fights. Trickery and betrayal are common enough, but a prolonged feud of this nature is rare.
Death Note is a brilliant anime, but not really that great an example of rationality. Tvtropes calls it Xanatos Roulette.
First you start with a smart plan. That can be rational. Then you complicate the plan. It makes characters look even smarter, and still quite rational. At some point the plan is so overcomplicated, so many uncertainties are just assumed, that it’s no longer rationality but plain omniscience and characters “knowing the script of future episodes”. That’s what Death Note is. Light and L overplot, and it’s really fun to watch, and they look really “smart” when it’s well done, but it’s way past any reasonable pretense of rationality.
TvTropes has more examples, like the Saw series. They’re all great fun, and not very rational.
I really liked the Death Note anime. However, I think it’s much more Sherlock-Holmes-ish than what Eliezer is asking for here. It’s been quite a long time since I saw it, but I remember at the time I was annoyed often when both the protagonist and the antagonist would make “very lucky guesses”, deducing something which is possible given the evidence at hand, but far from being the only possibility from said evidence. I haven’t read much Sherlock, but from what I’ve heard, Sherlock similarly makes amazingly lucky guesses. Certainly, EY’s summary of “magically finding the right clues and carrying out magically correct complicated chains of deduction” seems to indicate so.
I’d have to watch the Death Note series again to give any specific examples. Maybe I will do that, but probably not within the next month.
I’ve heard that complaint a lot, and I agree in the case of Sherlock Holmes, but Death Note seemed somehow plausible.
If you can remember it at all, do you think you could tell me specifically which parts you thought were “lucky guesses”? I like to keep those sorts of things in mind when re-reading.
Like I said, I don’t plan on rewatching the anime any time soon (and I don’t know how the anime differs from the manga). That said, if you’re serious about it, send me a private message and I’ll send you my MSN account so that you can nag me on there so I don’t forget to respond to this. =)
As far as I know, there is no private-message function built in to lesswrong. I prefer to maintain some level of anonymity anyway, and it would hardly be worth creating an account specifically for this purpose. I don’t care that much, though a general idea of which character does it or when would be appreciated.
All that aside, reading it made the whole thing move a lot faster, which probably contributed to the enjoyment, but otherwise I think they are fairly similar.
There actually is a private messaging thing built into LW, but it’s not obvious, and there’s no direct link to see incoming messages.
Go to http://lesswrong.com/message/inbox to see your inbox (which includes replies to your comments).
Also, if you click on someone else’s name, that is, so you can see a different person’s profile, then there will be a direct link available to message them. But, again, unless they’re actively checking, I don’t think they’ll have any obvious way to know that a message was sent to them.
EDIT: when I said there’s no direct link, I meant “there’s no obvious simple path from just clicking stuff on the front page of LW to get to your inbox”
Mina is a rather rational magical girl.
http://tvtropes.org/pmwiki/pmwiki.php/Main/ThirtyXanatosPileup
They specifically recommend Code Geass and The Dosadi Experiment.
http://tvtropes.org/pmwiki/pmwiki.php/Main/XanatosSpeedChess also mentions The Vor Game by Bujold.
I’ve spent a lot of time scouring tvtropes.org for something similar, Code Geass was one of the better ones.
Any particular reason to single those two out? I might give The Dosadi Experiment higher priority.
I don’t recommend The Dosadi Experiment as a good example of rationality; I explicitly de-recommend it.
The Vor Game, aside from being delightful, can be seen as a wonderful lesson in how setting priorities can be helpful, but it’s not about rationality, it’s about personal manipulation. One character groks another’s motivational structure and creates a situation that will make her “fall off the horse”, so to speak.
Vorkosigan works primarily through charisma and sub-conscious analysis. He’s not a rationalist in any particular sense.
Ditto for Death Note, though only the first season. The logic of a story is that the good guys will win in the end, which is not what you should necessarily expect in real life.
(spoilers)
The awesomeness of Death Note’s first season was not just in the decent instrumental rationality attributed to the characters (which gave me a very good impression), but also in that you couldn’t guess who would win. (Edited for spoilers)
Please edit this to remove everything after ”...couldn’t guess who would win”. We don’t have proper support for spoilers in comments, and saying “spoilers” isn’t enough.
(I don’t have facilities to edit your comment myself, just remove it.)
EDIT: Wasn’t edited after a bit, so banned, alas.
Seeing as there’s no obvious automated notification of replies, banning someone for not noticing a reply seems unfair.
He’s really just deleted the comment—for some reason the software uses the word “ban”. The commenter is still registered.
Yes, it’s unfair, yes, we should fix this at some point, but I deemed it more important to not spoil a unique anime.
I’m a little confused—what does it mean to ban a comment? I know how one can delete a comment, or ban a user, but I’ve never heard the words used this way before.
Delete, really. The button just says “ban” if you’re an administrator.
Ah, that’s good then. =)
So, what was the work of fiction that was mentioned in the post?
Death Note.
Any fiction that can’t stand up to spoilers isn’t worth reading. I would never recommend fiction that I haven’t reread, often many times—I’d rather reread a good (or even fair) book for relaxation than get irritated trying to read something that drags. And if you’re not reading it for relaxation, textbooks are better than any fiction.
If the story poses a puzzle for the reader, and the solution to the puzzle is given further down the plot, then spoilers can in fact reduce the enjoyability of the story. In Death Note, you can actually discover the flaw in the character’s plan yourself if you pause the video and think for a bit (although it helps that the protagonist’s intelligence is a bit...uneven. Most real people aren’t simultaneously stupid and smart like that). It’s also fun to arrive at the best possible strategy for each character...it’s pretty satisfying when you and the character independently arrive at the same conclusion.
This only applies to very tightly written stories of course.
Agree that fiction that relies solely on spoilers isn’t worth reading. Though I would not concur that textbooks are better than any fiction. Unless school has gotten waaaaaay better than I remember.
If you are not reading for relaxation, then you are probably reading for information; in that sense textbooks are better than fiction, since they have better presentation of the information in them.
Maybe some of Ted Chiang’s stories?
Also, I don’t remember them well and they’re probably more about irrationality techniques than rationality techniques, but Umberto Eco’s “The Name of the Rose” and “Foucault’s Pendulum” are fun and struck me as having a rationalist spirit or at least being food for rationalist thought.
Yes, irrationality techniques. The Name of the Rose shows how not to solve a mystery, and Foucault’s Pendulum shows how not to create a mystery. (These are not criticisms; this is deliberate.)
ETA: The Holmes figure of The Name of the Rose is supposed to be William of Ockham (even though Eco eventually made him a different person to maintain historical plausibility), and he does his best to be rational in the context of his culture. Nobody in Foucault’s Pendulum is especially rational, as far as I can recall, including the main characters; but people do apply this or that valid method of reasoning, and the main characters are intelligent and discuss (among others) rationalist philosophers. My favourite Eco novel is Baudolino, which contains a rationally solved murder mystery, even though most of it is not rationalist at all.
This is my favorite comment.
Thanks; although now that you drew my attention to it, I’ve added to it, and it’s no longer so pithy.
Economist Russ Roberts’ The Invisible Heart uses fiction to explain how to think logically about several economic issues. (I always assign the book to my intro micro students and they love it.)
The Dune series and Neal Stephenson’s latest novel Anathem both come to mind. The Dune series includes a number of plot devices involving mental discipline (although it’s all semi-mystical). The world of Anathem, on the other hand, is split into two factions, one of which is specifically rationalist. It gets pretty philosophical and weird toward the end, but it mostly involves rationalist characters using math/science/etc. to overcome the hurdles in their way. The world it describes sounds pretty similar to what I’ve read of Eliezer’s Bayesian Conspiracy.
I read Dune a while ago, but I can’t remember ever thinking that the characters were taking a rationalist approach.
Do you have any specific examples?
I suspect he’s thinking mainly of the Mentats and Bene Gesserit. The problem is the semi-mystical basis Herbert explicitly founded Dune on. If that were real, then the characters are more rational than they seem to a shallow reading. Part of the problem is that people have a bias against seeing people who disagree with them on their fundamental interpretations of reality as rational.
It would certainly be possible for the character to act very rationally within the internal logic of the world which they inhabit, even if that world isn’t the same as our own.
But I don’t particularly remember that from the book. I remember lots of politics and intrigue, but I think we need more than that to fit Eliezer’s criteria for “rationalist fiction”, otherwise let’s talk about John Grisham novels.
To be clear: I’m not saying Dune isn’t a rationalist book. I’m just asking for specific examples to refresh my memory.
I was indeed thinking of the Mentats and Bene Gesserit. As you both point out, there was a significant mystical aspect to it. I suppose I was thinking more of the approach taken to mental training (within the world’s internally consistent, but mystical, framework) rather than any specific techniques or events.
Mentats on the other hand have “minds developed to staggering heights of cognitive and analytical ability” (thanks Wikipedia) which would seem to fit the bill.
On the other hand, I suppose that neither of these instances are quite what Eliezer was after, as “you can’t go out and do it at home”.
The Bene Gesserit idea of “decide what’s wrong with the world, make the best plan you can to fix it, and follow it up dispassionately even if it takes ten millennia” seemed to me quite “grown-up” in the sense Eliezer uses the word.
As a bonus, they (correctly) reasoned that a good strategy for such a plan includes investigating and perfecting techniques for pushing the human body and mind to their limits. They also don’t shy away from using any advantage, including the gullibility of others—even going to the lengths of seeding religions with beliefs that will be useful a thousand years later. And everything they actually believe in does work*, even if not necessarily in the “obvious” way (I’m talking about their “witch” powers).
(*: within the logic of the books. Even the effects of the “Tarot” are falsifiable, in the Dune universe.)
The Bene Gesserit are magical witches when seen by less knowledgeable characters, but presented as simply formidable humans when the point of view is internal. This had a strong effect in me (at a greener age) of wanting to learn how to become formidable, instead of wishing for magical powers.
Let’s see those Bene Gesserit ditzes use Voice over a text-only chat.
I always thought the Ixians and Tleilaxu (who, it should be noted, can clone unlimited copies of the most powerful mentats they could find samples of) would have done much better in a fair Dune universe.
One thing I’ve never seen in these threads about rationalist literature is RPG handbooks. The 2nd Edition Dungeon Master’s Guide had an enormous influence on me, because it suggested that the world ran on understandable, deterministic rules, which could be applied both to explicate dramatic situations, and to predict the outcome of situations not yet seen.
One of the first things I ever did (I lacked friends to play D&D with) was to assign stats to fictional characters and make pre-existing stories I felt were unsatisfying play out in a more “realistic” manner—a better word would be internally consistent. But I felt very strongly after that point that it was logical to expect that, 9 times out of 10, the entity with the most advantages would come out on top, contrary to the manner of stories, although the dice-rolling kept total predestination at bay.
There was a large subplot involving a scientist who was trying to terraform Dune.
How about Jules Verne’s “The Mysterious Island”?
It doesn’t have any “rationality ninjas”, but the overall arc of “know your world, understand it, and force your will upon your future” seems a very valuable basis of thought for any rational person. This book was probably the most powerful root of that idea in my mind.
As a bonus, pretty much everything the characters do would actually work in real life.
(Actually, quite a few other stories by JV are really nice. This one is just the one most tightly connected in my mind with the basic idea of reason.)
Here is a (very paraphrased, non-spoiler) snippet from the beginning of “Margin of Profit” by Poul Anderson. The problem is that the evil space aliens are blockading a trade route, capturing the ships and crew of the trading ships. The Traders are meeting and deciding what to do.
Trader 1: Why don’t we just send in our space fleet and destroy them?
Trader 2: Revenge and violence are un-Christian thoughts. Also, they don’t pay very well, as it is hard to sell anything to a corpse. Anyway, getting that done would take a long time, and our customers would find other sources for the goods they need.
Trader 1: Why don’t we just arm our merchant ships?
Trader 2: You think I haven’t thought of that? We are already on shoestring margins as it is. If we make the ships more expensive, then we are operating at a loss.
(Wow, I write so much worse than Poul Anderson :-) The writing in the story is much better. The “un-Christian thoughts” line is one of my favorites).
This was one of the scenes that showed how to think logically through the consequences of seemingly good ideas (here, economic decision-making and long-term thinking). You can actually figure out the solution to the problem using one of the techniques that I’ve heard on OB (I don’t want to spoil it by saying which one).
Does this apply?
Good idea—I actually didn’t think of them until I read this, but many of Anderson’s Nicholas van Rijn and David Falkayn short stories would be good choices.
Satoshi Kon’s Paranoia Agent. Its relevance to rationality lies in its criticisms of various kinds of self-deception rather than positive accounts of methods of rationality. A variety of sub-themes orbit the core theme of self-deception: status-obsession, escapism, nostalgia for idealized imaginary past, hypocrisy, externalization of responsibility.
Note though that Paranoia Agent drives the message home not by didacticism and realistic portrayal of situations but by appeal to gut-level feelings and ample nightmare fueling.
How about Tom Godwin’s “The Cold Equations” for a rational horror story?
A great story. There is an interesting feminist analysis of it that claims that it is really about women trying to intrude on the rational worlds of science (and science fiction writing), and that they should be thrown out the airlock.
Actually, it’s just a dumb story—a society that wouldn’t even lock the storage closet or put a sign on the door saying, “If you stow away here, you will die” is one that doesn’t value human life much. (For that matter, if the fuel is so precisely calculated and scarce, why have a big empty supply closet on the ship?)
(Disclaimer: I read these critiques of the story elsewhere; don’t remember the source, though.)
That’s nitpicking. Of course the story is contrived. If you can think of a more plausible premise that would have had the same visceral punch and been easy enough to understand, speak now.
The point of the story is that sometimes there’s just no good way out of a situation, and the thing to do is face up to this and deal with reality on its own terms. Safety labels and supply closets are only incidental to this message. It could just as well apply to ugly tribal wars, or to medical triage, or to the fact that few things worth doing are perfectly safe.
I’ve seen an analysis of “The Cold Equations” which claimed there was no way to set up the plot so that you have to space someone because of simple physics—it would always be organizational failure.
It would be a rather different story if the theme was that organizations sometimes fail to set things up sensibly, and this leads to deaths.
And a quite interesting one if it were a matter of the odds rather than certainty—the stowaway costs enough fuel that there’s 10% chance that the rocket won’t deliver the medicine. Now what?
Dickson’s “Lost Dorsai” is close to that theme—mercenaries are trapped in a bad contract, and there just isn’t enough time to find the flaw which would lead to a good outcome.
I probably read the same comment on the story. There is also a film version done for the twilight zone of the 80s.
Why do engineers use safety margins? Because the unexpected happens. There might be some need to maneuver around something or to compensate for unforeseen movements of the target. If you really want to go cheap, then do away with the human pilot and the need for life support by just strapping the cargo onto a simple autopiloted rocket.
A similar problem was discussed on a Star Trek TOS episode, where one leader decided to kill half his population to have the reserves to feed the rest. I do not think that was necessary either.
As I said, a sign on the door or a lock would do.
I’m surprised David Gerrold’s “War Against The Chtorr” didn’t get mentioned, but it never did get widely read. It includes a “Global Ethics” class in the first book, and “Mode Training” in subsequent books, which have the character going through what is basically a rationality dojo—and there are numerous full sessions of these classes in the books. It’s where I got the idea that rationality was a teachable/learnable skill, and learned quite a few techniques for handling my biases, irrationalities, etc..
The core theme of the series is that Earth is being invaded by an alien biology attempting to terraform (or “Chtorraform” if you will) the planet. It starts with ~90% of the human race being wiped out (this isn’t a spoiler, this is really how it starts), and humanity having to win at any cost.
It also involves a lot of rational, knowledgeable people suffering at the hands of irrational people and petty bureaucrats. The alien ecology also refuses to yield to “best attempts”—information on the Chtorr is extremely hard to come by, and we’re only seeing a tiny portion of the puzzle, so a lot of theories collapse due to wrong assumptions—even ones that are major aspects of the plot. It very much avoids the classic “mystery novel” issue of having a single clear, easily determined answer, while at the same time making it clear that if humanity does not find the correct answer fast enough, it dies.
I really can’t think of anything that captures LessWrong better than that!
The Steerswoman series by Rosemary Kirstein. It’s unfinished, and at current rates of production and apparent rates of plot resolution, never-to-be-finished (sadly, I’m feeling more and more that way about the paradigmatic rationalist fiction as well). And each book averages slightly more than one credibility-straining premise. But the characters use some serious swashbuckling rationality. Chief among the “rationality techniques” these books teach are the social ones that underpin scientific progress: honesty, transparency, etc. Which is a nice counterbalance to all the individualist rationalism that you’ll get from most of the other suggestions here.
And there’s a bit about keeping track of one’s surroundings as a rationality skill. Now that I think about it, the same thing turns up in one of Chesterton’s Father Brown stories.
NO… SPOILERS.
I really hope that we all think that developing better techniques for rationality is more important than sparing people spoilers in their fiction.
Just rot13 the spoilers.
If you don’t spare people the spoilers, they don’t read your website. (That was my family tradition, anyway.)
We can do both, though. Does the benefit of including a spoiler in a discussion outweigh the harm of imposing a spoiler on someone?
Just rot13 the spoilers.
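(For anyone who hasn’t run into the convention: rot13 just rotates each letter thirteen places through the alphabet, so applying it a second time restores the original text. A minimal Python sketch using the standard-library codec; the spoiler text here is an invented placeholder, not an actual spoiler.)

    import codecs

    def spoiler(text: str) -> str:
        """Encode (or decode) a spoiler with rot13; running it twice round-trips."""
        return codecs.encode(text, "rot_13")

    print(spoiler("the butler did it"))           # gur ohgyre qvq vg
    print(spoiler(spoiler("the butler did it")))  # the butler did it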
I agree until the last paragraph. I seem to remember thinking that there was a way it could have been done better, and that I could excuse his error because he wasn’t overcoming an impossibility.
Unfortunately, I don’t remember how I thought to fix it.
Check out Erfworld. It starts off as a webcomic, moves to narrative for the parts that are best done in a narrative style, and then jumps back to webcomic format for battles. It hits all of those marks you mention, as well as a standard that I hold personally. I think that rationalist fiction is at its most compelling when it creates a world with new rules, and sends the empiricist out to learn them. It’s easier to show how to learn things empirically when there’s still low-hanging fruit to be plucked, and there have to be people in the story who don’t know things in order to show the advantages of knowledge.
One of the catchphrases that develops is, “We try things. Sometimes they even work.”
Discussed here
Not full-blown rationalist fiction, but I would like to mention The Salvation War (an ongoing trilogy of web novels) as promoting some rationalist values. It’s military fiction in which Earth is invaded by the legions of Hell and musters its armies to defend itself. Since it is set in the modern era, there is a fair amount of focus on science and evidence and the rejection of faith and dogma, and on how humanity’s curiosity and knowledge led to its gaining power in the form of technology and engineering. Sample passage below (note: “” is a character’s name, which I have removed for the sake of avoiding a spoiler).
Armageddon?
Pantheocide
Mildly disagree. It’s a military wank story cheerleading for rationalism/enlightenment/atheism, but it doesn’t do a particularly good job of promoting those values or showing what they are about. The characters are right because they have the author on their side, not because they reason well or are noticeably more sane than usual. It’s remarkably enjoyable for the sort of story it is, considering that the overwhelming disparity in power and the author favoritism kill much of the suspense, but I don’t think you’d learn good cognitive habits from it.
I found Piers Anthony’s novels, such as the Adept series, to be good preteen rationality books. His characters are constantly trying to figure out the world around them, describe their reasoning, make complicated (for preteen) moral decisions, and solve puzzles. I don’t think his books describe specific techniques that are useful, but they do portray “trying to figure out the world using reason” in a very positive light.
Isaac Asimov’s “The Black Widowers” short stories are very rationalist, especially the character of Henry the waiter, who often exemplifies the principle of Occam’s Razor. That is, when the rest of the group are coming up with wild and convoluted stories to explain the facts, Henry is usually the one to come up with the simplest and most obvious solution. Some of the stories are a little Holmesian (i.e. superhuman powers of deduction) but many are just a question of looking for prosaic ordinary answers instead of going overboard with speculation. In this vein, I particularly recommend:
The Acquisitive Chuckle
The Obvious Factor
Northwestward
All three of these are collected in The Return of the Black Widowers. “The Obvious Factor” in particular taught me a really important lesson in rationality I’ve used with significant effect over the years. It took me somewhat longer to learn the more subtle lesson of “Northwestward”—indeed I’m still working on that one—but if anything it’s even more important.
L. E. Modesitt has written consistently rational fiction, and improved steadily as a writer. He scores on both counts. http://en.wikipedia.org/wiki/L._E._Modesitt
I’m recommending Patricia Briggs’ fiction (epic fantasy earlier, and more recently paranormal romance) as rationalist in the mild sense—characters are apt to think about what they’re doing and it helps them win.
One sort of “rationalist” book is the kind whose story depends on a lot of plotting and strategizing. It’s one of the few ways that fiction can actually be challenging to understand in a technical sense. And it’s a way that you can show characters being actually smarter than the reader, and give the reader a sense of what it might feel like to be smarter.
Ender’s Game, Neal Stephenson’s Baroque Cycle, and the G.R.R. Martin books are what come to mind, though certainly any book where geopolitics is involved could be strategy-heavy. Narrative history, of course, has the same quality, especially military and diplomatic history.
For a long time, I actually avoided these books, because competitive strategy tended to confuse me. Now I’m starting to find it fascinating.
I second Stiegler’s novels. I didn’t think much of van Vogt’s Null-A novels—also, Gosseyn was a mutant; among other things, he had two brains.
Heinlein’s juveniles in general. Van Vogt’s “The Voyage of the Space Beagle”.
But Gosseyn wasn’t a mutant rationalist.
Heinlein shows some Traditional Rationality but doesn’t give you a sense that more is possible, apart from his one story showing how to train a mutant to be a better rationalist—I forget what the story was called, it’s the one with the supernova weapon.
“Gulf”, and he wasn’t a mutant, just on the far right of the bell curve. They were trying to breed a new human race from the extremely intelligent. The techniques used in that story were mostly based on General Semantics, like the Null-A novels, but took it in a different direction.
When I wrote “Heinlein’s juveniles”, I was thinking more of “Rocket Ship Galileo” and “Farmer in the Sky”, where the protagonist is learning the importance of thinking clearly and accurately.
I’m more than a little ashamed to admit I’m only reading this now, after writing about half of a first draft of what is nominally a piece of “Rationalist Fiction,” Erica’s Adventures In The Multiverse. I say nominally, because reading this I realized that I didn’t even know what “rationalist fiction” is, despite having read and loved HPMOR and despite having other, even more embarrassing reasons to school myself in this regard.
The good news is, I’m going through what I’ve written so far, and I think I can salvage what’s good about it while reconstructing what needs to be reconstructed to transform the thing from fake rationalist fiction to something hopefully worthy of the label. It’s invigorating, actually.
I enjoyed Singularity Sky (https://en.wikipedia.org/wiki/Singularity_Sky) and Entoverse (https://en.wikipedia.org/wiki/Giants_series). However, Entoverse may be a bit tiresome if you don’t like a novel to lecture you on laissez-faire.
Singularity Sky is fun SF, but I wouldn’t call it particularly rationalist—it’s mainly about the breakdown of hierarchical social structure on exposure to sufficiently advanced technology, which is something that I might expect to be interesting to LWers but isn’t directly within the site’s ambit. The narrative’s clearly on the side of Enlightenment values, technology, and social liberalism, and against rigid hierarchy and Luddism, but you can find those in a lot of places; similarly, the leads’ advantage is more cultural and technological than cognitive.
One of the leads does (spoilers:) jbex sbe jung’f vzcyvrq gb or, vs abg n Sevraqyl NV, gura ng yrnfg n abg-haSevraqyl bar, but that side of the plot’s not deeply explored.
The first two Gormenghast books, by Mervyn Peake, feel weirdly appropriate for me. They have a villain whose effectiveness depends on his rationality, when everyone around him lacks it. The more potent advocate for rationality, however, is the compelling depiction of a world devoid of it, where the shackles of convention enslave the castle’s populace.
I dunno, ymmv, but I implore you to give them a shot. They made a lasting impression on me, in a way that many books don’t.
To me, he just sounds like an extremized example of the attitude Jaynes describes in the first chapter of PT:TLoS (“Bayesianism as opposed to Aristotelianism and Wilsonism” as Yvain puts it). But then again, I read it many years before I read Jaynes, so I might just have been misremembering it.
Ralph Hayes Jr. is writing an anti-deathist My Little Pony fanfic: The Great Alicorn Hunt.
The geth in Mass Effect had a huge effect on me in this regard. I know that might sound crazy, but for me they definitely meet your criteria of “a powerful experience of using rationality.” Now, of course I can point out a few instances where the geth don’t exactly act very rationally, but the mere suggestion of such an existence blew my mind. I started wondering, in a way I had never done before, “What, really, IS the purpose of existence? If I were a geth, why would I bother to exist, let alone fight for my survival? Is the intention to survive really just purely irrational?” and so on and so forth. Imo, there is a direct line of causality from my exposure to the geth to my being on this very website.
Yay geth! :3
And this post from the ever useful Daily Kos makes several suggestions: http://www.dailykos.com/story/2011/06/12/981390/-Sci-Fi-Fantasy-Club:Magic-and-Fantasy?via=siderecent
It seems unlikely that there are going to be very many examples of fiction containing characters using an interesting technique of rationality since—as far as I know—the idea of a real-life “rationality technique” analogous to a martial art technique is original to you, Eliezer. An author might write a story about some group or individual achieving great things and tell us that this group or person studies rationality, but they won’t be able to describe rationality techniques in detail because the concept of rationality techniques didn’t exist before the establishment of Overcoming Bias. All they’re going to be able to do is talk vaguely about monasteries and Grand Master Rationalists. We ought to be careful about interpreting stories to suit our agenda because it would be so easy to fall into the trap of generalising from fictional evidence. Perhaps learning rationality in a monastery under the tutelage of a Grand Master is a terrible way to learn to Think Better, but if we’ve spent all our time discussing books that contain this trope, we might have trouble rejecting it as a bad idea.
(Stiegler has another novel called Earthweb which is about using prediction markets to defend the Earth from invading aliens, which was my introduction to the concept of prediction markets.)
Would it be rational to use prediction markets to defend the Earth from aliens? Using Rationality Tools in an irrational way turns them into Irrationality Tools. The writer is just presenting an idea, he has no way of knowing if that technique would work in that scenario, it might even make the situation worse (does that make it Irrationalist Fiction?)
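(Tangent for readers who haven’t met the mechanism: a prediction market just aggregates bets into a probability estimate, rewarding whoever turns out to be right. Below is a minimal Python sketch of Hanson’s logarithmic market scoring rule, a standard automated market maker for such markets; the outcomes, liquidity parameter, and trade sizes are invented for illustration and have nothing to do with Stiegler’s novel.)

    import math

    def lmsr_cost(quantities, b=100.0):
        """Cost function C(q) = b * ln(sum_i exp(q_i / b)) over outstanding shares q."""
        return b * math.log(sum(math.exp(q / b) for q in quantities))

    def lmsr_prices(quantities, b=100.0):
        """Current market probabilities: p_i = exp(q_i/b) / sum_j exp(q_j/b)."""
        weights = [math.exp(q / b) for q in quantities]
        total = sum(weights)
        return [w / total for w in weights]

    def buy(quantities, outcome, shares, b=100.0):
        """Price a purchase of `shares` of `outcome` as C(q_after) - C(q_before)."""
        after = list(quantities)
        after[outcome] += shares
        return after, lmsr_cost(after, b) - lmsr_cost(quantities, b)

    # Two hypothetical outcomes: invaders strike site A vs. site B.
    q = [0.0, 0.0]                         # no shares outstanding; prices start at 50/50
    q, cost = buy(q, 0, 60)                # a trader bets 60 shares on outcome A
    print(lmsr_prices(q), round(cost, 2))  # prices shift toward A; cost is what the trader pays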
It would seem that the existence of two distinct authors predating myself who wrote such fiction constitutes a counterexample to your reasoning.
It doesn’t explicitly promote rationality, but in terms of demonstrating rationalist virtues in action I believe that the Stargate franchise, or at least the parts I’m thinking of*, do a pretty good job. It strongly emphasizes curiosity, both for its own sake and in an instrumentalist knowledge = power sense. Secondly, there are large numbers of seemingly supernatural events that are reliably shown to be explainable by non-supernatural means. Admittedly the phlebotinum involved is sometimes advanced enough that it effectively works as a magical black box, but it’s made clear that this is a property of the protagonists’ understanding rather than inherent to the phlebotinum. Lastly and far from least, most characters actually learn from their own and others’ experiences and then modify their behavior accordingly. Furthermore, how good they are at this is one of the strongest factors in how effective they are. It does have some weaknesses (for example, the characters are somewhat prone to generalizing from fictional examples), but I think it comes out as a net positive.
*More or less all of SG-1, including the directed to DVD movies and the parts of Atlantis that I’ve actually seen.
I’ve counted nearly a dozen features of physics and technology that can be used to create clones in the Stargate universe. The most straightforward (and least ‘fantasy physics’ based) example is the robot clones of SG1 that were running around the universe for several years. Yet while the Stargate team have access to that kind of technology I don’t see 10,000 each of McKay and Carter working in a research lab. Nor do I see 1,000,000 clones of Daniel Jackson sitting in rooms meditating to ascension.
If the fate of the universe is at stake and you have the chance to become a demigod then you take it. If that isn’t enough then you make yourself into an entire pantheon. You utterly obliterate, neutralize or render laughably insignificant any threats.
Although I think you are on to something. There is one character in particular in Stargate that seems to behave rationally: Ba’al. And not just because he created an army of Ba’als and tried to take over the galaxy. There are all sorts of lessons that can be learned from him. Not least of which is the ability to cooperate effectively with people with vastly different objectives when it makes sense to do so.
One of the things I love about Naruto Shippuden is that there’s an episode where Naruto (whose signature move creates shadow clones of himself) does precisely this. It’s revealed in the episode that when the shadow clone jutsu ends, the consciousness of the clone merges with the caster, and so training can be parallelized—and because Naruto has more chakra than practically anyone else, he can do it the best.
Even better, there’s an in-universe reason why everyone doesn’t do this: most people can’t create more than three or four shadow clones at a time without killing themselves.
Mind you, Naruto Shippuden doesn’t exactly promote rational values...
TV Tropes has a decent list of forgotten technologies in Stargate here.
Thanks for the link! That first paragraph reminded me of HP: MoR!
Good point about the cloning tech. I still think that the protagonists act more rationally than most TV characters in that they generally don’t repeat mistakes, but that’s mostly me having very low expectations.
Undoubtedly. And the nature of storytelling prohibits too much rational thinking. It wouldn’t have been much of a story if they had made a considered rational judgement on the merits of using the two most valuable minds in the galaxy as part of an elite reconnaissance strike team!
I agree that they are good role models. O’Neill in particular is just what you want in a leader—perhaps more so than the Atlantis guys.
Perhaps the highest praise I can give for Stargate’s (approximately) rational characters is that I watched it without ever being utterly disgusted and abandoning it. That is a fairly rare occurrence. When characters behave irrationally, particularly when I get the feeling that the authors are trying to tell me that the stupid behavior is the right thing to do, it totally breaks the experience for me. I can no longer identify with the characters and all interest in the story/show/book fades away. Not even HP:MoR passed that test!
OOC—what did HP:MOR do that broke it for you?
I know exactly what you mean, although I don’t recall it happening during HPMOR (possibly because it’s been so long since I read it.) Out of curiosity, what caused it to fail this test for you?
First let me say that I hold HPMoR, being what it is, to a higher standard than I hold, say, The Dresden Files. HPMoR is in no small part a tract preaching a position regarding rationality. Harry Dresden doing something excessively deontological is a minor irritant, but MoR!Harry doing something irrational (if combined with narrator approval) is enough to make me put the fanfic aside for a few weeks till the foul taste dissipates.
The most notable example is the chapter when Hermione panics in response to finding out that Harry is experimenting with transfiguration. This is (so far, from what I have seen) given full endorsement as the sane thing for Hermione to do. But to me it seems insanely reckless. She runs in and disrupts an active transfiguration experiment by doing exactly the thing that could result in the potentially disastrous consequences. (I have most likely written up what Hermione could have done to actually mitigate risk once she noticed the danger.)
The morals of that chapter were (apparently):
Doing experiments like what Harry was doing is irresponsible. Don’t! (Good lesson.)
When you notice a threat you should PANIC! Take your fight-or-flight instincts, lock them on ‘fight’, and proceed to turn off all your higher brain functions. Don’t worry if your ‘solution’ is itself destructive, dangerous, or batshit insane. (Bad lesson.)
Ah, right. I assumed that was Hermione being Hermione, and not being endorsed just because Harry was, in fact, being irresponsible.
Thinking back, I think I was giving EY noticeably more in the way of “benefit of the doubt” than I usually would, probably because the rest had been so, well, rational. (That said, I was quite annoyed at the retcons made to reinforce the narrative of “The Enlightenment versus Death”.)
I hope no one takes this as advice to watch it (and in fact I do not advise watching any of the sci-fi TV shows I can think of right now), but my favorite thing about Stargate SG-1 is also how unobjectionable it is. Star Trek: The Next Generation was approximately as unobjectionable, but much less imaginative. (STNG was also written by very skilled entertainment professionals who had little respect for the geekier part of their audience and who were not sincerely trying to be illuminating or to “grapple with any issues”.)
Second longest-running sci fi show ever, says Wikipedia.
I don’t advise watching shows in general!
I do. A life without comedy or drama would probably be a mistake, and a limited number of carefully chosen TV shows and movies is the best way for most people to partake of them.
(Old thread, I know)
The second longest running sci-fi show is Doraemon. The first, of course, is Super Sentai.
Of course, “longest running sci-fi show” is like “tallest building”—you end up having to decide issues of which side you measure from if the building is on a hill, whether spires and antennas count, whether structures that are unoccupied are buildings, etc.
And I found Star Trek: The Next Generation much more objectionable. The episode that made me quit was the one where they thaw out 20th-century people and it turns out that the future people don’t have the concept of money, let alone capitalism. That seemed like a blatant attempt to sell ideology to the audience, and it’s not as if that was the only instance.
As for the cloning tech, one of the things that impressed me was that, at least in the early seasons of Stargate, they were very careful to ensure that whenever some strange device was introduced that would lead to questions of why they didn’t use it every other episode, the device was always destroyed, ran out of power, was unreproducible and in limited quantity, or was otherwise incapable of being used in future episodes. I can think of lots of cloning technology in the Stargate universe, but not much that’s freely available to use whenever they want, let alone in quantities of 10,000.
Funny, I’d always heard the longest-running sci-fi show was Doctor Who. Maybe it’s because DW went on hiatus for a while?
No, it’s because of the “is it the tallest building if the height is only greater when measured from the low side of the hill” question.
Doraemon is on its 35th year. It’s a Japanese cartoon about a robot cat from the future. Is that scifi? Moreover, it aired one season on a different network several years before its current run—does that count as part of the same show considering they used the same source material? Is a cartoon considered a “show” at all? (And is the Doctor Who year where they just had a couple of specials considered a year of the show?)
Super Sentai is on its 38th year and is the Japanese show used as source material for Power Rangers. Is that sci-fi? Each year they change the cast and part of the premise, but keep the general premise of five people in colored costumes who have giant transforming robots. Is that “a show” or several separate shows? (Bear in mind that no live-action show is going to last 38 years with the same people being the stars, anyway.) Is the answer changed by the fact that each show is referred to by the umbrella Super Sentai title as well as the title of the individual series? Is the answer changed by the existence of crossovers which feature both of the “separate” shows?
Also, both Doraemon and Super Sentai started later than Doctor Who but didn’t have large hiatuses. If you go by time since first episode, Doctor Who is longer, but it’s not really fair to count the 17 year hiatus as part of the length of the show.
Wow, thanks for that comprehensive response.
It’s possible I actually heard it referred to as the “oldest sci-fi show still running” or some such distinction; after all, if it makes your show sound important...
As for the definition of “sci-fi” and “show” … I’m willing to leave that up to whoever is trying to get attention for their favorite.
In other news, I learned about Super Sentai (the premise, the link to Power Rangers, etc.) just the other day, completely independently of your reference here, which would otherwise have mystified me. Funny how that often seems to happen.
Rationality and fooming are distinct concepts.
Not necessarily relevant. I read wedrifid’s comment as being less about fooming per se and more about what I might describe as the virtue of munchkinism: taking advantages available to you even when they conflict with implicit understandings about your role in life. We could debate to what extent that’s a core or necessary rational virtue, but I don’t think it’s very debatable that it is a rational virtue.
Exactly.
It’s not core or necessary unless, say, the aliens with superior technology and the evil gods are trying to kill you.
Actually, a lot of Stargate’s plots are about who has the idiot ball this week. In particular, their grasp of security is just terrible. They’ve never even heard of a duress code! I keep waiting for someone to do for Stargate’s security what MoR has done for HP’s rationality. Also, whoever does this should probably take the opportunity to play up the S&M subtext of Stargate (how many times can Teal’c be tied up and tortured?).
Yeah, that’s probably the thing about SG-1 that annoys me the most: despite how central the military is to the stories, the writers do not understand security, nor do they consider it important enough to have a consultant help them with it.
Stargate: Universe doesn’t meet the same standard, unfortunately. The authors try to set up Rush as a brilliant, ruthless Machiavellian rationalist but fail terribly. He is to a Machiavellian genius what Spock is to a rational agent. Utterly incompetent.
I’ve only seen about 2⁄3 of Universe so far, but I actually think Rush is being set up as brilliant at physics, but basically insane when dealing with humans. Which is not at all the same thing.
He was advertised consistently as “The ship’s brilliant Machiavellian scientist”—to the extent it was about the first thing mentioned in any description of the show. But that past tense is there to stay—the show flopped so he’s going nowhere now!
I highly recommend the works of Dan Brown, particularly Angels and Demons and Deception Point. I hesitate to say why I recommend them, because I can’t really do that without massive spoilers.
I will say this: just because the characters in the books are not particularly rational does not mean the story itself is not rational. These books have a very important lesson to teach, and even though the characters may or may not learn that lesson, an observant reader should.
Another way of alluding to what’s going on here: experience has taught us to reason about plots and attempt to unravel mysteries in fiction according to certain tropes. However, what’s frequently true in fiction is almost never true in real life. You will make more sense of these books if you come to the same conclusions from the scenes you read as you would were you to encounter those scenes in real life, rather than the conclusions you would usually come to in a standard work of suspense fiction.
I remember thinking well of Angels and Demons. The Da Vinci Code, however, was, at every turn, horribly implausible. And Digital Fortress runs purely on the Idiot Ball. The protagonist only distinguishes herself by being the person who carries the ball the least.
EDIT: Also, Angels & Demons came up incidentally in a Facebook discussion among religion/English-major friends of mine; Angels & Demons is nearly as jam-packed with completely false information as his other work. It turns out all of Dan Brown’s books are Dan Browned.
Dan Brown’s books are thinly veiled nonfiction conspiracy theorizing, aren’t they? He really believes all this stuff. Maybe that’s what causes this effect?
Well, they are suspense novels so yes, there are bad guys in the plot; and yes they are conspiring to certain ends. But I am aware of no evidence that Brown really believes this stuff any more than Neal Stephenson, for example, believes his stories.
Here’s what surprised me about these books. Read them. Pay careful attention to what is actually described; i.e. not what the characters infer from the events of the book but what the characters actually see. You can reason from there in two ways:
1) Base your conclusions on traditional suspense novel tropes.
2) Base your conclusions on what you would likely think if you saw these admittedly implausible events in real life.
One approach will leave you confused. One will not.
I was speculating on the cause of this effect, not disputing it.
As for whether he believes in these conspiracies … well, I know for a fact that they’re based on real conspiracy theories with actual proponents, although I’m not sure where I heard that he believes them himself. I suppose he might be simply using them as inspiration; I’ll try and track down my source on that.
ETA: a moment’s quick googling reveals that
This may be what I was thinking of.