Encyclopedia Brown is an especially bad example. Most of the mysteries he solves, he solves by knowing some piece of minor trivia which contradicts some off-hand statement of the criminal. This promotes “rationality” as “knowing a lot of facts”, which is absolutely not what we’re trying to promote here, and provides the wrong model of problem solving. Encyclopedia Brown is based on formal logic, not Bayesian probability.
Knowing lots of facts absolutely matters in the real world, though. Having a good theoretical framework to organize them, and logic (probabilistic or otherwise) to manipulate them, helps too, but not without real facts.
But just “intelligence is useful” takes people farther than many intelligent people get. Seriously.
Empirical data point: I read the Encyclopedia Brown books and liked them, and was probably influenced to some degree or other.
“Encyclopedia Brown is based on formal logic, not Bayesian probability.”
1) Formal logic isn’t the wrong model.
2) Encyclopedia Brown doesn’t rely on logic except in a very trivial sense.
3) Encyclopedia Brown relies on an extensive knowledge of trivia which happen to become relevant; rather than being especially intelligent, he merely has an excellent memory and a rudimentary capacity for reason.
What is wrong with formal logic? Would the average fiction reader be harmed by becoming marginally better at formal logic?
I agree with your descriptions of the books. My point was that fiction celebrates various community values, and that some books celebrate rationalist values more than others.
Compare Encyclopedia Brown to Harry Potter. Both solve mysteries, but Harry Potter is explicitly skilled at sports and personal defense and explicitly incompetent at schoolwork.
If you require such specific rationality techniques as “Bayesian probability but not formal logic”, your kids will not have many books to read.
My favorite Encyclopedia Brown stories were the ones where he wasn’t solving mysteries, but where he was fooling the other children.
For example (and this is from memory so the details may be off), Brown wanted to make some money off the other children by running a gambling game, but he knew that he’d get in major trouble if he were caught doing such a thing. He asked an authority figure if he could just run a game where children paid money to win a toy randomly chosen by a spinner, and got (grudging) approval.
Then came his nifty idea: he’d only buy a few toys before the game started, so that he’d quickly run out. Once one of the children won a toy he had run out of, he’d (after some hesitation for show) give them the amount of money it would take to buy that toy at the store, then make them “promise” that they’d go and spend the money on the toy. He knew that most of the children would stay and put the money right back into the game (thus turning it into real gambling), but he had established a veneer of plausible deniability; how could he know whether they were spending their own initial pocket money or money they had won back?
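In case the economics aren’t obvious, here’s a minimal sketch of why the scheme pays. Every number in it (spin price, prize value, odds, how often a “winner” re-gambles) is my own invented assumption, since the book gives none; the point is just that once the cash payouts go back into the game, it’s ordinary gambling with a house edge.

```python
import random

# Toy simulation of the spinner scheme as described above. Every number is a
# made-up assumption; the mechanism is what matters: once winners put their
# cash payouts back into play, Brown keeps a house edge of
# SPIN_COST - WIN_PROB * PRIZE_VALUE cents per spin.

SPIN_COST = 5      # cents per spin (assumed)
PRIZE_VALUE = 10   # store price of a toy, paid in cash once toys "run out" (assumed)
WIN_PROB = 0.25    # chance a spin lands on a winning slot (assumed)
REBET_PROB = 0.9   # chance a "winner" re-gambles the payout (assumed)

def simulate(n_children=50, pocket_money=50, seed=0):
    rng = random.Random(seed)
    take = 0  # Brown's net take, in cents
    for _ in range(n_children):
        pocket = pocket_money
        playing = True
        while playing and pocket >= SPIN_COST:
            pocket -= SPIN_COST
            take += SPIN_COST
            if rng.random() < WIN_PROB:
                take -= PRIZE_VALUE          # toy is "out of stock", so pay cash
                if rng.random() < REBET_PROB:
                    pocket += PRIZE_VALUE    # money goes straight back into the game
                else:
                    playing = False          # the rare kid who actually walks away
    return take

if __name__ == "__main__":
    print(f"Brown's net take: {simulate()} cents")
```

With these made-up numbers the edge is 5 − 0.25 × 10 = 2.5 cents per spin, so Brown comes out ahead as long as most winners stay and keep spinning.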
I don’t know how much of an example of rationalism that is, but I still think it’s valuable for children to learn to think in terms of someone trying to game a system, as a third option beyond following the system strictly or breaking it outright. It’s useful later on when they find themselves needing to game systems, or to build systems that are hard to game.
I think you may be thinking of The Great Brain, not Encyclopedia Brown, there. Encyclopedia Brown was a boy of upstanding moral character, which meant The Great Brain was more fun to read.
Perhaps you’re right, it’s been a while since I read those books.
Or non-facts.
I haven’t read them, but I think it a bad sign that the tvtropes article “Conviction by Counterfactual Clue” is also known as “Encyclopedia Browned”.
Whenever EB catches somebody this way, I always read it as him bluffing. After all, the perp always confesses when confronted with the alleged proof, so it really doesn’t matter how EB knows (psychological analysis, another clue that would be harder to explain, the knowledge that Bugs Meany always lies); he just has to wait around until he can find something that he can claim proves his case.