Can we stop abusing the word ‘rationality’? Please?
Eeer… okay, no problem, give me a restricted definition and I’ll keep to it… Basically I understand by it “mental hygiene”, “getting the map closer to the territory”, etc… Now that I check the main article, I don’t see how my words would constitute an abuse, so I think you’d have to be more specific on what worries you here.
I think the narrow definition of “rational” comes from game theory / decision theory, where decisions are usually assumed to be made by a hypothetical “rational agent”. When this is the sense you want, you sometimes (but not always) need to also specify a particular decision theory.
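A minimal sketch of that decision-theoretic sense, in notation that is mine rather than anything quoted in this thread: a “rational agent” is simply one that picks the action with the highest expected utility,

a^* = \arg\max_{a \in A} \sum_{o} P(o \mid a)\, U(o)

where A is the set of available actions, o ranges over possible outcomes, and U is the agent’s utility function. Specifying a particular decision theory mostly amounts to specifying how the conditional P(o \mid a) is to be evaluated (e.g. causally or evidentially).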
Moreover, in order to use instrumental rationality properly, you first have to use epistemic rationality (am I using the right word?). That is, if your agent wants to make decisions rationally, the first step he’ll take is to invest as much effort as is profitable into getting as precise a model of the situation as possible, then reason out the best possible plan given that information. In other words, a “rational agent” will automatically be a “rational knower”. Both forms of reason are therefore inseparable. I don’t know if we could go as far as saying they are one and the same, which would be the final step in justifying using the exact same word for both.
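The “as much effort as is profitable” step has a standard formal counterpart, sketched here in my own notation rather than the commenter’s: it pays to keep gathering evidence e (i.e. refining the model) only while the expected value of that information exceeds its cost,

\mathbb{E}_{e}\!\left[ \max_{a} \sum_{o} P(o \mid a, e)\, U(o) \right] - \max_{a} \sum_{o} P(o \mid a)\, U(o) \;\ge\; \mathrm{cost}(e)

Once that inequality fails, the rational move is to stop refining the model and act on the best plan found so far.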
Do we separate the word “reason” from the word “rationality” here? Using them as synonyms might cause conflicts in derivatives, such as “reasoning” vs. “thinking rationally” vs. “rationalizing”...
I love arguing about definitions. Alas, I don’t see how we can evade this task. Using the word “rationalism” already pisses off enough philosophers as it is. Not only does it have the same name as the philosophical current of “let’s write maps in the dark, we’ll still know stuff, the territory sucks anyway”, it also seems to imply that we practice the Art of Being Right (to be precise, the art of being as Less Wrong as possible at any given time, but still, that’s a pretty bold claim). If we called ourselves Conceptualists (which we are, basically) or some neologism, that’d be a little less troublesome, I think.
Don’t we have any philosophy majors among us?
I didn’t think we were disputing definitions.
It seems to me that your argument is essentially that expected-utility-maximization is indistinguishable from believing-truth. I don’t see any particular need to address or dispute that, as your error is elsewhere.
The original post starts with the rather telling words:
Do we really have to be rational all the time in order to teach rationality? breaking the rules of reality within the realm of a work of fiction
Accepting for the sake of argument your earlier tying-together of deciding-well and believing-well, you seem to be incorrectly assuming that believing true things is somehow at odds with writing fantasy fiction. I don’t think that believing truth necessarily requires saying truth, and if your audience understands that it’s fiction, there aren’t any ethical issues related to deceit either.
That’s fine if fiction is about pure escapism, but when fiction is used to convey a message about the real world, whether it’s an Aesop or, even worse, a teaching about how reality works on a physical level, that’s when I start feeling queasy.
I agree with this comment, but now I’m not sure what you were trying to get at in the original post.
(I’ll focus on just the first paragraph.)
If Miss Frizzle could do it, why couldn’t we? Do we really have to be rational all the time in order to teach rationality? breaking the rules of reality within the realm of a work of fiction and making the protagonists (or the audience if it’s a videogame) figure the new rules out for themselves… Actually, now that I think of it, videogamers are very used to adapting themselves to entirely new sets of physics on a weekly basis… but no-one has ever made them stop and think about it for a while, AFAIK.
I read something along the lines of this:
Counter to first intuition, telling stories set in a world whose natural laws differ from those of our own may sometimes be useful to the purpose of teaching worthwhile facts and skills.
I don’t see that this has anything to do with decision theory, so I wouldn’t have used the word “rational” to say it.
Hmm. When someone talks about being rational, I usually understand them to be talking about instrumental rationality rather than technically updating on evidence and such. When read in this way, it makes absolutely no sense to ask “Do we have to be rational when X?”
Instrumental rationality? You mean, like, making the optimum choices for the sake of reaching a goal or an inclusively preferred outcome?
Right.
“Rational”, the adjectival form, has horrible connotations for historical reasons, and is best avoided whenever possible. It’s also very vague and will inevitably be justifiably nit-picked and shot down.
Honestly, I have trouble understanding how Rationalism, the old version, even came into existence. What made those guys think they could write a map in the dark in their room and have it reveal some truth to them? Working that way may teach them how to compare maps and make them compatible, and otherwise improve their map-drawing skills, which might help a lot once they go outside, especially if they’ve been hypothesizing about unusual phenomena that the more practical, less theoretical mappers aren’t used to considering or even expecting. But as long as they remain shut in, what’s that knowledge worth?
Otherwise, if anyone starts mentioning the Holocaust being rational, it’s very easy to point out that the process was ridiculously suboptimal in nearly every possible way, and probably cost the Nazis the war.
I think old-school rationalism makes a lot more sense than it may at first appear. Your brain is the territory after all—in fact it’s the richest territory you’ll ever be able to explore, and you’re very intimately familiar with it, you’re intertwined with it and you have been forever. You can immediately test many hypotheses about it, you can change it this way or that, you can use it to reflect on itself and see what it says. I mean, you can feel it as you observe it. The crazy idea, that given enough time to hone yourself you might be able to bootstrap your way from such a rich local context to a much clearer understanding of the universe, doesn’t seem too crazy when put in that light.
I notice that I am confused by what you just said.
Hm, okay, I’ll rewrite it in a way that is more optimized for clarity. You said: “What made those guys think they could write a map in the dark in their room and have it reveal some truth to them?” And I thought: Well, they’re not exactly sitting in the dark. Developmentally, an 18th century Rationalist would have had a very cultured upbringing, would have been very intelligent, would have been at least somewhat predisposed to reflection, et cetera. Evolutionarily, brains are formed with innate heuristics and categories for reasoning about the world, some very basic, and some fascinatingly complex, for example archetypes. That is a lot of information you get ‘for free’ as an 18th century Rationalist, without ever having to leave your cellar.
But it’s more than that. Brains are incredibly powerful and can do many things. You can rewrite their algorithms; you can store ideas in them and come back to them later; you can combine pieces of information already inside them in new and interesting ways, and repeat that process again and again, combinatorially; you can feel how it feels to think one way, then feel how it feels to think a different way, and build up rich qualia for concepts, their manipulations, and concept structures; you can develop mathematics to help you reason formally about the relations between those concepts; you can develop information theory, you can develop category theory, you can develop statistical mechanics and probability theory.
“But,” you might say, “though that may be theoretically possible, it should have been obvious that pragmatically speaking it is best to go out and experiment with the world as well as moving things around in your head.” However, it is important to note that the famous Rationalists came before the dawn of Newton and therefore the dawn of discoverable universals; the last famous Rationalist, and indeed the one I find most interesting, was Leibniz, whose monads are an awful lot like computer programs and whose God is an awful lot like a programmer with infinite resources. (Leibniz is often thought of as the first computer scientist. I assure you that you will find this paragraph impressive.) It’s worth noting that Leibniz was a Rationalist and a Monist, and, of course, a physicist and an engineer.
It would be a caricature to portray Rationalists as drawing up maps in the dark. Rather, the position they had in common was their realization of how powerful minds can become when trained, and that Reason is vitally important for understanding the world. Less Wrong is directly descended from that memetic lineage.
Oh. That’s not the standard definition of “Rationalism” I’ve heard. You make it sound like “overthinking stuff you know about in the dark will net you more knowledge”, while the definition I’m familiar with is “get an empty mind with no experience of anything besides itself and it will still be able to, say, come up with Math and work from there to make an awesome edifice that’s never had an ounce of sensory experience in it”, which is patently ridiculous.
I wonder now if that was a straw-man or Theme Park Version.