I think the narrow sense of “rational” comes from game theory / decision theory, where decisions are usually assumed to be made by a hypothetical “rational agent”. When this is the sense you want, you sometimes (but not always) need to also specify a particular decision theory.
Moreover, in order to use instrumental rationality properly, you first have to use epistemic rationality (am I using the right word?). That is, if your agent wants to make decisions rationally, the first step he’ll take is to invest as much effort as is profitable into getting as precise a model of the situation as possible, then reason out the best possible plan given that information. In other words, a “rational agent” will automatically be a “rational knower”. The two forms of reason are therefore inseparable. I don’t know if we could go as far as saying they are one and the same, which would be the final step in justifying using the exact same word for both.
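The two-step picture above (first form accurate beliefs, then pick the best plan given them) can be sketched as a toy program. This is a minimal, hypothetical illustration, not anything from the thread: the “epistemic” step is a Bayesian update, the “instrumental” step is expected-utility maximization; the rain/umbrella world and all the numbers are made up for the example.

```python
def posterior(prior, likelihoods, observation):
    """Epistemic step: Bayesian update, P(state | obs) proportional to P(obs | state) * P(state)."""
    unnorm = {s: prior[s] * likelihoods[s][observation] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

def best_action(beliefs, utility, actions):
    """Instrumental step: choose the action maximizing expected utility under the beliefs."""
    def expected_utility(a):
        return sum(p * utility(a, s) for s, p in beliefs.items())
    return max(actions, key=expected_utility)

# Toy world: is it raining or sunny? The agent observes "wet" ground.
prior = {"rain": 0.3, "sun": 0.7}
likelihoods = {"rain": {"wet": 0.9, "dry": 0.1},
               "sun": {"wet": 0.2, "dry": 0.8}}
beliefs = posterior(prior, likelihoods, "wet")

def utility(action, state):
    # Invented payoffs: carrying an umbrella pays off in rain, costs a little in sun.
    table = {("umbrella", "rain"): 1.0, ("umbrella", "sun"): -0.1,
             ("no umbrella", "rain"): -1.0, ("no umbrella", "sun"): 0.5}
    return table[(action, state)]

print(best_action(beliefs, utility, ["umbrella", "no umbrella"]))  # -> umbrella
```

The point of the sketch is that `best_action` is only as good as the `beliefs` fed into it: the instrumental step presupposes the epistemic one, which is exactly the inseparability claimed above.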
Do we separate the word “reason” from the word “rationality” here? Using them as synonyms might cause conflicts in derivatives, such as “reasoning” vs. “thinking rationally” vs. “rationalizing”...
I love arguing about definitions. Alas, I don’t see how we can evade this task. Using the word “rationalism” already pisses off enough philosophers as it is. Not only does it share its name with the philosophical current of “let’s write maps in the dark, we’ll still know stuff, the territory sucks anyway”, it also seems to imply that we practice the Art of Being Right (to be precise, the art of being as Less Wrong as possible at any given time, but still, that’s a pretty bold claim). If we called ourselves Conceptualists (which we are, basically) or some neologism, that’d be a little less troublesome, I think.
It seems to me that your argument is essentially that expected-utility-maximization is indistinguishable from believing-truth. I don’t see any particular need to address or dispute that, as your error is elsewhere.
The original post starts with the rather telling words:
Do we really have to be rational all the time in order to teach rationality? Breaking the rules of reality within the realm of a work of fiction…
Accepting for the sake of argument your earlier tying-together of deciding-well and believing-well, you seem to be incorrectly assuming that believing true things is somehow at odds with writing fantasy fiction. I don’t think that believing truth necessarily requires saying truth, and if your audience understands that it’s fiction, there aren’t any ethical issues related to deceit either.
That’s fine if fiction is about pure escapism, but when fiction is used to convey a message about the real world (whether it’s an Aesop or, even worse, a teaching about how reality works on a physical level), that’s when I feel queasy.
I agree with this comment, but now I’m not sure what you were trying to get at in the original post.
(I’ll focus on just the first paragraph.)
If Miss Frizzle could do it, why couldn’t we? Do we really have to be rational all the time in order to teach rationality? Breaking the rules of reality within the realm of a work of fiction and making the protagonists (or the audience, if it’s a videogame) figure the new rules out for themselves… Actually, now that I think of it, videogamers are very used to adapting themselves to entirely new sets of physics on a weekly basis… but no one has ever made them stop and think about it for a while, AFAIK.
I read something along the lines of this:
Counter to first intuition, telling stories set in a world whose natural laws differ from our own may sometimes be useful for teaching worthwhile facts and skills.
I don’t see that this has anything to do with decision theory, so I wouldn’t have used the word “rational” to say it.
Don’t we have any philosophy majors among us?
I didn’t think we were disputing definitions.