And, if the vast majority of the books in the library are Chinese, then I actually know very little about the “typical” book in the library.
Yes, I agree. In this particular case, though, we have no idea whether your “if” clause is satisfied, or what the proportion of English to Chinese books really is.
To extend the analogy to my previous post, where I explained that the ceiling on success rate is actually rather low: most of the books you read either burst into flame when you read them, or their text disappears or turns into gibberish. Sometimes, even forensic inspection can’t tell you what language a book was originally in.
All you can know is that learning English helps you read some of the books in the library. Absent knowledge of what was in the text that was destroyed before you could read it, you have no idea of the typicality or atypicality of the English books you are capable of reading. Yet if your forensic inspection of the destroyed books reveals more English characters than Chinese characters, or you have some additional theoretical or empirical knowledge of the distribution of languages in the books, then you may have to revise your estimate of the proportion of English books upward. (This assumes that the hypotheses of books being in English or Chinese are both locatable.)
Even if your estimate is wrong, it can still be very valuable to know how to read the typical English book in the library, especially if the alternative is not being able to read any.
You still know very little, of course, about the population of books (or people) you are trying to model. Yet in the case of people, you are often faced with competing hypotheses about how to behave, and even a small preference for one hypothesis over another can have great practical significance. That’s why, stereotypically, we see women picking over their interactions with men with their female friends, and PUAs doing exactly the same thing on internet forums. They have tough decisions to make under uncertainty.
Do a preference for one theory over another, and seeming practical results, mean that the preferred theory is “true”? I think we both agree: no. That’s naive realism. Yet when you are engaged in discussion of a practical subject, it’s easy to slip from language about what works to language about what is true, and adopt a pragmatic notion of truth in that context.
As I’ve mentioned before, PUAs do fall into naive realism a lot. While there are ceilings on what the mass-anecdotal experience of PUAs can show us about epistemic rationality, there is a lot it can show us about instrumental rationality. How to be instrumentally successful when the conclusions of epistemic rationality are up in the air is an interesting subject.
I’m not a PUArtist; I’m a PUInstrumentalist about PU models. Yet when I see a theory (or a particular hypothesis within a theory) working so spectacularly well, and the data that deviates from it generally seems to have an explanation consistent with the theory, and the theory lets me predict novel facts, and it is consistent with psychological research and theories on the topic… then it sometimes makes me wonder if my instrumentalist attitude of suspended judgment on the truth of that theory is a little airy-fairy.
I doubt that PUA models are literally highly probable in their totality, yet I hold that particular hypotheses in those models are reasonable even when fueled only by anecdotal evidence, and that with certain minor transformations, the models themselves could be turned into something that has a chance of being literally highly probable.
Well put. This is a good delineation of the issues.