Of course, this makes “the meaning was imparted originally, rather than being chosen post-hoc” into an unfalsifiable position.
So, I can imagine a scenario where Yoda says cryptic statement X, and then privately expands on X to make it specific in a letter, and then a month later the person says “I finally get X,” and then Yoda’s letter is opened and checked. That is, Yoda could know exactly what path the student will go down in response to the cryptic statement, and we could test that ahead of time by showing Yoda lots of students and getting him to write down lots of predictions.
But you’re correct that Yoda could inflate his statistics by saying one common deep thing, and then when students come back with twenty specific epiphanies, saying “yes, that specific epiphany was caused by my deep statement,” even though he doesn’t have the ability to predict which student will have which epiphany. (Especially when it comes to reading ancient works, we only have our retrospective predictions and judgments!)
So, I can imagine a scenario where Yoda says cryptic statement X, and then privately expands on X to make it specific in a letter, and then a month later the person says “I finally get X,” and then Yoda’s letter is opened and checked.
I can imagine it too, just not with actual Yodas. (Well, Yoda is fictional, but you know what I mean.) I could even generalize this beyond Yoda: Most people won’t help you figure out whether they’re saying something meaningful or just spouting hot air. This is especially true for the ones that produce the most hot air—after all, they don’t want you to know that they’re doing it. Helping you know, by giving you sealed envelopes or anything else, would defeat the purpose of doing it.
Even the ones that produce hot air out of sincere ignorance probably don’t want you to be able to figure out whether they are spouting hot air—anyone who did would get selected out of existence.