For what it’s worth, as someone with a lot of meditation experience and a longstanding interest in the topic, I didn’t get a GPT-3 vibe at all. To me, the whole thing registered as meaningful communication on a poorly-understood topic, with roughly appropriate levels of tentativeness and epistemic caution.
I’m left wondering if “sounding more like GPT-3” might be a common feature of attempts to communicate across large inferential distances with significant amounts of nonshared referents. How could one distinguish between “there’s no there there” and “there’s a there there but it’s inaccessible from my current vantage point”?
By having experienced meditators independently rate anonymous articles, some of them written by actual meditators, others written by people who don’t meditate but believe that they can generate the same kind of GPT-3-ish output?
Meditation Turing Test (MTT) [cf. Ideological Turing Test (ITT)], that’s great. 🤔
Yeah, that’d work for investigating the hypothesis and is interestingly similar to the theoretical ideal of the peer review process.
I was personally more curious about how to make that distinction in the general case, where reliable domain experts may not be recognizable or readily available, but that question may be rationality-complete.
Something like: there are a dozen groups of people who believe they are experts on meditation, each of them believing the other groups are wrong. How do you find the right ones?
In that case, perhaps instead of talking about “meditation”, we could define “meditation1” as whatever group1 believes, “meditation2” as whatever group2 believes… and then test independently which groups can be easily fooled by GPT-3.
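To make the proposed test a bit more concrete, here is a minimal sketch of how such a blind-rating experiment might be scored, assuming each rater group labels anonymized articles as “genuine” or “imitation” and we then check which groups beat chance. The group names, data structures, and numbers below are hypothetical illustrations, not anything specified in the discussion above.

```python
# Hypothetical sketch of scoring a "Meditation Turing Test":
# raters from each group blindly judge anonymized articles as genuine
# (written by an actual meditator) or imitation, and we compute how often
# each group's judgements match the hidden ground truth.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Rating:
    rater_group: str          # e.g. "group1", "group2" (hypothetical labels)
    article_is_genuine: bool  # hidden ground truth for the article
    judged_genuine: bool      # the rater's blind judgement


def accuracy_by_group(ratings: List[Rating]) -> Dict[str, float]:
    """Fraction of correct genuine/imitation judgements per rater group."""
    correct: Dict[str, int] = {}
    total: Dict[str, int] = {}
    for r in ratings:
        total[r.rater_group] = total.get(r.rater_group, 0) + 1
        if r.judged_genuine == r.article_is_genuine:
            correct[r.rater_group] = correct.get(r.rater_group, 0) + 1
    return {g: correct.get(g, 0) / total[g] for g in total}


if __name__ == "__main__":
    # Illustrative data: group1 mostly distinguishes genuine from imitation,
    # while group2 sits near chance, i.e. it is "easily fooled".
    ratings = [
        Rating("group1", True, True), Rating("group1", False, False),
        Rating("group1", True, True), Rating("group1", False, True),
        Rating("group2", True, False), Rating("group2", False, True),
        Rating("group2", True, True), Rating("group2", False, False),
    ]
    for group, acc in accuracy_by_group(ratings).items():
        print(f"{group}: {acc:.0%} correct")
```

Under this framing, a group whose accuracy stays near 50% would count as “easily fooled” in the sense above, while a group that reliably beats chance has at least demonstrated that its notion of meditation tracks something distinguishable from GPT-3-ish output.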