It would still be astonishing that GPT-J would pick up this pattern.
Why would all of these euphemisms cancel out at the centroid, and not any of a thousand other things that use euphemisms and metaphors? Any plain sarcasm or irony would do.
It might be astonishing, but this is fundamentally how word embeddings work: by modelling the co-distribution of words and expressions. You know the “nudge, nudge, you know what I mean” Monty Python sketch? Try appending “if you know what I mean” to the end of random sentences.
There is more than one possibility when you append “if you know what I mean” to the end of a random sentence:
Sexual innuendos.
Illicit activities or behaviors.
Inside jokes or references understood only by a specific group.
Subtle insults or mocking.
Sure, the first is the strongest, but the others would move the centroid away from “phallus”. The centroid is not at the most likely item but at the average.
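The geometry of that last point can be sketched numerically. The vectors and weights below are purely hypothetical, chosen only to illustrate that a weighted average lands between clusters rather than on the most frequent one:

```python
import numpy as np

# Toy 2-D "embedding" vectors for the readings listed above
# (hypothetical coordinates, just to illustrate the geometry):
senses = {
    "sexual innuendo":  np.array([1.0, 0.0]),
    "illicit activity": np.array([0.0, 1.0]),
    "inside joke":      np.array([-0.5, 0.5]),
    "subtle insult":    np.array([-0.5, -0.5]),
}
# Assumed frequency weights: the first reading dominates,
# but is not the only one.
weights = np.array([0.6, 0.2, 0.1, 0.1])

vectors = np.stack(list(senses.values()))
centroid = weights @ vectors  # weighted average, NOT the argmax

print(centroid)  # lands between the clusters, not on the top sense
```

Even with a 60% weight on the dominant reading, the centroid sits noticeably away from that reading's vector, which is the point about the average versus the most likely item.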
I’d guess that it’s related specifically to “thing” being a euphemism for penis, as opposed to some broader generalization about euphemisms.
“thing” wasn’t part of the prompt.