That’s interesting, if true. Maybe the tokeniser was trained on a dataset that had been filtered for dirty words.