Going from memory: the hit rate of those metaphors is higher than someone would naively expect (wow, "king" − "queen" is nearly the same vector as "man" − "woman"!) but lower than you might expect after hearing that it works some of the time ("king" − "queen" isn't the same vector as "prince" − "princess"? Weird.). I remember playing around with them and being surprised that some of them worked while other, similar ones didn't. This Stack Exchange post suggests a hit rate of about 70%. Again, I want to emphasize that this is from memory and I'm not sure about the exact examples I used: I don't have a word2vec embedding downloaded to check them, and presumably the results depend on the training data and model parameters.
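
If you want to check for yourself, here's a minimal sketch of how one could test these analogies, assuming the `gensim` package and its pretrained Google News word2vec model (both my choices here, not anything specific the original experiments used; the hit rate will vary with the model):

```python
# Sketch: testing word2vec analogy arithmetic with gensim's pretrained vectors.
# Assumes `pip install gensim`; the model is a ~1.6 GB download on first use.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

def analogy_holds(a, b, c, expected, topn=1):
    """Check whether vec(b) - vec(a) + vec(c) lands nearest to `expected`."""
    hits = model.most_similar(positive=[b, c], negative=[a], topn=topn)
    return any(word == expected for word, _ in hits)

# king - man + woman ≈ queen?  (the canonical example)
print(analogy_holds("man", "king", "woman", "queen"))
# king - prince + princess ≈ queen?  (a similar analogy that may fail)
print(analogy_holds("prince", "king", "princess", "queen"))
```

One caveat: gensim's `most_similar` excludes the query words themselves from the results, which slightly inflates the apparent hit rate compared to a raw nearest-neighbor lookup.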