Recently I ran an experiment using the code from the Geometry of Truth paper to see whether simple label words like “true” and “false” could substitute for the labeled datasets used to create truth probes. I also tried out a truth-probe algorithm that classifies an activation by which class-mean vector it has the higher cosine similarity to.
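To make the label-word idea concrete, here is a minimal sketch (the vectors and dimensions are toy stand-ins, not the paper's actual hidden-state activations or any real model's embeddings): instead of fitting a probe direction on a labeled dataset, the direction is just the difference between the model's embedding vectors for the words “true” and “false”.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden size; real models use thousands of dimensions

# Hypothetical stand-ins for the model's embeddings of the label words.
true_vec = rng.normal(size=d)
false_vec = rng.normal(size=d)

# The probe direction is simply the difference of the label-word vectors,
# with no training data involved.
direction = true_vec - false_vec

def label_word_probe(activation: np.ndarray) -> bool:
    """Classify an activation as 'true' if it projects positively
    onto the true-minus-false direction."""
    return float(activation @ direction) > 0.0

# Sanity check with activations aligned / anti-aligned with the direction.
assert label_word_probe(direction)
assert not label_word_probe(-direction)
```

In the real experiment the activations come from statements fed through the model, and the question is whether this zero-shot direction separates true from false statements as well as a dataset-fitted one.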
Initial results suggested that the label-word vectors were passable, though not nearly as good: around 70% accuracy rather than the 95%+ achieved with the datasets. However, on harder test sets accuracy dropped much further, sometimes below chance, oddly enough. So I can fairly safely conclude that the label-word vectors alone aren’t sufficient for a good truth probe.
Interestingly, the cosine-similarity approach performed almost identically to the mass-mean (a.k.a. difference-in-means) approach used in the paper. Unlike the mass-mean approach, though, the cosine-similarity approach extends naturally to multi-class settings. Then again, logistic regression extends similarly, so it may not be particularly useful either, and I’m not sure there’s even a use case for a multi-class truth probe.
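For reference, the cosine-similarity probe in its multi-class form can be sketched as follows. This is a toy reconstruction on synthetic clustered data, not the paper's code or datasets: fit one mean vector per class, then assign each point to the class whose mean it is most cosine-similar to.

```python
import numpy as np

def fit_means(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Mean activation per class; row k is the mean of class k."""
    return np.stack([X[y == k].mean(axis=0) for k in np.unique(y)])

def cosine_probe(X: np.ndarray, means: np.ndarray) -> np.ndarray:
    """Assign each row of X to the class whose mean vector has the
    highest cosine similarity with it."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Mn = means / np.linalg.norm(means, axis=1, keepdims=True)
    return np.argmax(Xn @ Mn.T, axis=1)

# Toy 3-class data: tight clusters around three orthogonal centers.
rng = np.random.default_rng(0)
centers = np.eye(3) * 5.0
y = rng.integers(0, 3, size=300)
X = centers[y] + rng.normal(scale=0.5, size=(300, 3))

means = fit_means(X, y)
preds = cosine_probe(X, means)
accuracy = (preds == y).mean()
```

In the two-class case this differs from the mass-mean probe mainly in normalization: mass-mean projects onto the single difference-of-means direction, while this compares angles to each mean separately, which is what lets it generalize past two classes.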
Anyway, I just thought I’d write up the results here in the unlikely event someone finds this kind of negative result useful.