Great illustration. For vector similarity, I recommend cosine similarity, and if you want the measure to extend to infinity, taking the (negative) logarithm of a value between 0 and 1 works well. However, I guess your model is pretty arbitrary already, so I can't vouch that my suggestion would actually improve your model's predictiveness.
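A minimal sketch of what I mean (plain NumPy, toy 2-D vectors; the rescaling of cosine similarity into (0, 1] before the log is my own choice of construction, not something from your model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Direction-only similarity in [-1, 1]; vector length is divided out.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def neg_log_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Rescale cosine similarity from [-1, 1] into (0, 1], then take -log:
    # identical directions give 0, opposed directions diverge to infinity.
    sim01 = (cosine_similarity(a, b) + 1.0) / 2.0
    return float(-np.log(sim01))

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])
print(cosine_similarity(a, b))   # ~0.707 (vectors 45 degrees apart)
print(neg_log_similarity(a, b))  # ~0.158; grows without bound as vectors oppose
```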
Disclaimer: since you’re modeling intensity of belief, cosine similarity (which ignores vector length) alone wouldn’t even apply. I guess since you’re using the dot product already, my suggestion missed the mark. It does seem, though, that by squaring you’re treating diametrically opposed vectors the same as collinear ones (a negative dot product becomes positive when squared).
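To make that sign loss concrete, a tiny demonstration (toy 2-D vectors of my own choosing):

```python
import numpy as np

a = np.array([1.0, 0.0])
same = np.array([2.0, 0.0])      # collinear, same direction
opposed = np.array([-2.0, 0.0])  # diametrically opposed

print(np.dot(a, same), np.dot(a, opposed))        # 2.0 -2.0: the sign separates them
print(np.dot(a, same)**2, np.dot(a, opposed)**2)  # 4.0  4.0: squaring erases the sign
```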