Asserting LLMs’ views/opinions should exclude sampling (even with temperature=0 and a deterministic seed); we should instead look at the answer distribution in the logits. My thesis on why this isn’t best practice yet: the OpenAI API only supports logit_bias, not reading the probabilities directly.
This should work well with pre-set A/B/C/D choices, and to some extent with chain/tree of thought too: you’d just roll back to the final answer token and look at the probabilities in the last forward-pass step.
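A minimal sketch of the idea, assuming we already have the model’s next-token logits in hand (here they are hard-coded placeholders; in practice they would come from a local model’s final forward pass, since the hosted API doesn’t expose them directly). We restrict the logits to the answer tokens and renormalize, rather than sampling a single answer:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of logits.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def choice_distribution(logits, choice_token_ids):
    # Restrict the next-token logits to the pre-set answer tokens
    # (e.g. A/B/C/D) and renormalize into a probability distribution.
    restricted = [logits[t] for t in choice_token_ids]
    return softmax(restricted)

# Hypothetical next-token logits keyed by token id; stand-ins for a
# real model's output at the final step.
logits = {101: 2.0, 102: 1.0, 103: 0.5, 104: -1.0}  # tokens for "A".."D"
dist = choice_distribution(logits, [101, 102, 103, 104])
```

The point is that `dist` gives you the model’s full weighting over the choices in one pass, where sampling would only ever return one of them per call.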