Right, I see this as a problem too: asking the model if it's sure injects information if we only ask on wrong answers. And if we ask always, it may disturb more right answers than it fixes wrong ones.
It's also accuracy-dependent: if the model is 99 percent accurate on a subtask, asking if it's sure may degrade accuracy, while it may improve it on a subtask where it's only 50 percent accurate.
Or in other words, we could add the prompt and it might do better on AP English but worse on the bar exam.
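A quick back-of-the-envelope model makes that tradeoff concrete. This is only a sketch, assuming (hypothetically) that re-asking flips a correct answer to wrong with probability flip_correct and a wrong answer to correct with probability flip_wrong; neither rate comes from the conversation.

```python
def accuracy_after_reask(base_acc: float, flip_correct: float, flip_wrong: float) -> float:
    """Expected accuracy after always asking "are you sure?",
    given a base accuracy and the assumed probabilities that a
    correct answer flips to wrong and a wrong answer flips to correct."""
    return base_acc * (1 - flip_correct) + (1 - base_acc) * flip_wrong

# With identical (assumed) flip rates, re-asking hurts the 99%-accurate
# subtask but helps the 50%-accurate one, because the pool of fixable
# wrong answers is so much larger in the second case.
for base in (0.99, 0.50):
    after = accuracy_after_reask(base, flip_correct=0.10, flip_wrong=0.30)
    print(f"base={base:.2f} -> after re-ask={after:.3f}")
# base=0.99 -> after re-ask=0.894  (degrades)
# base=0.50 -> after re-ask=0.600  (improves)
```

Under this toy model, always asking helps only when (1 - base_acc) * flip_wrong exceeds base_acc * flip_correct, which is exactly the accuracy dependence described above.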