I also suspect that training on a dataset built specifically around “was your last response indicative of your maximum performance on the task?” would significantly boost this type of probe's accuracy in the sandbagging case. I see this becoming more important as we move to more realistic sandbagging settings without explicit instructions to perform poorly, since “deceiving someone” and “not trying very hard to explore this area of research” seem like non-trivially different concepts.
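To make the suggestion concrete, here is a minimal sketch of what such a probe-training setup could look like. Everything in it is hypothetical: the "activations" are synthetic Gaussian stand-ins, and the labels just encode full-effort vs. sandbagged responses. Real work would extract activations from the model (e.g. the residual stream) on responses labeled by whether they reflected maximum effort on the task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: d-dimensional "activations" from two conditions --
# responses produced at maximum effort (label 1) vs. sandbagged
# responses (label 0). Synthetic Gaussians stand in for real model
# activations here.
d, n = 32, 500
full_effort = rng.normal(loc=0.5, scale=1.0, size=(n, d))
sandbagged = rng.normal(loc=-0.5, scale=1.0, size=(n, d))
X = np.vstack([full_effort, sandbagged])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Linear probe: logistic regression trained by plain gradient descent
# on the "was this maximum performance?" labels.
w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"train accuracy: {acc:.2f}")
```

On real activations the interesting question is whether a probe trained on this "maximum effort" framing transfers to sandbagging induced without explicit instructions, which is exactly where the "deception" vs. "low effort" concepts might come apart.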
Good point!