Yes, I think the most natural way to estimate total surprise in practice would be to use sampling like you suggest. You could try to find the best explanation for “the model does $bad_thing with probability less than 1 in a million” (which you believe based on sampling) and then see how unlikely $bad_thing is according to the resulting explanation. In the Boolean circuit worked example, the final 23-bit explanation is likely still the best explanation for why the model outputs TRUE on at least 99% of inputs, and we can use this explanation to see that the model actually outputs TRUE on all inputs.
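To make the sampling step concrete: one standard way to justify a belief like "less than 1 in a million" is the "rule of three" — if zero failures are observed in n independent samples, an approximate 95% upper confidence bound on the failure probability is 3/n. Here is a minimal sketch, where `is_bad` and `sample_input` are hypothetical stand-ins for the failure check and the model's input distribution:

```python
import random

def sampling_upper_bound(is_bad, sample_input, n_samples, seed=0):
    """Bound P(bad_thing) by sampling.

    `is_bad` and `sample_input` are hypothetical stand-ins, not part of
    the actual proposal. With zero observed failures, the "rule of three"
    gives roughly a 95% upper confidence bound of 3 / n_samples.
    """
    rng = random.Random(seed)
    failures = sum(is_bad(sample_input(rng)) for _ in range(n_samples))
    if failures == 0:
        return 3.0 / n_samples  # approximate 95% upper bound
    return failures / n_samples

# Toy usage: a check that never fires on sampled inputs.
bound = sampling_upper_bound(
    is_bad=lambda x: False,
    sample_input=lambda rng: rng.random(),
    n_samples=3_000_000,
)
# With 3 million clean samples, the bound is 1e-6: "less than 1 in a million".
```

Note that direct sampling can only ever certify bounds of order 1/n; anything rarer requires the explanation-based extrapolation discussed here.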
Another possible approach is analogous to fine-tuning. You could start by using surprise accounting to find the best explanation for “the loss of the model is L” (where L is estimated during training), which should incentivize rich explanations of the model’s behavior in general. Then, to estimate the probability that the model does some rare $bad_thing, you could “fine-tune” your explanation using an objective that encourages it to focus on the relevant tails of the distribution. We have more ideas about estimating the probability of events that are too rare to estimate via sampling, and have been considering objectives other than surprise accounting for this. We plan to share these ideas soon.
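As a toy illustration of that last point (estimating probabilities of events too rare to observe by sampling), here is a sketch where a fitted Gaussian plays the role of the "explanation" of the model's general behavior; the scalar-score model and the Gaussian form are assumptions for illustration only, not part of the actual proposal:

```python
import math
import random

# Hypothetical stand-in "model": outputs a scalar score on random inputs.
rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(100_000)]

threshold = 6.0  # $bad_thing: "score exceeds 6"
empirical = sum(s > threshold for s in samples) / len(samples)
# With 1e5 samples the empirical frequency of this event is essentially 0,
# so direct sampling tells us almost nothing about its probability.

# A fitted "explanation" of the model's behavior in general:
# the scores are approximately Gaussian with this mean and stddev.
mu = sum(samples) / len(samples)
sigma = math.sqrt(sum((s - mu) ** 2 for s in samples) / len(samples))

# The explanation lets us extrapolate into the tail, far beyond
# what sampling alone could estimate:
p_bad = 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2.0)))
```

Of course, the extrapolation is only as trustworthy as the explanation's fit in the tail — which is exactly what the "fine-tuning" objective above is meant to improve.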
Cool, fine-tuning sounds a bit like conditional Kolmogorov complexity: the cost of your explanation would be K(explanation of rare thing | explanation of the loss value and general functionality).
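For reference, that analogy can be made precise via the standard chain rule for prefix Kolmogorov complexity, which holds up to logarithmic additive terms:

```latex
% Describing both explanations together costs about as much as describing
% the general one, then the rare-event one given it:
K(e_{\text{general}}, e_{\text{rare}})
  = K(e_{\text{general}}) + K(e_{\text{rare}} \mid e_{\text{general}}) + O(\log)
```

So once you have paid for the general explanation, the marginal cost of the rare-event explanation is only the conditional term — which is the intuition behind "fine-tuning" it rather than building it from scratch.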