Ideally, by looking at the number of times that I’ve experienced out-of-context problems in the past. You can optimize further by creating models that predict the base amount of novelty in your current environment—if you have reason to believe that your current environment is more unusual / novel than normal, increase your assigned “none of the above” proportionally. (And conversely, whenever evidence triggers the creation of a new top-level heading, that top-level heading’s probability should get sliced out of the “none of the above”, but the fact that you had to create a top-level heading should be used as evidence that you’re in a novel environment, thus slightly increasing ALL “none of the above” categories. If you’re using hard-coded heuristics instead of actually computing probability tables, this might come out as a form of hypervigilance and/or curiosity triggered by novel stimuli.)
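For concreteness, here is a minimal sketch of that bookkeeping in Python. The names and numbers (HypothesisTable, base_surprise_rate, the 1.05 novelty bump) are my own illustrative assumptions, not anything the reasoning above pins down:

```python
class HypothesisTable:
    def __init__(self, base_surprise_rate, novelty=1.0):
        # base_surprise_rate: how often, historically, the true answer turned
        # out to be outside every hypothesis I had written down.
        # novelty: >1.0 if the current environment seems more unusual than
        # the environments that produced that historical rate.
        self.novelty = novelty
        self.none_of_the_above = min(1.0, base_surprise_rate * novelty)
        self.hypotheses = {}  # top-level heading -> probability

    def promote_to_heading(self, name, prob):
        """Evidence has forced a new top-level heading into existence."""
        # Its probability is sliced out of the "none of the above" reserve...
        prob = min(prob, self.none_of_the_above)
        self.hypotheses[name] = prob
        self.none_of_the_above -= prob
        # ...but having to create a heading at all is itself evidence of a
        # novel environment, so bump the novelty estimate and let it push
        # the reserve slightly back up (capped by the unassigned mass).
        self.novelty *= 1.05
        self.none_of_the_above = min(1.0 - sum(self.hypotheses.values()),
                                     self.none_of_the_above * 1.05)


table = HypothesisTable(base_surprise_rate=0.05, novelty=1.2)
print(table.none_of_the_above)                    # 0.06: novelty-adjusted reserve
table.promote_to_heading("new explanation", 0.03)
print(table.hypotheses, table.none_of_the_above)  # reserve shrank, then crept back up
```

The fixed 1.05 multiplier is exactly the kind of hard-coded heuristic mentioned above; a fuller version would update the novelty estimate from evidence rather than by a constant bump.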