it doesn’t take any extra code to predict all the outcomes that you’ll never see. Just extra space and time, and those are not the minimized quantity. In fact, computing all the outcomes you’ll never see is exactly the sort of wasteful space/time usage that programmers engage in when they want to minimize code length: it’s hard to write code telling your processor to abandon certain threads of computation once they are no longer relevant.
you missed the point. You need code for picking the outcome you actually see out of the outcomes you didn’t see, if you calculated those. So it does take extra code to predict the outcome you did see, if you also calculated the outcomes you didn’t. And then it’s hard to tell which version would require less code: neither piece of code is a subset of the other, and the difference likely depends on the encoding of programs.
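A toy sketch of that last claim (everything here is hypothetical and chosen for illustration, not taken from either comment): one program computes only the observed outcome, the other computes every outcome and then needs extra selection code to pick the observed one. Neither source is a literal subset of the other, and which is shorter depends entirely on how the programs are written down.

```python
# Two hypothetical "programs", represented as source strings so we can
# compare their lengths under one particular encoding (Python source).

# Computes only the outcome we actually observe.
observed_only_src = "lambda x: x * x"

# Computes every outcome, then uses extra selection code (the indexing)
# to pick the one we observed.
all_then_select_src = "lambda x, i: [x * x, x + x, x - 1][i]"

observed_only = eval(observed_only_src)
all_then_select = eval(all_then_select_src)

# Both agree on the observed outcome...
assert observed_only(3) == all_then_select(3, 0) == 9

# ...but neither source is a substring (subset) of the other, and which
# one is shorter is a fact about the encoding, not a fixed rule.
print(len(observed_only_src), len(all_then_select_src))
print(observed_only_src in all_then_select_src)  # False
```

Under a different encoding of programs (a different language, or different primitives) the length comparison could easily flip, which is the sense in which the answer "depends on the encoding of programs."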