We just want our model to do the kinds of things that make it easier rather than harder to peek inside and see what's going on, so that if something starts going wrong, we know why.
To generalize my question: what if something goes wrong, we peek inside, and we find that it's one of the 10-15% of cases where the model doesn't agree with the known algorithm used to generate the penalty term?
I interpreted your question differently than you probably intended. From my perspective, we are hoping for greater transparency as an end result, rather than treating the model as "similar enough" to some other algorithm and using that algorithm to interpret it.
If I wanted to answer your generalized question in the context of comparing the model to the known algorithm, I'd have to think for much longer. I don't have a good response on hand.
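For concreteness, here is a minimal sketch of what a penalty term generated from a known algorithm might look like, along with one way to measure the 10-15% disagreement rate. It assumes a classification setting and PyTorch; the function names, the KL-divergence form of the penalty, and the use of logits are illustrative assumptions, not the actual setup under discussion.

```python
import torch
import torch.nn.functional as F

def penalized_loss(model_logits, targets, known_algo_logits, penalty_weight=0.1):
    """Task loss plus a penalty that grows when the model's output
    distribution diverges from that of a known reference algorithm.

    model_logits:      (batch, num_classes) raw model outputs
    targets:           (batch,) ground-truth class indices
    known_algo_logits: (batch, num_classes) outputs of the known algorithm
                       on the same inputs (assumed precomputed)
    """
    # Ordinary supervised loss on the task itself.
    task_loss = F.cross_entropy(model_logits, targets)

    # Disagreement penalty: KL divergence between the model's output
    # distribution and the known algorithm's output distribution.
    model_log_probs = F.log_softmax(model_logits, dim=-1)
    known_probs = F.softmax(known_algo_logits, dim=-1)
    disagreement = F.kl_div(model_log_probs, known_probs, reduction="batchmean")

    return task_loss + penalty_weight * disagreement

def disagreement_rate(model_logits, known_algo_logits):
    """Fraction of examples where the model's top prediction differs from
    the known algorithm's top prediction (the '10-15%' figure above)."""
    differs = model_logits.argmax(dim=-1) != known_algo_logits.argmax(dim=-1)
    return differs.float().mean().item()
```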