Note that the model’s output isn’t what’s relevant for the neutrality measure; it’s the algorithm it’s internally implementing. That being said, this sort of trickery is still possible if your model is non-myopic, which is why it’s important to have some sort of myopia guarantee.