Just for context, I’m usually assuming we already have a good AI model and just want to find the dashed arrow (but that doesn’t change things too much, I think). As for why this diagram doesn’t solve worst-case ELK, the ELK report contains a few paragraphs on that, but I also plan to write more about it soon.
Yep, the nice thing is that we can write down this commutative diagram in any category,[1] so if we want probabilities, we can just use the category with distributions as objects and Markov kernels as morphisms. I don’t think that’s too strict, but have admittedly thought less about that setting.
1. We do need some error metric to define “approximate” commutativity.
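To make that footnote a bit more concrete (the notation here is just illustrative, not something from the post): if the two paths through the square are composites of Markov kernels, one natural way to cash out “approximate” commutativity is a worst-case divergence between the two composite kernels, e.g. total variation:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Composition of Markov kernels f : X -> Y and g : Y -> Z is by integration:
\[
  (g \circ f)(A \mid x) = \int_Y g(A \mid y)\, f(\mathrm{d}y \mid x).
\]
% Exact commutativity of the square asks the two composites to agree:
%   g \circ f = g' \circ f'.
% One candidate error metric for approximate commutativity:
% worst-case total variation distance between the two composites.
\[
  \varepsilon = \sup_{x \in X} d_{\mathrm{TV}}\bigl( (g \circ f)(\cdot \mid x),\; (g' \circ f')(\cdot \mid x) \bigr).
\]
\end{document}
```

Other choices (e.g. a KL divergence, or averaging over $x$ under some input distribution instead of taking the sup) would work too; total variation is just one option.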