Bird’s eye perspective: All information theory is just KL-divergence and priors, all priors are just Gibbs measures, and algorithmic information theory is just about how computational costs should be counted as “energy” in the Gibbs measure (description length vs. time vs. memory, etc.).
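A hedged sketch of that Gibbs-measure framing (the energy E, the inverse temperature β, and the particular cost terms below are illustrative choices, not from the comment itself):

```latex
% A prior over programs p as a Gibbs measure with "energy" E(p):
\[
  \mu_\beta(p) \;=\; \frac{e^{-\beta E(p)}}{Z},
  \qquad
  Z \;=\; \sum_{p} e^{-\beta E(p)} .
\]
% Counting only description length, $E(p) = |p| \ln 2$ with $\beta = 1$
% gives $\mu(p) \propto 2^{-|p|}$, the Solomonoff-style weighting;
% adding a time cost, e.g. $E(p) = |p| \ln 2 + \ln t(p)$, down-weights
% slow programs in the spirit of Levin's $Kt$ and speed priors.
```

Memory or any other resource cost slots in the same way, as one more term in E.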
Frog’s eye perspective: taking the pushforward measure along the semantics of the language P collects the probability mass of each function f’s entire extensional equivalence class; no equally natural operation collects only the mass of the single maximum-probability representative.
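A minimal toy sketch of that frog’s-eye operation (the three-op language, the input fingerprint standing in for extensional equality, and the 6^(−|p|) weighting are all hypothetical choices for illustration):

```python
from collections import defaultdict
from itertools import product

# A toy language P: programs are strings over three ops, read left to right.
OPS = {"i": lambda x: x + 1,  # increment
       "d": lambda x: 2 * x,  # double
       "n": lambda x: x}      # no-op, so distinct programs share a semantics

def run(prog, x):
    for op in prog:
        x = OPS[op](x)
    return x

# Prior over programs: mass (2*|OPS|)^(-|p|) = 6^(-|p|), so each length
# class L carries total mass 2^(-L) and the prior sums to 1 in the limit;
# here we enumerate lengths 1..6.
programs = ["".join(p) for L in range(1, 7) for p in product(OPS, repeat=L)]
mass = {p: (2 * len(OPS)) ** -len(p) for p in programs}

# Stand-in for extensional equality: fingerprint a program's function f
# by its outputs on a few inputs.
def extension(prog, inputs=(0, 1, 2, 3)):
    return tuple(run(prog, x) for x in inputs)

pushforward = defaultdict(float)  # total mass of f's equivalence class
best_rep = {}                     # mass of f's single best representative
for p in programs:
    f = extension(p)
    pushforward[f] += mass[p]                          # the natural sum
    best_rep[f] = max(best_rep.get(f, 0.0), mass[p])   # extra bookkeeping

# The successor function x+1 is computed by "i", "in", "ni", "inn", ...;
# the pushforward collects all of that mass, the max keeps only 6^(-1).
f_succ = extension("i")
print(f"class mass {pushforward[f_succ]:.4f} vs "
      f"best representative {best_rep[f_succ]:.4f}")
```

The summed line is just the pushforward of mass along the semantics; isolating the maximum-mass representative requires the extra bookkeeping, which is one way to see why no equally natural operation produces it.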
Strong upvoted.
Both seem underrated.