“any finite-entropy function f(X)”
Uh...
∀x∈X, P[X=x] > 0.
∑_{x∈X} P[X=x] = 1.
By “oh, no, the σs have to be non-repeating”, |X| ≥ |ℕ|: the sample space is (at least) countably infinite. Since the series ∑_{x∈X} P[X=x] converges (to 1), the nth term test forces its terms toward 0: ∀ε>0, ∃x∈X : P[X=x] < ε.
By properties of logarithms, −log(P[X=x]) therefore has no upper bound over x∈X: the surprisal of X is unbounded. (The individual summands −P[X=x]·log(P[X=x]) do stay bounded — each is at most 1/e in nats — but for suitably heavy-tailed P the sum ∑_{x∈X} −P[X=x]·log(P[X=x]) still diverges.)
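To make that concrete, here's a minimal numeric sketch (my example, not anything from the original argument), using the full-support distribution p_n = 2^{−n} on n ∈ {1, 2, …}: the surprisal −log₂ p_n = n grows without bound, while each individual summand −p_n·ln p_n shrinks toward 0 and never exceeds 1/e.

```python
import math

# Hedged illustration: p_n = 2^{-n} for n >= 1 is one concrete full-support
# distribution on a countably infinite space (its terms sum to 1).
def p(n: int) -> float:
    return 2.0 ** -n

for n in (1, 10, 30):
    surprisal_bits = -math.log2(p(n))       # equals n here: unbounded as n grows
    summand_nats = -p(n) * math.log(p(n))   # per-outcome entropy term
    print(f"n={n:>2}: p_n={p(n):.2e}  surprisal={surprisal_bits:4.0f} bits  "
          f"-p*ln(p)={summand_nats:.2e}  (<= 1/e ≈ {1.0 / math.e:.3f})")
```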
I’m not quite clear on how @johnswentworth defines a “finite-entropy function”, but whichever reasonable way he does, I’m pretty sure the above means that for any such distribution P[X] with infinite entropy, the set of all such functions over our X is in fact the empty set. Which seems problematic. I do actually want to know how John defines it; literature searches are mostly turning up nothing for me. Notably, many reasonable-looking, everywhere-positive distributions over merely countably infinite sample spaces have infinite Shannon entropy.
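For that last point, a standard textbook example (my choice of distribution, not anything from John's post) is p_n ∝ 1/(n·ln²n) on n ≥ 2: its entropy diverges, with partial sums growing roughly like log log N. For contrast, a geometric distribution on the same countable space has entropy of exactly 2 bits, so full support alone does not force infinite entropy. A quick numeric sketch:

```python
import math

def entropy_bits(ps):
    """Shannon entropy (bits) of a finite probability vector."""
    return -sum(q * math.log2(q) for q in ps if q > 0)

for N in (10**2, 10**4, 10**6):
    # Heavy tail: p_n proportional to 1/(n * ln(n)^2) on {2, ..., N}, renormalized.
    # The truncated entropies keep growing with N (divergence is ~ log log N).
    w = [1.0 / (n * math.log(n) ** 2) for n in range(2, N + 1)]
    Z = sum(w)
    print(f"N={N:>9,}: truncated heavy-tail entropy ≈ {entropy_bits(x / Z for x in w):.3f} bits")

# Contrast: geometric p_n = 2^{-n} on the same countable space has
# entropy exactly 2 bits, despite also having full support.
g = [2.0 ** -n for n in range(1, 60)]
print(f"geometric entropy ≈ {entropy_bits(x / sum(g) for x in g):.3f} bits")
```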
(h/t to @WhatsTrueKittycat for spotlighting this for me!)