Lemma 7: If f,f′,f′′,f′′′∈[X→(−∞,∞]] with f′′≪f and f′′′≪f′, p∈(0,1), and X is compact, LHC, and second-countable, then pf′′+(1−p)f′′′≪pf+(1−p)f′. The same result holds for [X→[0,1]].
This is the only spot where we need that X is LHC and not merely locally compact.
Here’s how it’s going to work. We’re going to use the details of the proof of Theorem 2 to craft two special functions g and g′ of a certain form, which exceed f′′ and f′′′. Then, of course, pf′′+(1−p)f′′′⊑pg+(1−p)g′ as a result. We’ll then show that pg+(1−p)g′ can be written as the supremum of finitely many functions in [X→[−∞,∞]] (or [X→[0,1]]) which all approximate pf+(1−p)f′, so pf+(1−p)f′≫pg+(1−p)g′⊒pf′′+(1−p)f′′′, establishing that pf+(1−p)f′≫pf′′+(1−p)f′′′.
From our process of constructing a basis for the function space back in Theorem 2, we have a fairly explicit method of crafting a directed set of approximating functions for f and f′. The only approximator functions you need to build any f are step functions of the form (U↘q) with q being a finite rational number, and U being selected from the base of X, and q<infx∈K(U)f(x) where K is our compact hull operator. Any function of this form approximates f, and taking arbitrary suprema of them, f itself is produced.
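To fix notation, (U↘q) denotes the step function taking the value q on U and bottom elsewhere; spelled out for the two type signatures:
(U↘q)(x)=q if x∈U, and (U↘q)(x)=−∞ (or 0, in the [X→[0,1]] case) if x∉U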
Since functions of this form (and suprema of finite collections of them) make a directed set with a supremum of f or above (f′ or above, respectively), we can isolate a g from the directed set of basis approximators for f, s.t. g⊒f′′, because f′′≪f. And similarly, we can isolate a g′ which approximates f′ s.t. g′⊒f′′′, because f′′′≪f′.
Now, it’s worthwhile to look at the exact form of g and g′. Working just with g (all the same stuff works for g′), it’s a finite supremum of atomic step functions, so
g(x)=supi(Ui↘qi)(x)=supi:x∈Uiqi
and, by how our atomic step functions of the directed set associated with f were built, we know that for all i, qi<infx∈K(Ui)f(x). (remember, the Ui are selected from the base, so we can take the compact hull of them).
There’s a critical split here between the behavior of the [X→[−∞,∞]] type signature, and the [X→[0,1]] type signature.
For the [X→[−∞,∞]] type signature, there’s finitely many i, and everything not in the Ui gets mapped to −∞. Since f′′∈[X→(−∞,∞]], and X is compact, f′′ is bounded below (Lemma 1), so for this finite supremum to exceed f′′, the Ui must cover X.
For the [X→[0,1]] type signature, again there’s finitely many i, but we can’t guarantee that the Ui cover X, because everything not in some Ui gets mapped to 0, and f′′ might not be bounded above 0. So, we’ll add in a “dummy” step function (X↘0). This is just the constant-0 function, which approximates everything, so nothing changes. However, it ensures that X is indeed covered by the open sets of the finite collection of step functions.
So… Given that we have our g,g′, what does the function pg+(1−p)g′ look like? Well, it looks like
pg(x)+(1−p)g′(x)=psupi:x∈Uiqi+(1−p)supj:x∈Ujqj=supi:x∈Uipqi+supj:x∈Uj(1−p)qj
=supi,j:x∈Ui∩Uj(pqi+(1−p)qj)=supi,j(Ui∩Uj↘pqi+(1−p)qj)(x)
We can do this in the [0,1] case thanks to adding in the “dummy” open sets, which get the Ui family and Uj family to both cover X.
Anyways, pg+(1−p)g′ equals ⨆i,j(Ui∩Uj↘pqi+(1−p)qj). There’s finitely many i’s and finitely many j’s by how g and g’ were built, so this is a supremum of finitely many atomic step functions. Since pg+(1−p)g′⊒pf′′+(1−p)f′′′, if we can just show that all these atomic step functions are approximators for pf+(1−p)f′ in [X→[−∞,∞]] (or [X→[0,1]]), then pg+(1−p)g′≪pf+(1−p)f′ (supremum of finitely many approximators is an approximator), and we’d show our desired result that pf′′+(1−p)f′′′≪pf+(1−p)f′. So let’s get started on showing that, regardless of i and j,
(Ui∩Uj↘pqi+(1−p)qj)≪pf+(1−p)f′
Now, if Ui∩Uj=∅, then this function maps everything to −∞ or 0, which trivially approximates pf+(1−p)f′. Otherwise, everything in Ui∩Uj gets mapped to pqi+(1−p)qj, and everything not in it gets mapped to −∞ (or 0).
We know by our process of construction for this that qi<infx∈K(Ui)f(x) and qj<infx∈K(Uj)f′(x), so let’s try to work with that. Well, for the [X→[−∞,∞]] case, at least. The [X→[0,1]] case is a bit more delicate since we added those two dummy functions. We have:
pqi+(1−p)qj<pinfx∈K(Ui)f(x)+(1−p)infx∈K(Uj)f′(x)
=infx∈K(Ui)pf(x)+infx∈K(Uj)(1−p)f′(x)
≤infx∈K(Ui∩Uj)pf(x)+infx∈K(Ui∩Uj)(1−p)f′(x)
≤infx∈K(Ui∩Uj)(pf(x)+(1−p)f′(x))=infx∈K(Ui∩Uj)(pf+(1−p)f′)(x)
The first strict inequality was because of what qi and qj were. Then, just move the constants in. For the first non-strict inequality, it’s because of monotonicity for our compact hull operator, Ui∩Uj⊆Ui, so it has a smaller compact hull. Then the next inequality is just grouping the two infs, and we’re finished.
At this point, the usual argument of “fix a directed set of functions ascending up to pf+(1−p)f′, for each patch of K(Ui∩Uj) you can find a function from your directed set that beats the value pqi+(1−p)qj on that spot, this gets you an open cover of K(Ui∩Uj), it’s compact, isolate a finite subcover ie finitely many functions, take an upper bound in your directed set, bam, it beats the step function on a compact superset of its open set so it beats the step function, and the directed set was arbitrary so pf+(1−p)f′ is approximated by the atomic step function” from Theorem 2 kicks in, to show that the atomic step function approximates pf+(1−p)f′. Since this works for arbitrary i,j, and there are finitely many of these, the finite supremum pg+(1−p)g′ approximates pf+(1−p)f′, and dominates pf′′+(1−p)f′′′, and we’re done.
Now for the [X→[0,1]] case. This case works perfectly fine if our i,j pair has qi and qj being above 0. But what if our Ui has qi being above 0, but our j index is picking that (X↘0) “dummy” approximator? Or vice-versa. Or maybe they’re both the “dummy” approximator.
That last case can be disposed of, because (X∩X↘p⋅0+(1−p)⋅0) is just the constant-0 function, which trivially approximates everything. That leaves the case where just one of the step functions isn’t the constant-0 function. Without loss of generality, assume Ui is paired with qi>0, but the other one is the (X↘0) “dummy” approximator. Then our goal is to show that (Ui∩X↘pqi+(1−p)0)≪pf+(1−p)f′. This can be done because, since p∈(0,1), and qi<infx∈K(Ui)f(x), we have
pqi+(1−p)0<pinfx∈K(Ui)f(x)+(1−p)infx∈K(Uj)f′(x)
And the rest of the argument works with no problems from there.
Lemma 8: If a∈(0,∞), and f′≪f, then af′≪af, provided af lies in the function space of interest. Here f,f′∈[X→(−∞,∞]] (or [X→[0,1]]), and X is compact, LHC, and second-countable.
By our constructions in Theorem 2, we can always write any function as the supremum of atomic step functions below it, (U↘q), as long as q<infx∈K(U)f(x). Since f′≪f, we can find finitely many atomic step functions (Ui↘qi) which all approximate f s.t. the function x↦supi:x∈Uiqi exceeds f′. In the [X→(−∞,∞]] case, since the finite supremum exceeds f′, which is bounded below by Lemma 1, the Ui family covers X.
Now, consider the finite collection of atomic step functions (Ui↘aqi). If we can show that all of these approximate af, and the supremum of them is larger than af′, then, since the supremum of finitely many approximators is an approximator, we’ll have shown that af≫⨆i(Ui↘aqi)⊒af′, establishing approximation. So, let’s show those. For arbitrary i, we have
aqi<a⋅infx∈K(Ui)f(x)=infx∈K(Ui)a⋅f(x)=infx∈K(Ui)(af)(x)
So these atomic step functions all approximate af. For exceeding af′, we have, for arbitrary x,
⊔i(Ui↘aqi)(x)=supi:x∈Ui(aqi)=asupi:x∈Uiqi≥a⋅f′(x)=(af′)(x)
And so, our result follows. It’s possible that, for the [X→[0,1]] case, x lies in no Ui, but then the supremum of the empty set would be 0, and since the supremum of the (Ui↘qi) exceeds f′, f′(x)=0 as well, so the same inequality holds.
Lemma 9: If q∈R and f′≪f with f,f′∈[X→(−∞,∞]] with X being compact, LHC, and second-countable, then f′+cq≪f+cq.
We can invoke Theorem 2, to write any function as the supremum of atomic step functions below it, (U↘q′), as long as q′<infx∈K(U)f(x). Since f′≪f, we can find finitely many atomic step functions (Ui↘qi) which all approximate f s.t. the function x↦supi:x∈Uiqi exceeds f′. Also, f′ is bounded below by Lemma 1, so the finite family Ui covers the entire space.
Now, consider the finite collection of atomic step functions (Ui↘qi+q). If we can show that all of these approximate f+cq, and the supremum of them is larger than f′+cq, then, since the supremum of finitely many approximators is an approximator, we’ll have shown that f+cq≫⨆i(Ui↘qi+q)⊒f′+cq, establishing approximation. So, let’s show those. For arbitrary i, we have
qi+q<infx∈K(Ui)f(x)+q=infx∈K(Ui)(f+cq)(x)
So these atomic step functions all approximate f+cq. For exceeding f′+cq, we have, for arbitrary x,
⊔i(Ui↘qi+q)(x)=supi:x∈Ui(qi+q)=supi:x∈Ui(qi)+q≥f′(x)+q=(f′+cq)(x)
And so, our result follows.
Lemma 10: If q∈[−1,1] and f′≪f with f,f′∈[X→[0,1]] with X being compact, LHC, and second-countable, then for all ϵ>0, c0⊔(f′+cq−ϵ)≪f+cq, as long as f+cq is also bounded in [0,1].
By Theorem 2, we can always write any function as the supremum of atomic step functions below it, (U↘q′), where q′<infx∈K(U)f(x). Since f′≪f, we can find finitely many atomic step functions (Ui↘qi) which all approximate f s.t. the function x↦supi:x∈Uiqi exceeds f′.
Now, consider the following finite collection of atomic step functions: (Ui↘max(qi+q−ϵ,0)), along with (X↘max(q−ϵ,0)).
If we can show that all of these approximate f+cq, and the supremum of them is larger than c0⊔(f′+cq−ϵ), then, since the supremum of finitely many approximators is an approximator, we’ll have shown that f+cq≫(⨆i(Ui↘max(qi+q−ϵ,0)))⊔(X↘max(q−ϵ,0))⊒c0⊔(f′+cq−ϵ), establishing approximation. So, let’s show those.
For arbitrary i, if max(0,qi+q−ϵ)>0, then we have
qi+q−ϵ<infx∈K(Ui)f(x)+q=infx∈K(Ui)(f+cq)(x)
So these atomic step functions all approximate f+cq. If max(0,qi+q−ϵ)=0, then the atomic step function turns into c0, which also approximates f+cq.
For the one extra step function we’re adding in, we have two possible cases: one where q−ϵ>0, and one where max(q−ϵ,0)=0. In the latter case, the function turns into c0, which trivially approximates f+cq. However, if q−ϵ>0, then we have
q−ϵ<q+infx∈Xf(x)≤(f+cq)(x′)
For any x′, so this constant strictly undershoots the minimal value of f+cq across the whole space, and thus approximates f+cq by Lemma 2.
So, this finite collection of functions has its supremum being an approximator of f+cq. Now for showing that the supremum beats c0⊔(f′+cq−ϵ). If x lies in some Ui, then
max(supi:x∈Ui(qi+q−ϵ),q−ϵ,0)=max(supi:x∈Ui(qi+q−ϵ),0)
=max(supi:x∈Ui(qi)+q−ϵ,0)≥max(f′(x)+q−ϵ,0)=(c0⊔(f′+cq−ϵ))(x)
And we’ve shown that this supremum of step functions beats our new function. If x lies in none of the Ui, then the original supremum of step functions must have mapped x to 0, and that beats f′, so f′(x)=0. That’s the only possible thing that could have happened. Then
max(supi:x∈Ui(qi+q−ϵ),q−ϵ,0)=max(0,q−ϵ)=max(0,0+q−ϵ)=(c0⊔(f′+cq−ϵ))(x)
So, this case is taken care of too, and our supremum of step functions beats c0⊔(f′+cq−ϵ). The lemma then finishes up: we have that f′≪f implies that for any ϵ>0 and q s.t. f+cq and f′ and f all lie in [0,1], we have
c0⊔(f′+cq−ϵ)≪f+cq
NOW we can begin proving our theorems!
Theorem 4: The subset of [[X→(−∞,∞]]→[−∞,∞]] (or [[X→[0,1]]→[0,1]]) consisting of inframeasures is an ω-BC domain when X is compact, second-countable, and LHC.
This proof will begin by invoking either Corollary 1 or Proposition 6. Since locally hull-compact implies locally compact, we can invoke these, to get that [[X→(−∞,∞]]→[−∞,∞]], or [[X→[0,1]]→[0,1]] is an ω-BC domain with a top element.
Then, if we can just show that the set of inframeasures is closed under directed suprema and arbitrary nonempty infinima, we can invoke Theorem 3 to get that the space of R-inframeasures (or [0,1]-inframeasures) is an ω-BC domain. This is the tricky part.
For our conventions on working with infinity, we’ll adopt the following. 0 times any number in (−∞,∞] is 0. Infinity times any number in (0,∞] is infinity. Infinity plus any number in (−∞,∞] is infinity. Given two functions f,f′ bounded below, then pf+(1−p)f′ is the function which maps x to pf(x)+(1−p)f′(x), which is always continuous when f,f′ are, due to being monotone, and preserving directed suprema. The distance between points in (−∞,∞], for defining the distance between functions as d(f,f′)=supxd(f(x),f′(x)) is the usual distance when both numbers are finite, 0 when both numbers are ∞ and ∞ when one point is ∞ and the other isn’t.
An inframeasure is a function in [[X→(−∞,∞]]→[−∞,∞]] or [[X→[0,1]]→[0,1]] with the following three properties.
ψ should be 1-Lipschitz, it should map the constant-0 function c0 to 0 or higher, and it should be concave.
The definition of 1-Lipschitzness is that, using our convention for the distance between points in [−∞,∞] and the distance between functions X→(−∞,∞], we have that for any two functions, d(ψ(f),ψ(f′))≤d(f,f′).
The definition of concavity is that, for all p∈[0,1],f,f′∈[X→(−∞,∞]],
pψ(f)+(1−p)ψ(f′)≤ψ(pf+(1−p)f′)
Admittedly, this might produce situations where we’re adding infinity and negative infinity, which we haven’t defined yet, but this case is ruled out by Lemma 3 and Lemma 1. Any function has a finite lower bound by Lemma 1, and Lemma 3 steps in to ensure that they must have a finite expectation value according to ψ. So, we’ll never have to deal with cases where we’re adding negative infinity to something, we’re working entirely in the realm of real numbers plus infinity.
Now, let’s get started on showing that the directed supremum of 1-Lipschitz, concave functions which map 0 upwards is 1-Lipschitz, concave, and maps 0 upwards.
Let’s start with 1-Lipschitzness. In the case that our functions f,f′ have a distance of infinity from each other, 1-Lipschitzness is trivially fulfilled. So we can assume that f,f′ have a finite distance from each other. Observe that it’s impossible that (without loss of generality) (⨆↑Φ)(f) is infinity, while (⨆↑Φ)(f′) is finite. This is because we have
(⨆↑Φ)(f)=⨆↑ψ∈Φψ(f)
So, if that first term is infinite, we can find ψ∈Φ which map f arbitrarily high. And then ψ(f′) would only be at most d(f,f′) away (by 1-Lipschitzness of ψ), so the ψ(f′) values would climb arbitrarily high as well, making (⨆↑Φ)(f′) be infinite as well. Then we’d have
d((⨆↑Φ)(f),(⨆↑Φ)(f′))=d(∞,∞)=0≤d(f,f′)
And this would also show 1-Lipschitzness. So now we arrive at our last case where d(f,f′) is finite and the directed suprema are both finite. Then we’d have
d((⨆↑Φ)(f),(⨆↑Φ)(f′))=|(⨆↑Φ)(f)−(⨆↑Φ)(f′)|
=|⨆↑ψ∈Φψ(f)−⨆↑ψ∈Φψ(f′)|≤⨆ψ∈Φ|ψ(f)−ψ(f′)|≤d(f,f′)
and we’re done. That was just unpacking definitions, and then the ≤ was because we can upper-bound the distance between the suprema of the two sets of numbers by the maximum distance between points paired off with each other. Then just apply 1-Lipschitzness at the end.
As for concavity, it’s trivial when p=0 or 1, because then (for p=0 as an example)
p(⨆↑Φ)(f)+(1−p)(⨆↑Φ)(f′)=(⨆↑Φ)(f′)=(⨆↑Φ)(0f+1f′)
So now, let’s start working on the case where p isn’t 0 or 1. We have
p(⨆↑Φ)(f)+(1−p)(⨆↑Φ)(f′)=p⨆↑ψ∈Φψ(f)+(1−p)⨆↑ψ∈Φψ(f′)
=⨆↑ψ∈Φpψ(f)+⨆↑ψ∈Φ(1−p)ψ(f′)=⨆↑ψ∈Φ(pψ(f)+(1−p)ψ(f′))
≤⨆↑ψ∈Φψ(pf+(1−p)f′)=(⨆↑Φ)(pf+(1−p)f′)
First equality is just unpacking definitions. Second equality is because, if the directed supremum is unbounded above, multiplying by a finite constant and taking the directed supremum still gets you to infinity, and it also works for the finite constants. Then, we observe that for any ψ chosen by the first directed supremum and ψ′ chosen by the second, we can take an upper bound of the two to exceed the value. Conversely, any particular ψ can be used for both directed suprema. So we can move the directed suprema to the outside. Then we just apply concavity, and pack back up.
Finally, there’s mapping 0 to above 0. This is easy.
(⨆↑Φ)(c0)=⨆↑ψ∈Φψ(c0)≥⨆↑ψ∈Φ0=0
All these proofs work equally well with both type signatures, so in both type signatures, the set of inframeasures is closed under directed suprema.
Now we’ll show they’re closed under arbitrary nonempty infinima. This is the super-hard part. First up, 1-Lipschitzness.
1-Lipschitzness is trivial if d(f,f′)=∞, so we can assume that this value is finite. It’s also trivial if (⊓Φ)(f)=∞=(⊓Φ)(f′), because then the distance is 0. So we can assume that one of these values is finite. Without loss of generality, let’s say that (⊓Φ)(f)≥(⊓Φ)(f′). Then, we can go:
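As a sketch, unpacking both infinima with Lemma 3 (and using that we took (⊓Φ)(f)≥(⊓Φ)(f′), so the distance is just the difference):
d((⊓Φ)(f),(⊓Φ)(f′))=(⊓Φ)(f)−(⊓Φ)(f′)=⨆↑f′′≪f⊓ψ∈Φψ(f′′)−⨆↑f′′′≪f′⊓ψ∈Φψ(f′′′)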
Via Lemma 3. Now, at this point, we’re going to need to invoke either Lemma 5 for the R case, or Lemma 6 for the [0,1] case. We’ll treat the two cases identically by using the terminology c⊥⊔(f′′+c−d(f,f′)). In the R case, this is just f′′+c−d(f,f′), because the functions f′′ have a finite lower bound via compactness/Lemma 1, and we’re in the assumed case where d(f,f′)<∞. In the [0,1] case, it’s c0⊔(f′′+c−d(f,f′)), which is a different sort of function. Anyways, since we have f′′≪f, this then means that c⊥⊔(f′′+c−d(f,f′))≪f′ by Lemma 5 or 6 (as the type signature may be). So, swapping out the supremum over f′′′≪f′ with choosing a f′′≪f and evaluating the c⊥⊔(f′′+c−d(f,f′)), makes that second directed supremum go down in value, so the value of the equation as a whole go up. So, in both cases, we have
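A sketch of the resulting chain (each step is the one justified in the next paragraph):
(⊓Φ)(f)−(⊓Φ)(f′)≤⨆↑f′′≪f⊓ψ∈Φψ(f′′)−⨆↑f′′≪f⊓ψ∈Φψ(c⊥⊔(f′′+c−d(f,f′)))
=|⨆↑f′′≪f⊓ψ∈Φψ(f′′)−⨆↑f′′≪f⊓ψ∈Φψ(c⊥⊔(f′′+c−d(f,f′)))|
≤⨆f′′≪f|⊓ψ∈Φψ(f′′)−⊓ψ∈Φψ(c⊥⊔(f′′+c−d(f,f′)))|
≤⨆f′′≪fsupψ∈Φ|ψ(f′′)−ψ(c⊥⊔(f′′+c−d(f,f′)))|
=⨆f′′≪fsupψ∈Φd(ψ(f′′),ψ(c⊥⊔(f′′+c−d(f,f′))))≤⨆f′′≪fd(f′′,c⊥⊔(f′′+c−d(f,f′)))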
The starting inequality was already extensively discussed. Then, the shift to the absolute value is because we know the quantity as a whole is positive. The gap between the two suprema is upper-bounded by the maximum gap between the two terms with the same f′′ chosen. Similarly, the gap between the two infinima is upper-bounded by the maximum gap between the two terms with the same ψ chosen. Then we just do a quick conversion to distance, and apply 1-Lipschitzness of all the ψ we’re taking the infinimum of. Continuing onwards, we have
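Writing the distance out pointwise (a sketch):
⨆f′′≪fd(f′′,c⊥⊔(f′′+c−d(f,f′)))=⨆f′′≪fsupx∈X|f′′(x)−(c⊥⊔(f′′+c−d(f,f′)))(x)|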
Now, we’ve got two possibilities. The first case is where f′′ has range in (−∞,∞]. In such a case, the latter term would turn into −d(f,f′), so the contents as a whole of the absolute value would be d(f,f′). The second case is where f′′ has range in [0,1]. In such a case, the latter term would be as close or closer to f′′(x) than f′′(x)−d(f,f′), so we can upper-bound the contents of the absolute-value chunk by d(f,f′). In both cases, we have
≤⨆f′′≪fsupx∈X|f′′(x)−(f′′(x)−d(f,f′))|=d(f,f′)
And we’re done, 1-Lipschitzness is preserved under arbitrary infinima, for both type signatures.
Now for concavity. Concavity is trivial when p=0 or 1, so let’s assume p∈(0,1), letting us invoke Lemma 7. Let’s show concavity. We have
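A sketch of the chain of (in)equalities (the steps are the ones justified in the next paragraph):
p(⊓Φ)(f)+(1−p)(⊓Φ)(f′)=p⨆↑f′′≪f⊓ψ∈Φψ(f′′)+(1−p)⨆↑f′′′≪f′⊓ψ∈Φψ(f′′′)
=⨆↑f′′≪f,f′′′≪f′(p⊓ψ∈Φψ(f′′)+(1−p)⊓ψ∈Φψ(f′′′))
≤⨆↑f′′≪f,f′′′≪f′⊓ψ∈Φ(pψ(f′′)+(1−p)ψ(f′′′))
≤⨆↑f′′≪f,f′′′≪f′⊓ψ∈Φψ(pf′′+(1−p)f′′′)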
The first equality was just unpacking via Lemma 3. Then we can move the constant in, group into a single big directed supremum, combine the infs (making the inside bigger), then apply concavity for all our ψ∈Φ. Now it gets interesting. Since f′′≪f and f′′′≪f′ and p∈(0,1) we have that pf′′+(1−p)f′′′≪pf+(1−p)f′ by Lemma 7. Since all possible choices make an approximator for pf+(1−p)f′, we can go...
≤⨆↑f∗≪pf+(1−p)f′⊓ψ∈Φψ(f∗)=(⊓Φ)(pf+(1−p)f′)
And concavity is shown. The proof works equally well for both type signatures.
All that remains is showing that the constant-0 function maps to 0 or higher, which is thankfully, quite easy, compared to concavity and 1-Lipschitzness. This is automatic for the [0,1] case, so we’ll just be dealing with the other type signature.
Remember our result back in Lemma 2 that if you have a constant function cq where q<infx∈Xf(x), then cq≪f. We’ll be using that. Also we’ll be using Lemma 4 that no 1-Lipschitz function mapping c0 to 0 or higher can map a function f any lower than inf(0,infx∈Xf(x)), which is finite, by Lemma 1. So, we have
(⊓Φ)(c0)=⨆↑f≪c0⊓ψ∈Φψ(f)≥⨆↑q<0⊓ψ∈Φψ(cq)≥⨆↑q<0q=0
And we’re done. First equality was just the usual unpacking of infinite inf via Lemma 3, second one was because, if q<0, then cq≪c0, so the function value goes down, and the third inequality was because none of the ψ can map cq below q since q<0. That’s it!
Now, since the space of inframeasures (concave, 1-Lipschitz, weakly-normalized) is closed under directed sup and arbitrary inf in [[X→(−∞,∞]]→[−∞,∞]] and [[X→[0,1]]→[0,1]], we can invoke Theorem 3 to conclude that it makes an ω-BC domain.
Theorem 5: The following properties are all preserved under arbitrary infinima and directed suprema for both R- and [0,1]-inframeasures, if X is compact, second-countable, and LHC.
ψ(c0)=0
ψ(c1)≥1
ψ(c1)≤1
ψ(c1)=1
∀q∈R:ψ(cq)=q (supernormalization)
∀a∈[0,∞):ψ(af)=aψ(f) (homogenity)
∀a∈[0,∞):ψ(c1+a(−f))=1−a+aψ(c1+(−f)) (cohomogenity)
∀q∈R:ψ(f+cq)=ψ(f)+q (C-additivity)
∀q∈R,a∈[0,∞):ψ(af+cq)=aψ(f)+q (crispness)
So, the deal with this proof is that the directed suprema part works equally well for both cases, but the arbitrary infinima part will need to be done once for the R type signature (where it’s marginally easier), and done a second way for the [0,1] case, where it’s considerably more difficult to accomplish.
First, ψ(c0)=0.
(⨆↑Φ)(c0)=⨆↑ψ∈Φψ(c0)=⨆↑ψ∈Φ0=0
Second, ψ(c1)≥1.
(⨆↑Φ)(c1)=⨆↑ψ∈Φψ(c1)≥⨆↑ψ∈Φ1=1
Third, ψ(c1)≤1.
(⨆↑Φ)(c1)=⨆↑ψ∈Φψ(c1)≤⨆↑ψ∈Φ1=1
Fourth, ψ(c1)=1, which is trivial from second and third.
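Fifth through eighth (supernormalization, homogenity, cohomogenity, and C-additivity) can all be handled by the same one-line pattern, pushing the directed supremum through the relevant operation; a sketch, using the corresponding property of each ψ∈Φ:
(⨆↑Φ)(cq)=⨆↑ψ∈Φψ(cq)=⨆↑ψ∈Φq=q
(⨆↑Φ)(af)=⨆↑ψ∈Φψ(af)=⨆↑ψ∈Φaψ(f)=a⨆↑ψ∈Φψ(f)=a(⨆↑Φ)(f)
(⨆↑Φ)(c1+a(−f))=⨆↑ψ∈Φψ(c1+a(−f))=⨆↑ψ∈Φ(1−a+aψ(c1+(−f)))=1−a+a(⨆↑Φ)(c1+(−f))
(⨆↑Φ)(f+cq)=⨆↑ψ∈Φψ(f+cq)=⨆↑ψ∈Φ(ψ(f)+q)=q+(⨆↑Φ)(f)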
Ninth, crispness, which follows from six and eight, since crispness is equivalent to the conjunction of homogenity and C-additivity.
Now for arbitrary infinima in the R case. We’ll note when the same proof works for the [0,1] case, and when we have to loop back later with a more sophisticated argument. Lemma 3 is used throughout.
First, preservation of ψ(c0)=0 under infinima. We already know that, by ψ(c0)≥0 being preserved under arbitrary infinima it can’t be below 0, so that leaves showing that it can’t be above 0.
(⊓Φ)(c0)=⨆↑f≪c0⊓ψ∈Φψ(f)≤⊓ψ∈Φψ(c0)=0
This was by monotonicity, since c0⊒f, and then we just apply that ψ(c0)=0. This proof works for the [0,1] type signature.
Second, preservation of ψ(c1)≥1 under infinima. Since cq≪c1 when q<1 via Lemma 2, we can go
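A sketch of the chain (restricting to q∈(0,1) and writing cq as the mixture qc1+(1−q)c0; the next paragraph goes through the steps):
(⊓Φ)(c1)=⨆↑f≪c1⊓ψ∈Φψ(f)≥⨆↑q<1⊓ψ∈Φψ(cq)=⨆↑q<1⊓ψ∈Φψ(qc1+(1−q)c0)
≥⨆↑q<1⊓ψ∈Φ(qψ(c1)+(1−q)ψ(c0))≥⨆↑q<1(q⊓ψ∈Φψ(c1)+(1−q)⊓ψ∈Φψ(c0))≥⨆↑q<1(q⋅1+(1−q)⋅0)=⨆↑q<1q=1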
And we’re done. In order, that was unpacking the arbitrary inf, the inequality was because we swapped out for a more restricted class of approximators (value decreases), then we reexpressed cq, applied concavity (value decreases), distributed the inf in (value decreases), used that ψ(c0)≥0 always, and our assumption that ψ(c1)≥1 (value decreases), and then we can finish up easily. Same proof works just as well for the [0,1] type signature.
Third, preservation of ψ(c1)≤1 under infinima. Since cq≪c1 when q<1 by Lemma 2, we can go
(⊓Φ)(c1)=⨆↑f≪c1⊓ψ∈Φψ(f)≤⊓ψ∈Φψ(c1)≤1
Due to monotonicity. Same proof works for the [0,1] type signature.
Fourth, preservation of ψ(c1)=1 under infinima. This is trivial since we proved preservation of ≥ and ≤ under infinima already, for both type signatures.
For 5, supernormalization is that ψ(cq)=q for all numbers in (−∞,∞] or [0,1] respectively. We’ll show this is preserved under infinima. We have to prove both directions of this separately. In one direction, we have
q=⊓ψ∈Φψ(cq)≥⨆↑f≪cq⊓ψ∈Φψ(f)=(⊓Φ)(cq)
Which establishes an upper bound. For a lower bound, we have, since p<q implies cp≪cq by Lemma 2,
(⊓Φ)(cq)=⨆↑f≪cq⊓ψ∈Φψ(f)≥⨆↑p<q⊓ψ∈Φψ(cp)=⨆↑p<qp=q
Now, for the [0,1] type signature, the only fiddly bit is the lower-bound argument, because maybe q=0. In that case, you can just apply the fact that ψ(c0)=0 is preserved under infinima, which we proved already.
Number 6: Homogenity, that for any a>0, ψ(af)=aψ(f). We only need to check a>0, because for a=0, it’s saying that ψ(c0)=0, which we’ve already shown is preserved under arbitrary infinima. This will work equally well for both type signatures.
(⊓Φ)(af)=⨆↑f′≪af⊓ψ∈Φψ(f′)
And now, if f′′≪f, then af′′≪af. And similarly, if f′≪af, then (1/a)f′≪f, by Lemma 8. So, we can switch to taking approximants of f itself and scaling them, because every approximant of f, scaled by a, is an approximant of af. And every approximant of af, scaled by 1/a, is an approximant of f. Due to this, we have
⨆↑f′≪af⊓ψ∈Φψ(f′)=⨆↑f′′≪f⊓ψ∈Φψ(af′′)
And then we can go, by homogenity,
=⨆↑f′′≪f⊓ψ∈Φaψ(f′′)=a⨆↑f′′≪f⊓ψ∈Φψ(f′′)=a(⊓Φ)(f)
And so, homogenity is preserved under arbitrary infinima, for both type signatures.
Seventh, preservation of cohomogenity under infinima. The condition is ψ(c1+a(−f))=1−a+aψ(c1+(−f)), for functions f:X→[−∞,∞) s.t.−f:X→(−∞,∞] is lower-semicontinuous, and a>0. This holds for a=0 as well, in both type signatures, because then it’s just ψ(c1)=1, which we’ve already shown is preserved under arbitrary infinima.
This is exceptionally difficult to show in the [0,1] type signature, so we’ll just show it for the R type signature for now and circle back to it at the end.
(⊓Φ)(c1+a(−f))=⨆↑f′≪c1+a(−f)⊓ψ∈Φψ(f′)
Now things get tricky. First, observe that if f′≪c1+a(−f), then
(1/a)f′≪(1/a)(c1+a(−f))=c1/a+(−f)
By appealing to Lemma 8, then 9. And then, adding c1−1/a to both sides, which preserves the approximation by Lemma 9, we’d have
c1−1/a+(1/a)f′≪c1−1/a+c1/a+(−f)=c1+(−f)
And we can also get a converse to this. If f′′≪c1+(−f), then, by Lemma 8,
af′′≪a(c1+(−f))=ca+a(−f)
And then, adding c1−a to both sides with Lemma 9, we’d get
c1−a+af′′≪c1+a(−f)
So, given that we can shift approximants back and forth between c1+a(−f) and c1+(−f) freely, we can get
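A sketch of the resulting chain, using the back-and-forth between approximants just described and cohomogenity of each ψ∈Φ (which gives ψ(c1−a+af′′)=1−a+aψ(f′′) for f′′≪c1+(−f)):
(⊓Φ)(c1+a(−f))=⨆↑f′≪c1+a(−f)⊓ψ∈Φψ(f′)=⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(c1−a+af′′)
=⨆↑f′′≪c1+(−f)⊓ψ∈Φ(1−a+aψ(f′′))=1−a+a⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(f′′)=1−a+a(⊓Φ)(c1+(−f))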
And we’re done, cohomogenity has been proved to be preserved under arbitrary inf for the R type signature. The [0,1] proof is substantially trickier.
Eighth, preservation of C-additivity under infinima. The condition is ψ(f+cq)=q+ψ(f) for all constants q∈R. Again, this is one we’ll have to just prove for the R type signature, and not the [0,1] type signature, as that one takes more work to show. We can start with showing
(⊓Φ)(f+cq)=⨆↑f′≪f+cq⊓ψ∈Φψ(f′)
Now, observe that if f′≪f+cq, then
f′+c−q≪f+cq+c−q=f
By adding a constant and appealing to Lemma 9. And we can also get a converse to this. If f′′≪f, then, by Lemma 9,
f′′+cq≪f+cq
So, given that we can shift approximants back and forth between f+cq and f freely, we can get
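A sketch of the resulting chain, using the back-and-forth between approximants and C-additivity of each ψ∈Φ:
(⊓Φ)(f+cq)=⨆↑f′≪f+cq⊓ψ∈Φψ(f′)=⨆↑f′′≪f⊓ψ∈Φψ(f′′+cq)=⨆↑f′′≪f⊓ψ∈Φ(ψ(f′′)+q)=q+⨆↑f′′≪f⊓ψ∈Φψ(f′′)=q+(⊓Φ)(f)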
And C-additivity is preserved under arbitrary infinima.
That just leaves condition 9, preservation of crispness under infinima. Crispness is, for all q∈R, a∈(0,∞), ψ(cq+af)=q+aψ(f). This is equivalent to the conjunction of C-additivity and homogenity, both of which are preserved under arbitrary infinima, so the same applies to crispness as well. Accordingly, we don’t need to prove this a second time for [0,1], because it will follow for free when we prove that C-additivity is preserved under infinima in the [0,1] case.
Time for the two last difficult cases, showing preservation of cohomogenity under arbitrary infinima for the [0,1] type signature, and doing the same for C-additivity.
For cohomogenity, we want the following to be preserved under infinima.
ψ(c1+a(−f))=1−a+aψ(c1+(−f))
for functions f:X→[0,1] s.t.−f:X→[−1,0] is lower-semicontinuous, and a>0. What we’ll do is just prove it for the case where c1+a(−f) and c1+(−f) are both bounded above 0.
Once we get this case, if one of those two conditions is violated, we can make an arbitrarily small perturbation of our function f so that the exact cohomogenity result applies to the perturbed function. Then 1-Lipschitzness (applied to the arbitrary closeness of the perturbed function and the actual function) shows that the cohomogenity equality comes arbitrarily close to holding; since the equality holds exactly for each perturbation, it also holds for cases where one of the two functions may be 0 in places.
To do this, since the result is trivial for a=1, we’ll prove ψ(c1+a(−f))≥1−a+aψ(c1+(−f)) if a>1. Then we’ll use concavity (preserved under arbitrary infinima) to get the other inequality direction, and show the cohomogenity equality if a>1. Then we’ll use further arguments to show that if the equality holds for a>1, it also holds for a<1.
So, assume a>1, and we’ll show one inequality direction of cohomogenity.
(⊓Φ)(c1+a(−f))=⨆↑f′≪c1+a(−f)⊓ψ∈Φψ(f′)
Observe that if f′′≪c1+(−f), then
af′′≪a(c1+(−f))=ca+a(−f)
by Lemma 8. And then, applying Lemma 10,
c0⊔(c1−a−ϵ+af′′)≪c1−a+ca+a(−f)=c1+a(−f)
Now, the important thing here is that c1+(−f) must be bounded above 1−1/a, otherwise c1+a(−f) wouldn’t be bounded above 0, which we’re assuming. Then, since a possible approximator to c1+(−f) would be c1−1/a+ϵ (for sufficiently small ϵ that still undershoots the lower bound of c1+(−f)), the directed supremum of approximators to c1+(−f) does contain some f′′ s.t. c1−a−ϵ+af′′ is never below 0 and thus is a legit function. Such a suitable f′′ would be c1−1/a+ϵ, because c1−a−ϵ+af′′ would equal c(a−1)ϵ, which is strictly above 0 because a>1 by assumption.
Thus, if we’re taking the directed supremum of f′′≪c1+(−f), we’re guaranteed to have some functions f′′ where, instead of having
c0⊔(c1−a−ϵ+af′′)≪c1+a(−f)
as Lemma 10 says, we can replace it with
c1−a−ϵ+af′′≪c1+a(−f)
Because there are suitably high f′′ s.t. the c0 lower bound never becomes relevant.
Which is one inequality direction of cohomogenity. But it’s proved only for a>1. Assuming a>1, let’s derive the other direction from concavity, known to be preserved under arbitrary infinima.
This is just rewriting things. Since we’re assuming a>1, 1/a is a suitable number to use for mixing things together so we can apply concavity: c1+(−f)=(1/a)(c1+a(−f))+(1−1/a)c1, so
1−a+a(⊓Φ)(c1+(−f))=1−a+a(⊓Φ)((1/a)(c1+a(−f))+(1−1/a)c1)
We apply concavity, yielding
≥1−a+a((1/a)(⊓Φ)(c1+a(−f))+(1−1/a)(⊓Φ)(c1))
Now, cohomogenity for the various ψ implies that ψ(c1)=1. We know that ψ(c1)=1 is preserved under arbitrary inf, we proved it already. So we can rewrite our previous concavity line as
=1−a+a((1/a)(⊓Φ)(c1+a(−f))+1−1/a)
and then simplify to yield
=(⊓Φ)(c1+a(−f))
And so we’ve derived
1−a+a(⊓Φ)(c1+(−f))≥(⊓Φ)(c1+a(−f))
the other direction of the inequality. So cohomogenity holds for a>1, and when the various functions are bounded away from zero. Now, using that it holds for a>1, we’ll prove it for arbitrary a. Let a<1 now. Then we can go
1−a+a(⊓Φ)(c1+(−f))=1−a+a(⊓Φ)(c1+(1/a)(a(−f)))
=1−a+a(1−1/a+(1/a)(⊓Φ)(c1+a(−f)))=(⊓Φ)(c1+a(−f))
Deriving cohomogenity for the a<1 case, because we can apply cohomogenity to 1/a since that’s above 1, and cohomogenity works in that case.
So cohomogenity is preserved overall under arbitrary infinima, as long as the functions are bounded away from 0. Then you can just use a limiting argument with the 1-Lipschitz constant to extend to functions that touch 0.
Time for our last piece, preservation of C-additivity under infinima. The condition is ψ(f+cq)=q+ψ(f) for all constants q∈[−1,1]. The way this will work is we’ll show it for q>0 (the q=0 case is trivial), and then use that to show it for q<0. So, assume q>0
(⊓Φ)(f+cq)=⨆↑f′≪f+cq⊓ψ∈Φψ(f′)
Now, observe that if f′′≪f, then
f′′+cq−ϵ≪f+cq
By adding a constant and appealing to Lemma 10. There are always approximants of this form above zero, because q>0 so we can just take ϵ to be sufficiently small. So, we get
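One way to spell out the chain (a sketch: restrict to the smaller collection of approximants, use 1-Lipschitzness to trade cq−ϵ for cq, then C-additivity of each ψ∈Φ):
(⊓Φ)(f+cq)=⨆↑f′≪f+cq⊓ψ∈Φψ(f′)≥⨆↑f′′≪f⊓ψ∈Φψ(f′′+cq−ϵ)≥⨆↑f′′≪f⊓ψ∈Φ(ψ(f′′+cq)−ϵ)=⨆↑f′′≪f⊓ψ∈Φ(ψ(f′′)+q−ϵ)=q−ϵ+(⊓Φ)(f)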
By going to a smaller collection of approximants, then applying 1-Lipschitzness. Anyways, since ϵ is arbitrarily small, we have
(⊓Φ)(f+cq)≥q+(⊓Φ)(f)
And to show equality, observe that if we had a strict > sign for some f, we’d get a violation of 1-Lipschitzness, since f+cq and f are only q distance apart, but our inframeasure (which is 1-Lipschitz) would have the two functions being more than q apart from each other.
Now that it’s proven for q>0, let’s prove it for q<0. We have
q+(⊓Φ)(f)=q+(⊓Φ)(c−q+cq+f)=q−q+(⊓Φ)(cq+f)
Because we can apply C-additivity to −q. Now it’s proven in full generality.
That’s the last of it, pretty much every nice condition we could ask for is preserved under directed suprema and arbitrary infinima.
Proposition 7: The functor mapping D to □MD and g to g∗ is indeed a functor ωBC→ωBC.
So, first-up, we already know that a space of inframeasures over an ω-BC domain is an ω-BC domain by Corollary 1 and Theorem 4 so the objects work out, we don’t need to worry about this part.
Second, we’ll need to show that g∗(ψ) is indeed a legit inframeasure if ψ is.
Third, we’ll need to show that g∗ is not just a function □MD→□ME, it’s also a continuous function, showing that said maybe-functor indeed maps morphisms D→E to morphisms □MD→□ME.
And finally, we’ll need to show that said maybe-functor preserves identity morphisms and morphism composition in order to show it’s a functor.
Starting on that second part (first part is taken care of), the requirements for
g∗(ψ):[E→(−∞,∞]]→[−∞,∞]
to be an inframeasure are Scott-continuity (monotone, and preserving suprema of directed subsets), 1-Lipschitzness, mapping the constant-0 function up, and concavity.
For monotonicity, if f′⊒f, then, by the definition of pushforward, and ψ being monotone (since it’s an inframeasure), we have
(g∗(ψ))(f′)=ψ(f′∘g)⊒ψ(f∘g)=(g∗(ψ))(f)
And monotonicity of g∗(ψ) is shown.
For preservation of directed suprema, letting F be a directed set of Scott-continous functions E→(−∞,∞], we have
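A sketch of the computation, using only the definition of pushforward and Scott-continuity of ψ (the next sentence walks through the equalities):
(g∗(ψ))(⨆↑F)=ψ((⨆↑F)∘g)=ψ(⨆↑f∈F(f∘g))=⨆↑f∈Fψ(f∘g)=⨆↑f∈F(g∗(ψ))(f)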
And we’re done. The first equality was just reexpressing definitions, then we reexpress the function, use continuity of ψ to shift the directed suprema to the outside, and wrap the definition back up.
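For 1-Lipschitzness, a sketch of the corresponding computation (the second inequality is because d(f∘g,f′∘g) takes a supremum of pointwise distances over a possibly smaller set of pairs of values than d(f,f′) does):
d((g∗(ψ))(f),(g∗(ψ))(f′))=d(ψ(f∘g),ψ(f′∘g))≤d(f∘g,f′∘g)≤d(f,f′)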
And this is shown. We just used definitions of pushforward, then 1-Lipschitzness of ψ and how distances between functions work, then one last inequality and everything works.
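For concavity, a sketch:
p(g∗(ψ))(f)+(1−p)(g∗(ψ))(f′)=pψ(f∘g)+(1−p)ψ(f′∘g)≤ψ(p(f∘g)+(1−p)(f′∘g))=ψ((pf+(1−p)f′)∘g)=(g∗(ψ))(pf+(1−p)f′)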
Where we used concavity of ψ for the inequality, and then definitions.
There’s also mapping the constant-0 function up, which is immediate: (g∗(ψ))(c0)=ψ(c0∘g)=ψ(c0)≥0. So, indeed, g∗(ψ) is always a legit inframeasure. ψ and g were arbitrary, so this shows our functor always maps morphisms between domains to functions between inframeasures over domains. The various other properties like C-additivity and such can be shown if the reader wishes; they’re all pretty simple to show, if you want a more restricted version of this functor.
We do need to show that g∗ is a continuous function to show that said maybe-functor maps morphisms to morphisms. So, we need monotonicity and preservation of directed suprema of inframeasures.
Assume ψ′⊒ψ∈□MD. Then
(g∗(ψ′))(f)=ψ′(f∘g)≥ψ(f∘g)=(g∗(ψ))(f)
And f was arbitrary, so
g∗(ψ′)⊒g∗(ψ)
And thus, g∗ is indeed a monotone function, regardless of g. Time to show preservation of directed suprema. Let Φ be a directed set of inframeasures in □MD. We have
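A sketch of the computation (the next paragraph goes through it):
(g∗(⨆↑Φ))(f)=(⨆↑Φ)(f∘g)=⨆↑ψ∈Φψ(f∘g)=⨆↑ψ∈Φ(g∗(ψ))(f)=(⨆↑ψ∈Φg∗(ψ))(f)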
First equality was using the definition of g∗, second equality was using that f,g were continuous so their composition was continuous, and how directed suprema of continuous functions (the ψ) work. Then there’s just a bit of reexpressing with the definition of g∗, and writing it as a directed supremum of inframeasures. And f was an arbitrary continuous function E→(−∞,∞] (or [0,1]), so we have
g∗(⨆↑Φ)=⨆↑ψ∈Φg∗(ψ)
And thus, g∗ is indeed continuous, since it preserves suprema of directed sets.
Now that we know it maps objects to objects and morphisms to morphisms, we need to check that said maybe-functor preserves identity morphisms and morphism composition.
For identity morphisms, we have
(idD∗(ψ))(f)=ψ(f∘idD)=ψ(f)=(id□MD(ψ))(f)
Now, f was arbitrary, so we have
idD∗(ψ)=id□MD(ψ)
And ψ was arbitrary, so we have idD∗=id□MD. That’s preservation of identity morphisms down.
Now for preservation of composition. Let g:D→E, h:E→F, f:F→(−∞,∞], and ψ:□MD.
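A sketch of the computation:
((h∘g)∗(ψ))(f)=ψ(f∘(h∘g))=ψ((f∘h)∘g)=(g∗(ψ))(f∘h)=(h∗(g∗(ψ)))(f)=((h∗∘g∗)(ψ))(f)
Since f and ψ were arbitrary, (h∘g)∗=h∗∘g∗.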
Therefore, the functor mapping spaces to their spaces of inframeasures, and functions to their pushforwards is indeed a functor.
Proposition 8: The “space of inframeasures” functor is locally continuous.
Inspired by Lemma 5.2.2 and Definition 5.2.3 from Abramsky, we must check local continuity of the functor. In particular, we must show that the function
g↦g∗:[D→E]→[□MD→□ME]
is continuous. This is done by showing monotonicity and preservation of directed suprema. First up is monotonicity. If g′⊒g, then
(g′∗(ψ))(f)=ψ(f∘g′)≥ψ(f∘g)=(g∗(ψ))(f)
That inequality in the middle was, because g′⊒g, then f∘g′⊒f∘g, and ψ is Scott-continuous, and all Scott-continuous functions are monotone. Since f was arbitrary, we have g′∗(ψ)⊒g∗(ψ) and since ψ was arbitrary, we have g′∗⊒g∗, establishing monotonicity.
Time for establishing preservation of directed suprema. Let G be a directed set of functions [D→E]. Then
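A sketch of the chain (continuity of f and of ψ is what lets the directed suprema move around):
((⨆↑G)∗(ψ))(f)=ψ(f∘⨆↑G)=ψ(⨆↑g∈G(f∘g))=⨆↑g∈Gψ(f∘g)=⨆↑g∈G(g∗(ψ))(f)=((⨆↑g∈Gg∗)(ψ))(f)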
We use continuity of functions a few times here, like continuity of ψ to move the directed supremum out of the inside. Now, since f was arbitrary, we have
(⨆↑G)∗(ψ)=(⨆↑g∈Gg∗)(ψ)
And, since ψ was arbitrary, we have
(⨆↑G)∗=⨆↑g∈Gg∗
Which is precisely the continuity of the function which maps a function to its pushforward, and local continuity is established. This can be seen by using the notation □(g) instead of g∗, because then it’s
□(⨆↑G)=⨆↑g∈G□(g)
Which is more clearly preservation of directed suprema of functions.
Proposition 9: The category of ω-BC domains is closed under lifting, +,⊕,×,⊗, function space, strict function space, and bilimit.
We’ll show that given three basic results, we can derive the entire theorem from results in Abramsky and stuff we’ve already proved. Our three basic results we’ll need to show are that:
1: If D is an ω-BC domain, then D⊥ is bounded-complete and has a countable basis.
2: If D,E are ω-BC domains, then D⊕E is bounded-complete and has a countable basis.
3: If D,E are ω-BC domains, then [D→E] has a countable basis.
Assuming these three results work out, we can show the entire theorem, as follows.
For lifting, the table in Summary 3.2.6 indicates that lifting a continuous domain produces a continuous domain, so all that remains is showing bounded-completeness and the existence of a countable basis, to establish that lifting an ω-BC domain produces an ω-BC domain. (Our first thing we need to show)
For coalesced sum ⊕, the table in Summary 3.2.6 indicates that coalesced sum of continuous domains is a continuous domain, so all that remains is showing bounded-completeness and the existence of a countable basis to establish that coalesced sum of ω-BC domains is an ω-BC domain. (Our second thing we need to show)
For sum +, because D+E=D⊥⊕E⊥ and ω-BC domains are closed under lifting and coalesced sum, they’re also closed under sum.
For product ×, Corollary 4.1.6 says that the category BC is cartesian-closed, so we at least know that D×E is a BC-domain. Then, Proposition 3.2.4.2 says that a basis for D×E is the product of the bases for D and E. Both of these are countable, so D×E has a countable basis, and so it’s an ω-BC domain.
For smash product ⊗, D⊗E happens to be a retraction of D×E. Just map all the (d,⊥) and (⊥,e) pairs in D×E to ⊥, and the (d,e) pairs to themselves. This is Scott-continuous, and the corresponding Scott-continuous section D⊗E→D×E is just the obvious injection.
ω-BC domains are closed under product. And, Proposition 4.1.3 says that BC-domains are closed under retracts, so we know that D⊗E is at least a BC-domain. All that remains to check is that D⊗E has a countable basis. This follows by Lemma 3.1.3, that the retraction of the countable basis for D×E is a basis for D⊗E.
For function space, Proposition 4.1.5 says that the category of BC-domains is closed under function space, so we at least know that [D→E] is a BC-domain. Then, it has a countable basis (the third thing we need to show), so it’s an ω-BC domain.
For strict function space, [D→!E] happens to be a retraction of [D→E]. Just map all the f∈[D→E] to the function f⊥ which does the exact same thing but also maps ⊥ to ⊥. This is Scott-continuous, and the corresponding Scott-continuous section [D→!E]→[D→E] is just the obvious injection. ω-BC domains are closed under function space, as we’ve shown. And, Proposition 4.1.3 says that BC-domains are closed under retracts, so we know that [D→!E] is at least a BC-domain. All that remains to check is that [D→!E] has a countable basis. This follows by Lemma 3.1.3, that the retraction of the countable basis for [D→E] is a basis for [D→!E].
For bilimit, Proposition 4.1.6 says that bilimits of BC-domains are BC-domains, and the countability of the basis for the bilimit is taken care of by Theorem 3.3.11.1. And we’re done! As long as we manage to prove the three missing pieces.
Missing piece 1: Show that if D is an ω-BC domain, then D⊥ is bounded-complete and has a countable basis.
Given any two points x,y∈D⊥, one of three cases occurs. First, x and y could both be ⊥. Second, without loss of generality, x=⊥, and y∈D. Third, x,y∈D. Clearly, in the first case, the pairwise supremum exists, it’s just ⊥. And in the second case, the pairwise supremum is just y. In the third case, if x,y∈D, and there’s an upper bound in D⊥, then the same upper bound exists in D, and so we can get a supremum of x and y in D, and then it remains a supremum in D⊥. This establishes bounded-completeness.
As for a countable basis, BD⊥:=BD∪{⊥} is clearly countable. We’ll show that it works as a basis. Given any x∈D⊥,
⨆↑BD⊥x=⨆↑((BD∪{⊥})∩x⇊)=⨆↑((BD∩x⇊)∪({⊥}∩x⇊))
Now, ⊥ approximates all x, so this turns into
=⨆↑((BD∩x⇊)∪{⊥})
And then, we’ve got two possible cases. One is where x=⊥. In this case, x isn’t in the original D, and can’t be in the original basis, so the supremum just turns into ⊥ ie x and we’re done with that case. In the other possible case, x≠⊥, so x is in the original D. Obviously, this supremum is entirely of stuff below x, so the supremum will be equal or less than x. So our new proof target will be that
⨆↑((BD∩x⇊)∪{⊥})⊒x
in order to show equality. So, let’s begin. We have
⨆↑((BD∩x⇊)∪{⊥})⊒⨆↑(BD∩x⇊)=⨆↑BDx=x
Because D is continuous. Since, for any x∈D⊥, we have x=⨆BD⊥x, we’ve shown that BD⊥ is a countable basis for D⊥. That’s one part down.
Missing piece 2: Show that if D,E are ω-BC domains, then D⊕E is bounded-complete and has a countable basis.
Given any two points x,y∈D⊕E, one of two cases occurs: either x and y are both in the same original domain, or they are in different original domains. If there’s an upper bound for x and y, they must be in the same original domain, and then we can appeal to bounded-completeness of D or E (whichever one the two points are in) in order to find a supremum for the two. This establishes bounded-completeness.
As for a countable basis, BD⊕E:=BD∪BE is clearly countable since the two sets we’re unioning are countable. We’ll show that it works as a basis. Given any x∈D⊕E, without loss of generality, we can assume that x∈D, because the same upcoming proof will work if x∈E. Then we have
⨆↑BD⊕Ex=⨆↑((BD∪BE)∩x⇊)=⨆↑((BD∩x⇊)∪(BE∩x⇊))
Now, since x∈D by assumption, the only thing in BE which approximates x is ⊥, so we have
=⨆↑((BD∩x⇊)∪{⊥})
And then, we’ve got two possible cases. One is where x=⊥. In this case, the only thing in BD which approximates x is ⊥ itself, so we get
=⨆↑({⊥}∪{⊥})=⊥=x
and we’re done there. In the second case, x≠⊥. In this case, since ⊥ must be in BD and approximate x, we have
=⨆↑(BD∩x⇊)=⨆↑BDx=x
because D is continuous and we’re done. The same proof works for points in E, so since we have x=⨆BD⊕Ex, we’ve shown that BD⊕E is a countable basis for D⊕E. That’s the second part down.
Missing Piece 3: If D,E are ω-BC domains, then [D→E] has a countable basis.
This will rely on a proof very similar to a stage in our proof of Theorem 2. Our proposed basis for [D→E], given a countable basis for D and E, is all the atomic step functions (d↘e) where d∈BD and e∈BE, and all finite suprema of such which exist. The atomic step functions are, if d′≫d, then (d↘e)(d′)=e, and otherwise (d↘e)(d′)=⊥E. So our task is now to show that f=⨆↑B[D→E]f. Clearly we have one direction of the inequality, that f matches or exceeds the supremum, since everything in there approximates f and thus is less than f. So we’ll focus on the other direction, showing that f⊑⨆↑B[D→E]f. This is doable by showing said result for all d′∈D. So, let d′ be arbitrary in D. We then have
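A sketch of the chain (the next paragraph justifies each step):
f(d′)=f(⨆↑BDd′)=⨆↑d∈BDd′f(d)=⨆↑d∈BDd′⨆↑e∈BEf(d)e=⨆↑d∈BDd′⨆↑e∈BEf(d)(d↘e)(d′)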
This occurs because D is continuous, so d′=⨆↑BDd′. Then, f is a continuous function, so we can move the directed supremum out of the function. Then, E is continuous, so f(d)=⨆↑BEf(d). For the final step, because d∈BDd′, we have d≪d′, so (d↘e)(d′)=e.
Now, we’ll assume something for now, and loop back to prove it later. We’ll assume that if e≪f(d), then (d↘e)≪f. Assuming this works out, then for any choice of d and e from those two directed sets, since e∈BEf(d), we have e≪f(d), so (d↘e)≪f. Accordingly, we can go
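A sketch of the continuation (the inequalities are the ones explained below):
f(d′)=⨆↑d∈BDd′⨆↑e∈BEf(d)(d↘e)(d′)⊑⨆{h(d′):h an atomic step function with h≪f}⊑⨆h∈B[D→E]fh(d′)=(⨆↑B[D→E]f)(d′)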
The first inequality happens because every d,e pair for the first two directed suprema corresponds to an atomic step function which approximates f, but there may be other step functions which approximate f. The second inequality happens because B[D→E]f is composed of atomic step functions which approximate f, but also some other stuff. Then we just reexpress things a bit. Assuming this works out, then since d′ is arbitrary, it’d show the other inequality direction, that f⊑⨆↑B[D→E]f and so, we’d have a countable basis for [D→E], showing it’s an ω-BC domain.
We do, however, need to show our key intermediate result that
e≪f(d)→(d↘e)≪f
First, assume e≪f(d). Then, in order to show that (d↘e)≪f, we need to pick an arbitrary directed set of functions, G, with a supremum above f, and show that there’s a function in that collection which lies above (d↘e). This is pretty easy, actually. Pick a suitable directed set G. We have f⊑⨆↑G, so, we have
e≪f(d)⊑⨆↑g∈Gg(d)
because of our starting assumption that e≪f(d), and G being a directed collection of functions which matches or exceeds f. Now, by the fact that e approximates f(d), and the directedness of the g(d), there’s an element of that which exceeds e, which corresponds to some particular function g′∈G. We have ∃g′∈G:e⊑g′(d)
Now, we just need to show (d↘e)⊑g′ and we’ll have shown that for any directed set of functions with supremum above f, it has a function which exceeds (d↘e), and we’ll be done. Let d′ be arbitrary. We can split into two cases: one where d′≫d, and one where d′≫d fails. In the first case, we have
(d↘e)(d′)=e⊑g′(d)⊑g′(d′)
Because of how the atomic step function was defined, and we showed that e⊑g′(d) already, and also since d′≫d, we have d′⊒d and Scott-continuous functions (like g′) are monotone. For the second case, we have
(d↘e)(d′)=⊥E⊑g′(d′)
And we’re done, we hit all our proof targets, the theorem follows now.
Infra-Domain Proofs 2
Lemma 7: For f,f′,f′′,f′′′∈[X→(−∞,∞] s.t.f′′≪f and f′′′≪f′, and p∈(0,1), and X is compact, LHC, and second-countable, then pf′′+(1−p)f′′′≪pf+(1−p)f′. The same result holds for [X→[0,1]].
This is the only spot where we need that X is LHC and not merely locally compact.
Here’s how it’s going to work. We’re going to use the details of the proof of Theorem 2 to craft two special functions g and g′ of a certain form, which exceed f′′ and f′′′. Then, of course, pf′′+(1−p)f′′′⊑pg+(1−p)g′ as a result. We’ll then show that pg+(1−p)g′ can be written as the supremum of finitely many functions in [X→[−∞,∞]] (or [X→[0,1]]) which all approximate pf+(1−p)f′, so pf+(1−p)f′≫pg+(1−p)g′⊒pf′′+(1−p)f′′′, establishing that pf+(1−p)f′≫pf′′+(1−p)f′′′.
From our process of constructing a basis for the function space back in Theorem 2, we have a fairly explicit method of crafting a directed set of approximating functions for f and f′. The only approximator functions you need to build any f are step functions of the form (U↘q) with q being a finite rational number, and U being selected from the base of X, and q<infx∈K(U)f(x) where K is our compact hull operator. Any function of this form approximates f, and taking arbitrary suprema of them, f itself is produced.
Since functions of this form (and suprema of finite collections of them) make a directed set with a supremum of f or above (f′ or above, respectively), we can isolate a g from the directed set of basis approximators for f, s.t. g⊒f′′, because f′′≪f. And similarly, we can isolate a g′ which approximates f′ s.t. g′⊒f′′′, because f′′′≪f′.
Now, it’s worthwhile to look at the exact form of g and g′. Working just with g (all the same stuff works for g′), it’s a finite supremum of atomic step functions, so
g(x)=supi(Ui↘qi)(x)=supi:x∈Uiqi
and, by how our atomic step functions of the directed set associated with f were built, we know that for all i, qi<infx∈K(Ui)f(x). (remember, the Ui are selected from the base, so we can take the compact hull of them).
There’s a critical split here between the behavior of the [X→[−∞,∞]] type signature, and the [X→[0,1]] type signature.
For the [X→[−∞,∞]] type signature, there’s finitely many i, and everything not in the Ui gets mapped to −∞. Since f′′∈[X→(−∞,∞]], and X is compact, f′′ is bounded below (Lemma 1), so for this finite supremum to exceed f′′, the Ui must cover X.
For the [X→[0,1]] type signature, again there’s finitely many i, but we can’t guarantee that the Ui cover X, because everything not in some Ui gets mapped to 0, and f′′ might not be bounded above 0. So, we’ll add in a “dummy” step function (X↘0). This is just the constant-0 function, which approximates everything, so nothing changes. However, it ensures that X is indeed covered by the open sets of the finite collection of step functions.
So… Given that we have our g,g′, what does the function pg+(1−p)g′ look like? Well, it looks like
pg(x)+(1−p)g′(x)=psupi:x∈Uiqi+(1−p)supj:x∈Ujqj=supi:x∈Uipqi+supj:x∈Uj(1−p)qj
=supi,j:x∈Ui∩Uj(pqi+(1−p)qj)=supi,j(Ui∩Uj↘pqi+(1−p)qj)(x)
We could do this in the [0,1] case from adding in the “dummy” open sets to get the Ui family and Uj family to both cover X.
Anyways, pg+(1−p)g′ equals ⨆i,j(Ui∩Uj↘pqi+(1−p)qj). There’s finitely many i’s and finitely many j’s by how g and g’ were built, so this is a supremum of finitely many atomic step functions. Since pg+(1−p)g′⊒pf′′+(1−p)f′′′, if we can just show that all these atomic step functions are approximators for pf+(1−p)f′ in [X→[−∞,∞]] (or [X→[0,1]]), then pg+(1−p)g′≪pf+(1−p)f′ (supremum of finitely many approximators is an approximator), and we’d show our desired result that pf′′+(1−p)f′′′≪pf+(1−p)f′. So let’s get started on showing that, regardless of i and j,
(Ui∩Uj↘pqi+(1−p)qj)≪pf+(1−p)f′
Now, if Ui∩Uj=∅, then this function maps everything to −∞ or 0, which trivially approximates pf+(1−p)f′. Otherwise, everything in Ui∩Uj gets mapped to pqi+(1−p)qj, and everything not in it gets mapped to −∞ (or 0)
We know by our process of construction for this that qi<infx∈K(Ui)f(x) and qj<infx∈K(Uj)f′(x), so let’s try to work with that. Well, for the [X→[−∞,∞]] case. The [X→[0,1]] case is a bit more delicate since we added those two dummy functions. We have:
pqi+(1−p)qj<pinfx∈K(Ui)f(x)+(1−p)infx∈K(Uj)f(x)
=infx∈K(Ui)pf(x)+infx∈K(Uj)(1−p)f(x)
≤infx∈K(Ui∩Uj)pf(x)+infx∈K(Ui∩Uj)(1−p)f(x)
≤infx∈K(Ui∩Uj)(pf(x)+(1−p)f′(x))=infx∈K(Ui∩Uj)(pf+(1−p)f′)(x)
The first strict inequality was because of what qi and qj were. Then, just move the constants in. For the first non-strict inquality, it’s because of monotonicity for our compact hull operator, Ui∩Uj⊆Ui, so it has a smaller compact hull. Then the next inequality is just grouping the two infs, and we’re finished.
At this point, the usual argument of “fix a directed set of functions ascending up to pf+(1−p)f′, for each patch of K(Ui∩Uj) you can find a function from your directed set that beats the value pqi+(1−p)qj on that spot, this gets you an open cover of K(Ui∩Uj), it’s compact, isolate a finite subcover ie finitely many functions, take an upper bound in your directed set, bam, it beats the step function on a compact superset of its open set so it beats the step function, and the directed set was arbitrary so pf+(1−p)f′ is approximated by the atomic step function” from Theorem 2 kicks in, to show that the atomic step function approximates pf+(1−p)f′. Since this works for arbitrary i,j, and there are finitely many of these, the finite supremum pg+(1−p)g′ approximates pf+(1−p)f′, and dominates pf′′+(1−p)f′′′, and we’re done.
Now for the [X→[0,1]] case. This case works perfectly fine if our i,j pair has qi and pi being above 0. But what if our Ui has qi being above 0, but our j index is picking that (X↘0) “dummy” approximator? Or vice-versa. Or maybe they’re both the “dummy” approximator.
That last case can be disposed of, because (X∩X↘p⋅0+(1−p)⋅0) is just the constant-0 function, which trivially approximates everything. That leaves the case where just one of the step functions isn’t the constant-0 function. Without loss of generality, assume Ui is paired with qi>0, but the other one is the (X↘0) “dummy” approximator. Then our goal is to show that (Ui∩X↘pqi+(1−p)0)≪pf+(1−p)f′. This can be done because, since p∈(0,1), and qi<infx∈K(Ui)f(x), we have
pqi+(1−p)0<pinfx∈K(Ui)f(x)+(1−p)infx∈K(Uj)f(x)
And the rest of the argument works with no problems from there.
Lemma 8: If a∈(0,∞), and f′≪f, then af′≪af, provided af lies in the function space of interest. f,f′∈[X→(−∞,∞]] (or [X→[0,1]]), and X is compact, LHC, and second-countable.
By our constructions in Theorem 2, we can always write any function as the supremum of atomic step functions below it, (U↘q), as long as q<infx∈K(U)f(x). Since f≪f′, we can find finitely many atomic step functions (Ui↘qi) which all approximate f′ s.t. the function x↦supi:x∈Uiqi exceeds f. In the [X→(−∞,∞]] case, since the finite supremum exceeds f, which is bounded below by Lemma 1, the Ui family covers X.
Now, consider the finite collection of atomic step functions (Ui↘aqi). If we can show that all of these approximate af, and the supremum of them is larger than af′, then, since the supremum of finitely many approximators is an approximator, we’ll have shown that af≫⨆i(Ui↘aqi)⊒af′, establishing approximation. So, let’s show those. For arbitrary i, we have
aqi<a⋅infx∈K(Ui)f(x)=infx∈K(Ui)a⋅f(x)=infx∈K(Ui)(af)(x)
So these atomic step functions all approximate af. For exceeding af′, we have, for arbitrary x,
⊔i(Ui↘aqi)(x)=supi:x∈Ui(aqi)=asupi:x∈Uiqi≥a⋅f′(x)=(af′)(x)
And so, our result follows. It’s possible that, for the [X→[0,1]] case, x lies in no Ui, but then the supremum of the empty set would be 0, and since the supremum of the (Ui↘qi) exceeds f′, f′(x)=0 as well, so the same inequality holds.
Lemma 9: If q∈R and f′≪f with f,f′∈[X→(−∞,∞]] with X being compact, LHC, and second-countable, then f′+cq≪f+cq.
We can invoke Theorem 2, to write any function as the supremum of atomic step functions below it, (U↘q′), as long as q′<infx∈K(U)f(x). Since f′≪f, we can find finitely many atomic step functions (Ui↘qi) which all approximate f s.t. the function x↦supi:x∈Uiqi exceeds f′. Also, f′ is bounded below by Lemma 1, so the finite family Ui covers the entire space.
Now, consider the finite collection of atomic step functions (Ui↘qi+q). If we can show that all of these approximate f+cq, and the supremum of them is larger than f′+cq, then, since the supremum of finitely many approximators is an approximator, we’ll have shown that f+cq≫⨆i(Ui↘qi+q)⊒f′+cq, establishing approximation. So, let’s show those. For arbitrary i, we have
qi+q<infx∈K(Ui)f(x)+q=infx∈K(Ui)(f+cq)(x)
So these atomic step functions all approximate f+cq. For exceeding f′+cq, we have, for arbitrary x,
⊔i(Ui↘qi+q)(x)=supi:x∈Ui(qi+q)=supi:x∈Ui(qi)+q≥f′(x)+q=(f+cq)(x)
And so, our result follows.
Lemma 10: If q∈[−1,1] and f′≪f with f,f′∈[X→[0,1]] with X being compact, LHC, and second-countable, then for all ϵ, c0⊔f′+cq−ϵ≪f+cq, as long as f+cq is also bounded in [0,1].
By Theorem 2, we can always write any function as the supremum of atomic step functions below it, (U↘q′), where q′<infx∈K(O)f(x). Since f≪f′, we can find finitely many atomic step functions (Ui↘qi) which all approximate f s.t. the function x↦supi:x∈Uiqi exceeds f′.
Now, consider the following finite finite collection of atomic step functions. (Ui↘max(qi+q−ϵ,0)), along with (X↘max(q−ϵ,0))
If we can show that all of these approximate f+cq, and the supremum of them is larger than c0⊔(f′+cq−ϵ), then, since the supremum of finitely many approximators is an approximator, we’ll have shown that f+cq≫(⨆i(Ui↘max(qi+q,0)))⊔(X↘max(q−ϵ,0))⊒c0⊔(f′+cq−ϵ), establishing approximation. So, let’s show those.
For arbitrary i, if max(0,qi+q−ϵ)>0, then we have
qi+q−ϵ<infx∈K(Ui)f(x)+q=infx∈K(Ui)(f+cq)(x)
So these atomic step functions all approximate f+cq. If max(0,qi+q−ϵ)=0, then the atomic step function turns into c0, which also approximates f′+cq.
For the one extra step function we’re adding in, we have two possible cases. One is where q−ϵ>0, the other one is where max(q−ϵ,0)=0. In such a case, the function turns into c_{0}, which trivially approximates f+cq. However, if q−ϵ>0, then we have
q−ϵ<q+infx∈Xf(x)≤(f+cq)(x′)
For any x′, so this constant strictly undershoots the minimal value of f+cq across the whole space, and thus approximates f+cq by Lemma 2.
So, this finite collection of functions has its supremum being an approximator of f+cq. Now for showing that the supremum beats c0⊔(f′+cq−ϵ). If x lies in some Ui, then
max(supi:x∈Ui(qi+q−ϵ),q−ϵ,0)=max(supi:x∈Ui(qi+q−ϵ),0)
=max(supi:x∈Ui(qi)+q−ϵ,0)≥max(f′(x)+q−ϵ,0)=(c0⊔(f′+cq−ϵ))(x)
And we’ve shown that this supremum of step functions beats our new function. If x lies in none of the Ui, then the original supremum of step functions must have mapped x to 0, and that beats f, so f(x)=0. That’s the only possible thing that could have happened. Then
max(supi:x∈Ui(qi+q−ϵ),q−ϵ,0)=max(0,q−ϵ)=max(0,0+q−ϵ)=(c0⊔(f′+cq−ϵ))(x)
So, this case is taken care of too, and our supremum of step functions beats c0⊔(f′+cq−ϵ). The lemma then finishes up, we have that f′≪f implies that for any ϵ and q s.t.f+cq and f′ and f all lie in [0,1], we have
c0⊔(f′+cq−ϵ)≪f+cq
NOW we can begin proving our theorems!
Theorem 4: The subset of [[X→(−∞,∞]]→[−∞,∞]] (or [[X→[0,1]]→[0,1]]) consisting of inframeasures is an ω-BC domain when X is compact, second-countable, and LHC.
This proof will begin by invoking either Corollary 1 or Proposition 6. Since locally hull-compact implies locally compact, we can invoke these, to get that [[X→(−∞,∞]]→[−∞,∞]], or [[X→[0,1]]→[0,1]] is an ω-BC domain with a top element.
Then, if we can just show that the set of inframeasures is closed under directed suprema and arbitrary nonempty infinima, we can invoke Theorem 3 to get that the space of R-inframeasures (or [0,1]-inframeasures) is an ω-BC domain. This is the tricky part.
For our conventions on working with infinity, we’ll adopt the following. 0 times any number in (−∞,∞] is 0. Infinity times any number in (0,∞] is infinity. Infinity plus any number in (−∞,∞] is infinity. Given two functions f,f′ bounded below, then pf+(1−p)f′ is the function which maps x to pf(x)+(1−p)f′(x), which is always continuous when f,f′ are, due to being monotone, and preserving directed suprema. The distance between points in (−∞,∞], for defining the distance between functions as d(f,f′)=supxd(f(x),f′(x)) is the usual distance when both numbers are finite, 0 when both numbers are ∞ and ∞ when one point is ∞ and the other isn’t.
An inframeasure is a function in [[X→(−∞,∞]]→[−∞,∞]] or [[X→[0,1]]→[0,1]] with the following three properties.
ψ should be 1-Lipschitz, it should map the constant-0 function c0 to 0 or higher, and it should be concave.
The definition of 1-Lipschitzness is that, using our convention for the distance between points in [−∞,∞] and the distance between functions X→(−∞,∞], we have that for any two functions, d(ψ(f),ψ(f′))≤d(f,f′).
The definition of concavity is that, for all p∈[0,1],f,f′∈[X→(−∞,∞]],
pψ(f)+(1−p)ψ(f′)≤ψ(pf+(1−p)f′)
Admittedly, this might produce situations where we’re adding infinity and negative infinity, which we haven’t defined yet, but this case is ruled out by Lemma 3 and Lemma 1. Any function has a finite lower bound by Lemma 1, and Lemma 3 steps in to ensure that they must have a finite expectation value according to ψ. So, we’ll never have to deal with cases where we’re adding negative infinity to something, we’re working entirely in the realm of real numbers plus infinity.
Now, let’s get started on showing that the directed supremum of 1-Lipschitz, concave functions which map 0 upwards is 1-Lipschitz, concave, and maps 0 upwards.
Let’s start with 1-Lipschitzness. In the case that our functions f,f′ have a distance of infinity from each other, 1-Lipschitzness is trivially fulfilled. So we can assume that f,f′ have a finite distance from each other. Observe that it’s impossible that (without loss of generality) (⨆↑Φ)(f) is infinity, while (⨆↑Φ)(f′) is finite. This is because we have
(⨆↑Φ)(f)=⨆↑ψ∈Φψ(f)
So, if that first term is infinite, we can find ψ∈Φ which map f arbitrarily high. And then ψ(f′) would only be a constant away, so the ψ(f′) values would climb arbitrarily high as well, making (⨆↑Φ)(f′) be infinite as well. Then we’d have
d((⨆↑Φ)(f),(⨆↑Φ)(f′))=d(∞,∞)=0≤d(f,f′)
And this would also show 1-Lipschitzness. So now we arrive at our last case where d(f,f′) is finite and the directed suprema are both finite. Then we’d have
d((⨆↑Φ)(f),(⨆↑Φ)(f′))=∣∣(⨆↑Φ)(f)−(⨆↑Φ)(f′)∣∣
=∣∣⨆↑ψ∈Φψ(f)−⨆↑ψ∈Φψ(f′)∣∣≤⨆ψ∈Φ|ψ(f)−ψ(f′)|≤d(f,f′)
and we’re done. That was just unpacking definitions, and then the ≤ was because we can upper-bound the distance between the suprema of the two sets of numbers by the maximum distance between points paired off with each other. Then just apply 1-Lipschitzness at the end.
As for concavity, it’s trivial when p=0 or 1, because then (for p=0 as an example)
p(⨆↑Φ)(f)+(1−p)(⨆↑Φ)(f′)=(⨆↑Φ)(f′)=(⨆↑Φ)(0f+1f′)
So now, let’s start working on the case where p isn’t 0 or 1. We have
p(⨆↑Φ)(f)+(1−p)(⨆↑Φ)(f′)=p⨆↑ψ∈Φψ(f)+(1−p)⨆↑ψ∈Φψ(f′)
=⨆↑ψ∈Φpψ(f)+⨆↑ψ∈Φ(1−p)ψ(f′)=⨆↑ψ∈Φ(pψ(f)+(1−p)ψ(f′))
≤⨆↑ψ∈Φψ(pf+(1−p)f′)=(⨆↑Φ)(pf+(1−p)f′)
First equality is just unpacking definitions. The second equality is because multiplying by the positive constants p and 1−p commutes with the directed suprema: if a directed supremum is unbounded above, scaling by a positive constant and then taking the supremum still gives infinity, and the finite case is immediate. Then, we observe that for any ψ chosen by the first directed supremum and ψ′ chosen by the second, we can take an upper bound of the two to exceed the value. Conversely, any particular ψ can be used for both directed suprema. So we can move the directed suprema to the outside. Then we just apply concavity, and pack back up.
Finally, there’s mapping 0 to above 0. This is easy.
(⨆↑Φ)(c0)=⨆↑ψ∈Φψ(c0)≥⨆↑ψ∈Φ0=0
All these proofs work equally well with both type signatures, so in both type signatures, the set of inframeasures is closed under directed suprema.
Now we’ll show they’re closed under arbitrary nonempty infima. This is the super-hard part. First up, 1-Lipschitzness.
1-Lipschitzness is trivial if d(f,f′)=∞, so we can assume that this value is finite. It’s also trivial if (⊓Φ)(f)=∞=(⊓Φ)(f′), because then the distance is 0. So we can assume that one of these values is finite. Without loss of generality, let’s say that (⊓Φ)(f)≥(⊓Φ)(f′). Then, we can go:
d((⊓Φ)(f),(⊓Φ)(f′))=(⊓Φ)(f)−(⊓Φ)(f′)=⨆↑f′′≪f⊓ψ∈Φψ(f′′)−⨆↑f′′′≪f′⊓ψ∈Φψ(f′′′)
Via Lemma 3. Now, at this point, we’re going to need to invoke either Lemma 5 for the R case, or Lemma 6 for the [0,1] case. We’ll treat the two cases identically by using the notation c⊥⊔(f′′+c−d(f,f′)). In the R case, this is just f′′+c−d(f,f′), because the functions f′′ have a finite lower bound via compactness/Lemma 1, and we’re in the assumed case where d(f,f′)<∞. In the [0,1] case, it’s c0⊔(f′′+c−d(f,f′)), which is a different sort of function. Anyways, since we have f′′≪f, this then means that c⊥⊔(f′′+c−d(f,f′))≪f′ by Lemma 5 or 6 (as the type signature may be). So, swapping out the supremum over f′′′≪f′ for a supremum over f′′≪f of the corresponding c⊥⊔(f′′+c−d(f,f′)) makes that second directed supremum go down in value, so the value of the expression as a whole goes up. So, in both cases, we have
⨆↑f′′≪f⊓ψ∈Φψ(f′′)−⨆↑f′′′≪f′⊓ψ∈Φψ(f′′′)
≤⨆↑f′′≪f⊓ψ∈Φψ(f′′)−⨆↑f′′≪f⊓ψ∈Φψ(c⊥⊔(f′′+c−d(f,f′)))
=∣∣⨆↑f′′≪f⊓ψ∈Φψ(f′′)−⨆↑f′′≪f⊓ψ∈Φψ(c⊥⊔(f′′+c−d(f,f′)))∣∣
≤⨆f′′≪f|⊓ψ∈Φψ(f′′)−⊓ψ∈Φψ(c⊥⊔(f′′+c−d(f,f′)))|
≤⨆f′′≪f⨆ψ∈Φ|ψ(f′′)−ψ(c⊥⊔(f′′+c−d(f,f′)))|
=⨆f′′≪f⨆ψ∈Φd(ψ(f′′),ψ(c⊥⊔(f′′+c−d(f,f′))))
≤⨆f′′≪f⨆ψ∈Φd(f′′,c⊥⊔(f′′+c−d(f,f′)))
The starting inequality was already extensively discussed. Then, the shift to the absolute value is because we know the quantity as a whole is nonnegative. The gap between the two suprema is upper-bounded by the maximum gap between the two terms with the same f′′ chosen. Similarly, the gap between the two infima is upper-bounded by the maximum gap between the two terms with the same ψ chosen. Then we just do a quick conversion to distance, and apply 1-Lipschitzness of all the ψ we’re taking the infimum of. Continuing onwards, we have
=⨆f′′≪fd(f′′,c⊥⊔(f′′+c−d(f,f′)))=⨆f′′≪fsupx∈X|f′′(x)−sup(⊥,f′′(x)−d(f,f′))|
Now, we’ve got two possibilities. The first case is where f′′ has range in (−∞,∞]. In such a case, ⊥=−∞, so the latter term is just f′′(x)−d(f,f′), and the contents of the absolute value come out to exactly d(f,f′). The second case is where f′′ has range in [0,1]. In such a case, the latter term is as close or closer to f′′(x) than f′′(x)−d(f,f′) is, so we can upper-bound the contents of the absolute-value chunk by d(f,f′). In both cases, we have
≤⨆f′′≪fsupx∈X|f′′(x)−(f′′(x)−d(f,f′))|=d(f,f′)
And we’re done, 1-Lipschitzness is preserved under arbitrary infima, for both type signatures.
Now for concavity. Concavity is trivial when p=0 or 1, so let’s assume p∈(0,1), letting us invoke Lemma 7. Let’s show concavity. We have
p(⊓Φ)(f)+(1−p)(⊓Φ)(f′)=p⨆↑f′′≪f⊓ψ∈Φψ(f′′)+(1−p)⨆↑f′′′≪f′⊓ψ∈Φψ(f′′′)
=⨆↑f′′≪f⊓ψ∈Φpψ(f′′)+⨆↑f′′′≪f′⊓ψ∈Φ(1−p)ψ(f′′′)
=⨆↑f′′≪f,f′′′≪f′(⊓ψ∈Φpψ(f′′)+⊓ψ∈Φ(1−p)ψ(f′′′))
≤⨆↑f′′≪f,f′′′≪f′⊓ψ∈Φ(pψ(f′′)+(1−p)ψ(f′′′))
≤⨆↑f′′≪f,f′′′≪f′⊓ψ∈Φψ(pf′′+(1−p)f′′′)
The first equality was just unpacking via Lemma 3. Then we can move the constant in, group into a single big directed supremum, combine the infs (making the inside bigger), then apply concavity for all our ψ∈Φ. Now it gets interesting. Since f′′≪f and f′′′≪f′ and p∈(0,1) we have that pf′′+(1−p)f′′′≪pf+(1−p)f′ by Lemma 7. Since all possible choices make an approximator for pf+(1−p)f′, we can go...
≤⨆↑f∗≪pf+(1−p)f′⊓ψ∈Φψ(f∗)=(⊓Φ)(pf+(1−p)f′)
And concavity is shown. The proof works equally well for both type signatures.
All that remains is showing that the constant-0 function maps to 0 or higher, which is, thankfully, quite easy compared to concavity and 1-Lipschitzness. This is automatic for the [0,1] case, so we’ll just be dealing with the other type signature.
Remember our result back in Lemma 2 that if you have a constant function cq where q<infx∈Xf(x), then cq≪f. We’ll be using that. Also we’ll be using Lemma 4 that no 1-Lipschitz function mapping c0 to 0 or higher can map a function f any lower than inf(0,infx∈Xf(x)), which is finite, by Lemma 1. So, we have
(⊓Φ)(c0)=⨆↑f≪c0⊓ψ∈Φψ(f)≥⨆↑q<0⊓ψ∈Φψ(cq)≥⨆↑q<0q=0
And we’re done. First equality was just the usual unpacking of the infimum via Lemma 3, the second one was because, if q<0, then cq≪c0, so restricting to those approximators makes the value go down, and the third inequality was because none of the ψ can map cq below q since q<0. That’s it!
Now, since the space of inframeasures (concave, 1-Lipschitz, weakly-normalized) is closed under directed sup and arbitrary inf in [[X→(−∞,∞]]→[−∞,∞]] and [[X→[0,1]]→[0,1]], we can invoke Theorem 3 to conclude that it makes an ω-BC domain.
Theorem 5: The following properties are all preserved under arbitrary infima and directed suprema for both R and [0,1]-inframeasures, if X is compact, second-countable, and LHC. (A toy numeric sanity check of these conditions appears right after the list.)
ψ(c0)=0
ψ(c1)≥1
ψ(c1)≤1
ψ(c1)=1
∀q∈R:ψ(cq)=q (supernormalization)
∀a∈[0,∞):ψ(af)=aψ(f) (homogeneity)
∀a∈[0,∞):ψ(c1−a+af)=1−a+aψ(f) (cohomogeneity)
∀q∈R:ψ(f+cq)=ψ(f)+q (C-additivity)
∀q∈R,a∈[0,∞):ψ(af+cq)=aψ(f)+q (crispness)
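As a concrete sanity check, one simple functional that happens to satisfy all nine of these conditions is an infimum of expectations over finitely many probability distributions. Here’s a quick numeric check of a few of the equations, with a 3-point set standing in for X and made-up distributions; it’s an illustration only, not a claim about the general case.

```python
import numpy as np

# Made-up probability distributions over a 3-point stand-in for X; psi is the
# infimum of the corresponding expectations.
mus = np.array([[0.2, 0.3, 0.5],
                [0.6, 0.1, 0.3]])

def psi(f):
    return float(np.min(mus @ np.asarray(f, dtype=float)))

f = np.array([0.1, 0.7, 0.4])
a, q = 2.5, 0.3

print(np.isclose(psi(np.zeros(3)), 0.0))                      # psi(c_0) = 0
print(np.isclose(psi(np.ones(3)), 1.0))                       # psi(c_1) = 1
print(np.isclose(psi(a * f), a * psi(f)))                     # homogeneity
print(np.isclose(psi(f + q), psi(f) + q))                     # C-additivity
print(np.isclose(psi(1 - a + a * f), 1 - a + a * psi(f)))     # cohomogeneity
```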
So, the deal with this proof is that the directed suprema part works equally well for both cases, but the arbitrary infima part will need to be done once for the R type signature (where it’s marginally easier), and a second way for the [0,1] case, where it’s considerably more difficult to accomplish.
First, ψ(c0)=0.
(⨆↑Φ)(c0)=⨆↑ψ∈Φψ(c0)=⨆↑ψ∈Φ0=0
Second, ψ(c1)≥1.
(⨆↑Φ)(c1)=⨆↑ψ∈Φψ(c1)≥⨆↑ψ∈Φ1=1
Third, ψ(c1)≤1.
(⨆↑Φ)(c1)=⨆↑ψ∈Φψ(c1)≤⨆↑ψ∈Φ1=1
Fourth, ψ(c1)=1, which is trivial from second and third.
Fifth, supernormalization.
(⨆↑Φ)(cq)=⨆↑ψ∈Φψ(cq)=⨆↑ψ∈Φq=q
Sixth, homogeneity.
(⨆↑Φ)(af)=⨆↑ψ∈Φψ(af)=⨆↑ψ∈Φaψ(f)=a⨆↑ψ∈Φψ(f)=a(⨆↑Φ)(f)
Seventh, cohomogeneity.
(⨆↑Φ)(c1+a(−f))=⨆↑ψ∈Φψ(c1+a(−f))=⨆↑ψ∈Φ(1−a+aψ(c1+(−f)))
=1−a+a⨆↑ψ∈Φψ(c1+(−f))=1−a+a(⨆↑Φ)(c1+(−f))
Eighth, C-additivity.
(⨆↑Φ)(f+cq)=⨆↑ψ∈Φψ(f+cq)=⨆↑ψ∈Φ(q+ψ(f))=q+⨆↑ψ∈Φψ(f)=q+(⨆↑Φ)(f)
Ninth, crispness, which follows from the sixth and eighth since crispness is equivalent to the conjunction of homogeneity and C-additivity.
Now for arbitrary infima in the R case. We’ll note when the same proof works for the [0,1] case, and when we have to loop back later with a more sophisticated argument. Lemma 3 is used throughout.
First, preservation of ψ(c0)=0 under infima. We already know, by ψ(c0)≥0 being preserved under arbitrary infima, that it can’t be below 0, so that leaves showing that it can’t be above 0.
(⊓Φ)(c0)=⨆↑f≪c0⊓ψ∈Φψ(f)≤⊓ψ∈Φψ(c0)=0
This was by monotonicity, since c0⊒f, and then we just apply that ψ(c0)=0. This proof works for the [0,1] type signature.
Second, preservation of ψ(c1)≥1 under infima. Since cq≪c1 when q<1 via Lemma 2, we can go
(⊓Φ)(c1)=⨆↑f≪c1⊓ψ∈Φψ(f)≥⨆↑q∈[0,1)⊓ψ∈Φψ(cq)=⨆↑q∈[0,1)⊓ψ∈Φψ(qc1+(1−q)c0)
≥⨆↑q∈[0,1)⊓ψ∈Φ(qψ(c1)+(1−q)ψ(c0))≥⨆↑q∈[0,1)(q⊓ψ∈Φψ(c1)+(1−q)⊓ψ∈Φψ(c0))
≥⨆↑q∈[0,1)(q⋅1+(1−q)⋅0)=⨆↑q∈[0,1)q=1
And we’re done. In order, that was unpacking the arbitrary inf, the inequality was because we swapped out for a more restricted class of approximators (value decreases), then we reexpressed cq, applied concavity (value decreases), distributed the inf in (value decreases), used that ψ(c0)≥0 always, and our assumption that ψ(c1)≥1 (value decreases), and then we can finish up easily. Same proof works just as well for the [0,1] type signature.
Third, preservation of ψ(c1)≤1 under infima. Since cq≪c1 when q<1 by Lemma 2, we can go
(⊓Φ)(c1)=⨆↑f≪c1⊓ψ∈Φψ(f)≤⊓ψ∈Φψ(c1)≤1
Due to monotonicity. Same proof works for the [0,1] type signature.
Fourth, preservation of ψ(c1)=1 under infima. This is trivial since we proved preservation of ≥ and ≤ under infima already, for both type signatures.
For 5, supernormalization is that ψ(cq)=q for all q in R or [0,1] respectively. We’ll show this is preserved under infima. We have to prove both directions of this separately. In one direction, we have
q=⊓ψ∈Φψ(cq)≥⨆↑f≪cq⊓ψ∈Φψ(f)=(⊓Φ)(cq)
Which establishes an upper bound. For a lower bound, we have, since p<q implies cp≪cq by Lemma 2,
(⊓Φ)(cq)=⨆↑f≪cq⊓ψ∈Φψ(f)≥⨆↑p<q⊓ψ∈Φψ(cp)=⨆↑p<qp=q
Now, for the [0,1] type signature, the only fiddly bit is the lower-bound argument, because maybe q=0. In that case, you can just apply the fact that ψ(c0)=0 is preserved under infinima, which we proved already.
Number 6: Homogeneity, that for any a>0, ψ(af)=aψ(f). We only need to check a>0, because for a=0, it’s saying that ψ(c0)=0, which we’ve already shown is preserved under arbitrary infima. This will work equally well for both type signatures.
(⊓Φ)(af)=⨆↑f′≪af⊓ψ∈Φψ(f′)
And now, if f′′≪f, then af′′≪af. And similarly, if f′≪af, then (1/a)f′≪f, by Lemma 8. So, we can switch to taking approximants of f itself and scaling them, because every approximant of f, scaled by a, is an approximant of af. And every approximant of af, scaled by 1/a, is an approximant of f. Due to this, we have
⨆↑f′≪af⊓ψ∈Φψ(f′)=⨆↑f′′≪f⊓ψ∈Φψ(af′′)
And then we can go, by homogeneity,
=⨆↑f′′≪f⊓ψ∈Φaψ(f′′)=a⨆↑f′′≪f⊓ψ∈Φψ(f′′)=a(⊓Φ)(f)
And so, homogeneity is preserved under arbitrary infima, for both type signatures.
Seventh, preservation of cohomogeneity under infima. The condition is ψ(c1+a(−f))=1−a+aψ(c1+(−f)), for functions f:X→[−∞,∞) s.t. −f:X→(−∞,∞] is lower-semicontinuous, and a>0. This holds for a=0 as well, in both type signatures, because then it’s just ψ(c1)=1, which we’ve already shown is preserved under arbitrary infima.
This is exceptionally difficult to show in the [0,1] type signature, so we’ll just show it for the R type signature for now and circle back to it at the end.
(⊓Φ)(c1+a(−f))=⨆↑f′≪c1+a(−f)⊓ψ∈Φψ(f′)
Now things get tricky. First, observe that if f′≪c1+a(−f), then
(1/a)f′≪(1/a)(c1+a(−f))=c1/a+(−f)
By appealing to Lemma 8, then 9. And then, adding c1−1/a to both sides, which preserves the approximation by Lemma 9, we’d have
c1−1/a+(1/a)f′≪c1−1/a+c1/a+(−f)=c1+(−f)
And we can also get a converse to this. If f′′≪c1+(−f), then, by Lemma 8,
af′′≪a(c1+(−f))=ca+a(−f)
And then, adding c1−a to both sides with Lemma 9, we’d get
c1−a+af′′≪c1+a(−f)
So, given that we can shift approximants back and forth between c1+a(−f) and c1+(−f) freely, we can get
⨆↑f′≪c1+a(−f)⊓ψ∈Φψ(f′)=⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(c1−a+af′′)
And then, since c1−a+af′′ rewrites as c1+a(f′′+c−1), we can go
=⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(c1+a(f′′+c−1))
=⨆↑f′′≪c1+(−f)⊓ψ∈Φ(1−a+aψ(c1+(f′′+c−1)))
(by cohomogeneity) and then rewrite as
=⨆↑f′′≪c1+(−f)⊓ψ∈Φ(1−a+aψ(f′′))
=1−a+a(⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(f′′))=1−a+a(⊓Φ)(c1+(−f))
And we’re done, cohomogeneity has been proved to be preserved under arbitrary inf for the R type signature. The [0,1] proof is substantially trickier.
Eighth, preservation of C-additivity under infima. The condition is ψ(f+cq)=q+ψ(f) for all constants q∈R. Again, this is one we’ll have to just prove for the R type signature, and not the [0,1] type signature, as that one takes more work to show. We can start by writing
(⊓Φ)(f+cq)=⨆↑f′≪f+cq⊓ψ∈Φψ(f′)
Now, observe that if f′≪f+cq, then
f′+c−q≪f+cq+c−q=f
By adding a constant and appealing to Lemma 9. And we can also get a converse to this. If f′′≪f, then, by Lemma 9,
f′′+cq≪f+cq
So, given that we can shift approximants back and forth between f+cq and f freely, we can get
⨆↑f′≪f+cq⊓ψ∈Φψ(f′)=⨆↑f′′≪f⊓ψ∈Φψ(f′′+cq)=⨆↑f′′≪f⊓ψ∈Φ(q+ψ(f′′))
=q+⨆↑f′′≪f⊓ψ∈Φψ(f′′)=q+(⊓Φ)(f)
And C-additivity is preserved under arbitrary infima.
That just leaves condition 9, preservation of crispness under infima. Crispness is, for all q∈R, a∈(0,∞), ψ(cq+af)=q+aψ(f). This is equivalent to the conjunction of C-additivity and homogeneity, both of which are preserved under arbitrary infima, so the same applies to crispness as well. Accordingly, we don’t need to prove this a second time for [0,1], because it will follow for free when we prove that C-additivity is preserved under infima in the [0,1] case.
Time for the two last difficult cases: showing preservation of cohomogeneity under arbitrary infima for the [0,1] type signature, and doing the same for C-additivity.
For cohomogeneity, we want the following to be preserved under infima.
ψ(c1+a(−f))=1−a+aψ(c1+(−f))
for functions f:X→[0,1] s.t.−f:X→[−1,0] is lower-semicontinuous, and a>0. What we’ll do is just prove it for the case where c1+a(−f) and c1+(−f) are both bounded above 0.
Once we get this case, we can handle the rest: if one of those two conditions is violated, make an arbitrarily small perturbation of f so that the exact cohomogeneity result applies to the perturbed function, and then use 1-Lipschitzness (applied to the arbitrary closeness of the perturbed function and the actual function) to show that the cohomogeneity equality comes arbitrarily close to holding. Since the equality holds exactly for the perturbation, it also holds for cases where one of the two functions may be 0 in places.
To do this, since the result is trivial for a=1, we’ll prove ψ(c1+a(−f))≥1−a+aψ(c1+(−f)) if a>1. Then we’ll use concavity (preserved under arbitrary infima) to get the other inequality direction, and show the cohomogeneity equality if a>1. Then we’ll use further arguments to show that if the equality holds for a>1, it also holds for a<1.
So, assume a>1, and we’ll show one inequality direction of cohomogeneity.
(⊓Φ)(c1+a(−f))=⨆↑f′≪c1+a(−f)⊓ψ∈Φψ(f′)
Observe that if f′′≪c1+(−f), then
af′′≪a(c1+(−f))=ca+a(−f)
by Lemma 8. And then, applying Lemma 10,
c0⊔(c1−a−ϵ+af′′)≪c1−a+ca+a(−f)=c1+a(−f)
Now, the important thing here is that c1+(−f) must be bounded above 1−1/a, otherwise c1+a(−f) wouldn’t be bounded above 0, which we’re assuming. Then, since a possible approximator to c1+(−f) would be c1−1/a+ϵ (for sufficiently small ϵ that still undershoots the lower bound of c1+(−f)), the directed supremum of approximators to c1+(−f) does contain some f′′ s.t. c1−a−ϵ+af′′ is never below 0 and thus is a legit function. Such a suitable f′′ would be c1−1/a+ϵ, because c1−a−ϵ+af′′ would then equal c(a−1)ϵ, which is strictly above 0 because a>1 by assumption.
Thus, if we’re taking the directed supremum of f′′≪c1+(−f), we’re guaranteed to have some functions f′′ where, instead of having
c0⊔(c1−a−ϵ+af′′)≪c1+a(−f)
as Lemma 10 says, we can replace it with
c1−a−ϵ+af′′≪c1+a(−f)
Because there are suitably high f′′ s.t. the c0 lower bound never becomes relevant.
Anyways, after all these details, we can go
⨆↑f′≪c1+a(−f)⊓ψ∈Φψ(f′)≥⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(c1−a−ϵ+af′′)
Because the functions being fed to ψ in the second supremum are a subset of all the stuff that approximates c1+a(−f). Now, we can apply 1-Lipschitzness to get
⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(c1−a−ϵ+af′′)≥⨆↑f′′≪c1+(−f)⊓ψ∈Φ(ψ(c1−a+af′′)−ϵ)
And then rewrite a bit and apply cohomogeneity for the ψ to get
=⨆↑f′′≪c1+(−f)⊓ψ∈Φ(ψ(c1−a(c1−f′′))−ϵ)
=⨆↑f′′≪c1+(−f)⊓ψ∈Φ(1−a+aψ(c1−(c1−f′′))−ϵ)
=⨆↑f′′≪c1+(−f)⊓ψ∈Φ(1−a+aψ(f′′)−ϵ)
=1−a−ϵ+a(⨆↑f′′≪c1+(−f)⊓ψ∈Φψ(f′′))=1−a−ϵ+a(⊓Φ)(c1+(−f))
And then, since we have
(⊓Φ)(c1+a(−f))≥1−a−ϵ+a(⊓Φ)(c1+(−f))
but ϵ can be arbitrarily small, we get
(⊓Φ)(c1+a(−f))≥1−a+a(⊓Φ)(c1+(−f))
Which is one inequality direction of cohomogeneity. But it’s proved only for a>1. Assuming a>1, let’s derive the other direction from concavity, known to be preserved under arbitrary infima.
1−a+a(⊓Φ)(c1+(−f))=1−a+a(⊓Φ)((1/a)(c1+a(−f))+(1−1/a)⋅c1)
This is just rewriting things. Since we’re assuming a>1, 1/a is a suitable number to use for mixing things together so we can apply concavity. We apply concavity, yielding
≥1−a+a((1/a)(⊓Φ)(c1+a(−f))+(1−1/a)(⊓Φ)(c1))
Now, cohomogenity for the various ψ implies that ψ(c1)=1. We know that ψ(c1)=1 is preserved under arbitary inf, we proved it already. So we can rewrite our previous concavity line as
=1−a+a((1/a)(⊓Φ)(c1+a(−f))+1−1/a)
and then simplify to yield
=(⊓Φ)(c1+a(−f))
And so we’ve derived
1−a+a(⊓Φ)(c1+(−f))≥(⊓Φ)(c1+a(−f))
the other direction of the inequality. So cohomogeneity holds for a>1, and when the various functions are bounded away from zero. Now, using that it holds for a>1, we’ll prove it for arbitrary a. Let a<1 now. Then we can go
1−a+a(⊓Φ)(c1+(−f))=1−a+a(⊓Φ)(c1+(1/a)(a(−f)))
=1−a+a(1−1/a+(1/a)(⊓Φ)(c1+a(−f)))=(⊓Φ)(c1+a(−f))
Deriving cohomogeneity for the a<1 case, because we can apply cohomogeneity to 1/a since that’s above 1, and cohomogeneity works in that case.
So cohomogeneity is preserved overall under arbitrary infima, as long as the functions are bounded away from 0. Then you can just use a limiting argument via 1-Lipschitzness to extend to functions that touch 0.
Time for our last piece, preservation of C-additivity under infima. The condition is ψ(f+cq)=q+ψ(f) for all constants q∈[−1,1]. The way this will work is we’ll show it for q>0 (the q=0 case is trivial), and then use that to show it for q<0. So, assume q>0.
(⊓Φ)(f+cq)=⨆↑f′≪f+cq⊓ψ∈Φψ(f′)
Now, observe that if f′′≪f, then
f′′+cq−ϵ≪f+cq
By adding a constant and appealing to Lemma 10. (Approximants of this form are at or above zero, because q>0, so we can just take ϵ≤q; that makes the c0⊔ part of Lemma 10 irrelevant.) So, we get
⨆↑f′≪f+cq⊓ψ∈Φψ(f′)≥⨆↑f′′≪f⊓ψ∈Φψ(f′′+cq−ϵ)≥⨆↑f′′≪f⊓ψ∈Φ(ψ(f′′+cq)−ϵ)
=⨆↑f′′≪f⊓ψ∈Φ(q+ψ(f′′)−ϵ)=q−ϵ+⨆↑f′′≪f⊓ψ∈Φψ(f′′)=q−ϵ+(⊓Φ)(f)
By going to a smaller collection of approximants, then applying 1-Lipschitzness. Anyways, since ϵ is arbitrarily small, we have
(⊓Φ)(f+cq)≥q+(⊓Φ)(f)
And to show equality, observe that if we had a strict > sign for some f, we’d get a violation of 1-Lipschitzness: f+cq and f are only q distance apart, but our inframeasure (which is 1-Lipschitz) would map them to values more than q apart.
Now that it’s proven for q>0, let’s prove it for q<0. We have
q+(⊓Φ)(f)=q+(⊓Φ)(c−q+cq+f)=q−q+(⊓Φ)(cq+f)
Because we can apply C-additivity to −q. Now it’s proven in full generality.
That’s the last of it; pretty much every nice condition we could ask for is preserved under directed suprema and arbitrary infima.
Proposition 7: The functor mapping D to □MD and g to g∗ is indeed a functor ωBC→ωBC.
So, first up, we already know that a space of inframeasures over an ω-BC domain is an ω-BC domain by Corollary 1 and Theorem 4, so the objects work out; we don’t need to worry about this part.
Second, we’ll need to show that g∗(ψ) is indeed a legit inframeasure if ψ is.
Third, we’ll need to show that g∗ is not just a function □MD→□ME, it’s also a continuous function, showing that said maybe-functor indeed maps morphisms D→E to morphisms □MD→□ME.
And finally, we’ll need to show that said maybe-functor preserves identity morphisms and morphism composition in order to show it’s a functor.
Starting on that second part (first part is taken care of), the requirements for
g∗(ψ):[E→(−∞,∞]]→[−∞,∞]
to be an inframeasure are Scott-continuity (monotone, and preserving suprema of directed subsets), 1-Lipschitzness, mapping the constant-0 function up, and concavity.
For monotonicity, if f′⊒f, then, by the definition of pushforward, and ψ being monotone (since it’s an inframeasure), we have
(g∗(ψ))(f′)=ψ(f′∘g)⊒ψ(f∘g)=(g∗(ψ))(f)
And monotonicity of g∗(ψ) is shown.
For preservation of directed suprema, letting F be a directed set of Scott-continous functions E→(−∞,∞], we have
(g∗(ψ))(⨆↑F)=ψ(⨆↑F∘g)=ψ(⨆↑f∈F(f∘g))=⨆↑f∈Fψ(f∘g)=⨆↑f∈F(g∗(ψ))(f)
And we’re done. The first equality was just reexpressing definitions, then we reexpress the function, use continuity of ψ to shift the directed suprema to the outside, and wrap the definition back up.
For 1-Lipschitzness of g∗(ψ), we have
|(g∗(ψ))(f)−(g∗(ψ))(f′)|=|ψ(f∘g)−ψ(f′∘g)|≤supx∈D|f(g(x))−f′(g(x))|
≤supy∈E|f(y)−f′(y)|=d(f,f′)
And this is shown. We just used definitions of pushforward, then 1-Lipschitzness of ψ and how distances between functions work, then one last inequality and everything works.
For mapping 0 above 0, we have
(g∗(ψ))(c0)=ψ(c0∘g)=ψ(c0)≥0
And finally, for concavity,
p(g∗(ψ))(f)+(1−p)(g∗(ψ))(f′)=pψ(f∘g)+(1−p)ψ(f′∘g)
≤ψ(p⋅(f∘g)+(1−p)⋅(f′∘g))=ψ((pf+(1−p)f′)∘g)=(g∗(ψ))(pf+(1−p)f′)
Where we used concavity of ψ for the inequality, and then definitions.
So, indeed, g∗(ψ) is always a legit inframeasure. ψ and g were arbitrary, so this shows that, for any morphism g between domains, g∗ is a function between the corresponding spaces of inframeasures. The various other properties like C-additivity and such can be shown if the reader wishes; they’re all pretty simple to show, if you want a more restricted version of this functor.
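For a concrete picture of what g∗ does, here’s a small sketch. Finite sets stand in for the domains, and ψ is the same sort of made-up infimum-of-expectations functional as in the earlier sketch; the map g and the numbers are invented for illustration.

```python
import numpy as np

# psi lives over a 3-point stand-in for D, as an infimum of expectations over
# two made-up probability distributions (one convenient example of a
# 1-Lipschitz, concave functional mapping c_0 to 0).
mus = np.array([[0.2, 0.3, 0.5],
                [0.6, 0.1, 0.3]])

def psi(f_on_D):
    return float(np.min(mus @ np.asarray(f_on_D, dtype=float)))

# A map g : D -> E, with E a 2-point stand-in; D-point i goes to E-point g[i].
g = [1, 0, 1]

def pushforward(psi, g):
    """(g_*(psi))(f) = psi(f . g): precompose with g, then apply psi."""
    def pushed(f_on_E):
        return psi([f_on_E[e] for e in g])
    return pushed

f_on_E = [0.4, 0.9]
print(pushforward(psi, g)(f_on_E))        # psi applied to f.g = [0.9, 0.4, 0.9]
```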
We do need to show that g∗ is a continuous function to show that said maybe-functor maps morphisms to morphisms. So, we need monotonicity and preservation of directed suprema of inframeasures.
Assume ψ′⊒ψ∈□MD. Then
(g∗(ψ′))(f)=ψ′(f∘g)≥ψ(f∘g)=(g∗(ψ))(f)
And f was arbitrary, so
g∗(ψ′)⊒g∗(ψ)
And thus, g∗ is indeed a monotone function, regardless of g. Time to show preservation of directed suprema. Let Φ be a directed set of inframeasures in □MD. We have
(g∗(⨆↑Φ))(f)=(⨆↑Φ)(f∘g)=⨆↑ψ∈Φψ(f∘g)=⨆↑ψ∈Φ(g∗(ψ))(f)=(⨆↑ψ∈Φg∗(ψ))(f)
First equality was using the definition of g∗, second equality was using that f,g were continuous so their composition was continuous, and how directed suprema of continuous functions (the ψ) work. Then there’s just a bit of reexpressing with the definition of g∗, and writing it as a directed supremum of inframeasures. And f was an arbitrary continuous function E→(−∞,∞] (or E→[0,1]), so we have
g∗(⨆↑Φ)=⨆↑ψ∈Φg∗(ψ)
And thus, g∗ is indeed continuous, since it preserves suprema of directed sets.
Now that we know it maps objects to objects and morphisms to morphisms, we need to check that said maybe-functor preserves identity morphisms and morphism composition.
For identity morphisms, we have
(idD∗(ψ))(f)=ψ(f∘idD)=ψ(f)=(id□MD(ψ))(f)
Now, f was arbitrary, so we have
idD∗(ψ)=id□MD(ψ)
And ψ was arbitrary, so we have idD∗=id□MD. That’s preservation of identity morphisms down.
Now for preservation of composition. Let g:D→E, h:E→F, f:F→(−∞,∞], and ψ∈□MD.
((h∘g)∗(ψ))(f)=ψ(f∘(h∘g))=ψ((f∘h)∘g)=(g∗(ψ))(f∘h)=(h∗(g∗(ψ)))(f)
Since f was arbitrary, we have
(h∘g)∗(ψ)=h∗(g∗(ψ))
And so, since ψ was arbitrary, we have
(h∘g)∗=h∗∘g∗
Therefore, the functor mapping spaces to their spaces of inframeasures, and functions to their pushforwards is indeed a functor.
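The identity and composition laws can also be seen concretely; here’s a tiny self-contained check, again with finite stand-ins and a made-up functional.

```python
import numpy as np

# Made-up functional and finite stand-ins, just to watch the functor laws hold.
mus = np.array([[0.2, 0.3, 0.5],
                [0.6, 0.1, 0.3]])

def psi(f_on_D):
    return float(np.min(mus @ np.asarray(f_on_D, dtype=float)))

def pushforward(phi, g):
    # (g_*(phi))(f) = phi(f . g), with g given as a list of target indices
    return lambda f: phi([f[e] for e in g])

g = [1, 0, 1]                  # D -> E (3-point D, 2-point E)
h = [0, 0]                     # E -> F (1-point F)
h_after_g = [h[e] for e in g]  # the composite D -> F

f_on_F = [0.7]
print(np.isclose(pushforward(psi, h_after_g)(f_on_F),
                 pushforward(pushforward(psi, g), h)(f_on_F)))   # (h.g)_* = h_* . g_*
print(np.isclose(pushforward(psi, [0, 1, 2])([0.1, 0.7, 0.4]),
                 psi([0.1, 0.7, 0.4])))                          # (id_D)_* = id
```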
Proposition 8: The “space of inframeasures” functor is locally continuous.
Inspired by Lemma 5.2.2 and Definition 5.2.3 from Abramsky, we must check local continuity of the functor. In particular, we must show that the function
g↦g∗:[D→E]→[□MD→□ME]
is continuous. This is done by showing monotonicity and preservation of directed suprema. First up is monotonicity. If g′⊒g, then
(g′∗(ψ))(f)=ψ(f∘g′)≥ψ(f∘g)=(g∗(ψ))(f)
That inequality in the middle holds because g′⊒g implies f∘g′⊒f∘g, and ψ, being Scott-continuous, is monotone. Since f was arbitrary, we have g′∗(ψ)⊒g∗(ψ), and since ψ was arbitrary, we have g′∗⊒g∗, establishing monotonicity.
Time for establishing preservation of directed suprema. Let G be a directed set of functions [D→E]. Then
((⨆↑G)∗(ψ))(f)=ψ(f∘(⨆↑G))=ψ(⨆↑g∈G(f∘g))=⨆↑g∈Gψ(f∘g)
=⨆↑g∈G(g∗(ψ))(f)=(⨆↑g∈Gg∗(ψ))(f)=((⨆↑g∈Gg∗)(ψ))(f)
We use continuity of functions a few times here, like continuity of ψ to move the directed supremum out of the inside. Now, since f was arbitrary, we have
(⨆↑G)∗(ψ)=(⨆↑g∈Gg∗)(ψ)
And, since ψ was arbitrary, we have
(⨆↑G)∗=⨆↑g∈Gg∗
Which is precisely the continuity of the function which maps a function to its pushforward, and local continuity is established. This can be seen by using the notation □(g) instead of g∗, because then it’s
□(⨆↑G)=⨆↑g∈G□(g)
Which is more clearly preservation of directed suprema of functions.
Proposition 9: The category of ω-BC domains is closed under lifting, +,⊕,×,⊗, function space, strict function space, and bilimit.
We’ll show that given three basic results, we can derive the entire theorem from results in Abramsky and stuff we’ve already proved. Our three basic results we’ll need to show are that:
1: If D is an ω-BC domain, then D⊥ is bounded-complete and has a countable basis.
2: If D,E are ω-BC domains, then D⊕E is bounded-complete and has a countable basis.
3: If D,E are ω-BC domains, then [D→E] has a countable basis.
Assuming these three results work out, we can show the entire theorem, as follows.
For lifting, the table in Summary 3.2.6 indicates that lifting a continuous domain produces a continuous domain, so all that remains is showing bounded-completeness and the existence of a countable basis, to establish that lifting an ω-BC domain produces an ω-BC domain. (Our first thing we need to show)
For coalesced sum ⊕, the table in Summary 3.2.6 indicates that coalesced sum of continuous domains is a continuous domain, so all that remains is showing bounded-completeness and the existence of a countable basis to establish that coalesced sum of ω-BC domains is an ω-BC domain. (Our second thing we need to show)
For sum +, because D+E=D⊥⊕E⊥ and ω-BC domains are closed under lifting and coalesced sum, they’re also closed under sum.
For product ×, Corollary 4.1.6 says that the category BC is cartesian-closed, so we at least know that D×E is a BC-domain. Then, Proposition 3.2.4.2 says that a basis for D×E is the product of the bases for D and E. Both of these are countable, so D×E has a countable basis, and so it’s an ω-BC domain.
For smash product ⊗, D⊗E happens to be a retract of D×E. Just map all the (d,⊥) and (⊥,e) pairs in D×E to ⊥, and the other (d,e) pairs to themselves. This retraction is Scott-continuous, and the corresponding Scott-continuous section D⊗E→D×E is just the obvious injection.
ω-BC domains are closed under product. And, Proposition 4.1.3 says that BC-domains are closed under retracts, so we know that D⊗E is at least a BC-domain. All that remains to check is that D⊗E has a countable basis. This follows by Lemma 3.1.3, that the retraction of the countable basis for D×E is a basis for D⊗E.
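To make that retraction concrete, here’s a tiny sketch, assuming None stands in for ⊥ and ordinary values stand in for the non-bottom elements; it just checks that retract∘section is the identity on the smash product.

```python
# None stands in for bottom; ordinary values stand in for non-bottom elements.

def retract(pair):
    """D x E -> D (x) E: collapse any pair with a bottom component to bottom,
    and leave the remaining pairs alone."""
    d, e = pair
    return None if d is None or e is None else (d, e)

def section(x):
    """D (x) E -> D x E: the obvious injection, sending bottom to (bottom, bottom)."""
    return (None, None) if x is None else x

# retract . section is the identity on the smash product:
print(retract(section(None)) is None)                      # True
print(retract(section(('d1', 'e2'))) == ('d1', 'e2'))      # True
```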
For function space, Proposition 4.1.5 says that the category of BC-domains is closed under function space, so we at least know that [D→E] is a BC-domain. Then, it has a countable basis (the third thing we need to show), so it’s an ω-BC domain.
For strict function space, [D→!E] happens to be a retract of [D→E]. Just map all the f∈[D→E] to the function f⊥ which does the exact same thing but also maps ⊥ to ⊥. This is Scott-continuous, and the corresponding Scott-continuous section [D→!E]→[D→E] is just the obvious injection. ω-BC domains are closed under function space, as we’ve shown. And, Proposition 4.1.3 says that BC-domains are closed under retracts, so we know that [D→!E] is at least a BC-domain. All that remains to check is that [D→!E] has a countable basis. This follows by Lemma 3.1.3, that the retraction of the countable basis for [D→E] is a basis for [D→!E].
For bilimit, Proposition 4.1.6 says that bilimits of BC-domains are BC-domains, and the countability of the basis for the bilimit is taken care of by Theorem 3.3.11.1. And we’re done! As long as we manage to prove the three missing pieces.
Missing piece 1: If D is an ω-BC domain, then D⊥ is bounded-complete and has a countable basis.
Given any two points x,y∈D⊥, one of three cases occurs. First, x and y could both be ⊥. Second, without loss of generality, x=⊥, and y∈D. Third, x,y∈D. Clearly, in the first case, the pairwise supremum exists, it’s just ⊥. And in the second case, the pairwise supremum is just y. In the third case, if x,y∈D, and there’s an upper bound in D⊥, then the same upper bound exists in D, and so we can get a supremum of x and y in D, and then it remains a supremum in D⊥. This establishes bounded-completeness.
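Here’s a toy sketch of those three cases, assuming a small made-up finite poset D (written as element ↦ its down-set) and None standing in for the freshly added bottom of D⊥.

```python
# A made-up finite poset D with a sup for a and b; None plays the new bottom of the lift.
D = {'botD': {'botD'},
     'a': {'botD', 'a'},
     'b': {'botD', 'b'},
     'ab': {'botD', 'a', 'b', 'ab'}}

def leq_D(x, y):
    return x in D[y]                       # x <= y in D

def sup_in_lift(x, y):
    """Pairwise supremum in the lift, following the three cases in the text."""
    if x is None and y is None:
        return None                        # both bottom: the sup is bottom
    if x is None:
        return y                           # one bottom: the sup is the other point
    if y is None:
        return x
    # both in D: any upper bound lives in D, so take the sup there
    upper = [z for z in D if leq_D(x, z) and leq_D(y, z)]
    return min(upper, key=lambda z: len(D[z]))   # the least upper bound, when it exists

print(sup_in_lift(None, 'a'))              # a
print(sup_in_lift('a', 'b'))               # ab
```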
As for a countable basis, BD⊥:=BD∪{⊥} is clearly countable. We’ll show that it works as a basis. Given any x∈D⊥,
⨆↑BD⊥x=⨆↑((BD∪{⊥})∩x⇊)=⨆↑((BD∩x⇊)∪({⊥}∩x⇊))
Now, ⊥ approximates all x, so this turns into
=⨆↑((BD∩x⇊)∪{⊥})
And then, we’ve got two possible cases. One is where x=⊥. In this case, x isn’t in the original D, and can’t be in the original basis, so the supremum just turns into ⊥, i.e. x, and we’re done with that case. In the other possible case, x≠⊥, so x is in the original D. Obviously, this supremum is entirely of stuff below x, so the supremum will be equal to or less than x. So our new proof target will be that
⨆↑((BD∩x⇊)∪{⊥})⊒x
in order to show equality. So, let’s begin. We have
⨆↑((BD∩x⇊)∪{⊥})⊒⨆↑(BD∩x⇊)=⨆↑BDx=x
Because D is continuous. Since, for any x∈D⊥, we have x=⨆BD⊥x, we’ve shown that BD⊥ is a countable basis for D⊥. That’s one part down.
Missing piece 2: If D,E are ω-BC domains, then D⊕E is bounded-complete and has a countable basis.
Given any two points x,y∈D⊕E, one of two cases occurs: either x and y are both in the same original domain, or they’re in different original domains. If there’s an upper bound for x and y, they must be in the same original domain, and then we can appeal to bounded-completeness of D or E (whichever one the two points are in) in order to find a supremum for the two. This establishes bounded-completeness.
As for a countable basis, BD⊕E:=BD∪BE is clearly countable since the two sets we’re unioning are countable. We’ll show that it works as a basis. Given any x∈D⊕E, without loss of generality, we can assume that x∈D, because the same upcoming proof will work if x∈E. Then we have
⨆↑BD⊕Ex=⨆↑((BD∪BE)∩x⇊)=⨆↑((BD∩x⇊)∪(BE∩x⇊))
Now, since x∈D by assumption, the only thing in BE which approximates x is ⊥, so we have
=⨆↑((BD∩x⇊)∪{⊥})
And then, we’ve got two possible cases. One is where x=⊥. In this case, the only thing in BD which approximates x is ⊥ itself, so we get
=⨆↑({⊥}∪{⊥})=⊥=x
and we’re done there. In the second case, x≠⊥. Since ⊥ approximates x and lies below everything, tossing it out of the union doesn’t change the supremum (the remaining set is nonempty, because its supremum is x≠⊥), so we have
=⨆↑(BD∩x⇊)=⨆↑BDx=x
because D is continuous and we’re done. The same proof works for points in E, so since we have x=⨆BD⊕Ex, we’ve shown that BD⊕E is a countable basis for D⊕E. That’s the second part down.
Missing Piece 3: If D,E are ω-BC domains, then [D→E] has a countable basis.
This will rely on a proof very similar to a stage in our proof of Theorem 2. Our proposed basis for [D→E], given a countable basis for D and E, is all the atomic step functions (d↘e) where d∈BD and e∈BE, and all finite suprema of such which exist. The atomic step functions are defined by: if d′≫d, then (d↘e)(d′)=e, and otherwise (d↘e)(d′)=⊥E. So our task is now to show that f=⨆↑B[D→E]f. Clearly we have one direction of the inequality, that f matches or exceeds the supremum, since everything in there approximates f and thus is less than f. So we’ll focus on the other direction, showing that f⊑⨆↑B[D→E]f. This is doable by showing said result pointwise, at every d′∈D. So, let d′ be arbitrary in D. We then have
f(d′)=f(⨆↑BDd′)=⨆↑d∈BDd′f(d)=⨆↑d∈BDd′⨆↑e∈BEf(d)e=⨆↑d∈BDd′⨆↑e∈BEf(d)(d↘e)(d′)
This occurs because D is continuous, so d′=⨆↑BDd′. Then, f is a continuous function, so we can move the directed supremum out of the function. Then, E is continuous, so f(d)=⨆↑BEf(d). For the final step, because d∈BDd′, we have d≪d′, so (d↘e)(d′)=e.
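Before moving on, here’s a toy illustration of these atomic step functions, with small finite posets standing in for D and E (in a finite poset the way-below relation coincides with the order, so d′≫d below just means d⊑d′); the posets and the function f are made up.

```python
# Finite posets as dicts: element -> set of elements at or below it.
D = {'bot': {'bot'}, 'a': {'bot', 'a'}, 'b': {'bot', 'b'}}
E = {'bot': {'bot'}, 'top': {'bot', 'top'}}

def leq(P, x, y):
    return x in P[y]                        # x is below-or-equal-to y in P

def step(d, e):
    """The atomic step function (d ↘ e): send d' to e when d' is above d,
    and to E's bottom otherwise (way-below is just <= in a finite poset)."""
    return lambda d_prime: e if leq(D, d, d_prime) else 'bot'

f = {'bot': 'bot', 'a': 'top', 'b': 'top'}  # a monotone map D -> E

g = step('a', 'top')
# Since 'top' is below-or-equal-to f('a'), the step function sits below f:
print(all(leq(E, g(d), f[d]) for d in D))   # True
```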
Now, we’ll assume something for now, and loop back to prove it later. We’ll assume that if e≪f(d), then (d↘e)≪f. Assuming this works out, then for any choice of d and e from those two directed sets, since e∈BEf(d), we have e≪f(d), so (d↘e)≪f. Accordingly, we can go
⨆↑d∈BDd′⨆↑e∈BEf(d)(d↘e)(d′)⊑⨆(d↘e)≪f:(d,e)∈BD×BE(d↘e)(d′)⊑⨆↑g∈B[D→E]fg(d′)
=(⨆↑B[D→E]f)(d′)
The first inequality happens because every d,e pair for the first two directed suprema corresponds to an atomic step function which approximates f, but there may be other step functions which approximate f. The second inequality happens because B[D→E]f is composed of atomic step functions which approximate f, but also some other stuff. Then we just reexpress things a bit. Assuming this works out, then since d′ is arbitrary, it’d show the other inequality direction, that f⊑⨆↑B[D→E]f and so, we’d have a countable basis for [D→E], showing it’s an ω-BC domain.
We do, however, need to show our key intermediate result that
e≪f(d)→(d↘e)≪f
First, assume e≪f(d). Then, in order to show that (d↘e)≪f, we need to pick an arbitrary directed set of functions, G, with a supremum above f, and show that there’s a function in that collection which lies above (d↘e). This is pretty easy, actually. Pick a suitable directed set G. We have f⊑⨆↑G, so, we have
e≪f(d)⊑⨆↑g∈Gg(d)
because of our starting assumption that e≪f(d), and G being a directed collection of functions whose supremum matches or exceeds f. Now, by the fact that e approximates f(d), and the directedness of the set of g(d) values, there’s an element of that set which exceeds e, corresponding to some particular function g′∈G. So we have ∃g′∈G:e⊑g′(d)
Now, we just need to show (d↘e)⊑g′ and we’ll have shown that for any directed set of functions with supremum above f, it has a function which exceeds (d↘e), and we’ll be done. Let d′ be arbitrary. We can split into two cases: one where d′≫d, and one where d′≫d fails. In the first case, we have
(d↘e)(d′)=e⊑g′(d)⊑g′(d′)
Because of how the atomic step function was defined, and we showed that e⊑g′(d) already, and also since d′≫d, we have d′⊒d and Scott-continuous functions (like g′) are monotone. For the second case, we have
(d↘e)(d′)=⊥E⊑g′(d′)
And we’re done; we hit all our proof targets, and the proposition follows.