A quick note here is that whenever a proposition or theorem or something is cited with a complicated-looking number, like, not Proposition 3, but “Proposition 3.4.2.7”, it’s being cited from this book.
Proposition 2: If X is a metrizable space, then the pointwise limit of a sequence of lower-bounded lower-semicontinuous functions fn:X→(−∞,∞] with fn+1≥fn is lower-bounded and lower-semicontinuous.
Proof: Lower-boundedness of f∗ (the pointwise limit of the fn) is trivial because f∗≥f1, and f1 is lower-bounded. For lower-semicontinuity, let xi limit to x, and let n be some arbitrary natural number. We have
liminfi→∞f∗(xi)≥liminfi→∞fn(xi)≥fn(x)
Because f∗≥fn and fn is lower-semicontinuous. And, since this works regardless of n, we have
liminfi→∞f∗(xi)≥limn→∞fn(x)=f∗(x)
Since f∗ is the pointwise limit of the fn, showing that f∗ is lower-semicontinuous.
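As a quick numeric sanity check of this proposition (a sketch with a hypothetical example, not part of the proof): take fn(x)=min(max(nx,0),1). This is an increasing sequence of bounded continuous functions whose pointwise limit is the indicator of (0,∞), which is lower-bounded and lower-semicontinuous, but discontinuous at 0.

```python
# Sketch: Proposition 2 on a concrete (hypothetical) example.
# f_n(x) = clip(n*x, 0, 1) is continuous, lower-bounded, and increasing in n.

def f_n(n, x):
    return min(max(n * x, 0.0), 1.0)

def f_star(x):
    # pointwise limit of the f_n: the indicator of (0, inf), which is
    # lower-semicontinuous but not continuous at 0
    return 1.0 if x > 0 else 0.0

grid = [i / 100 - 1.0 for i in range(201)]  # grid on [-1, 1]

# f_{n+1} >= f_n pointwise
assert all(f_n(n + 1, x) >= f_n(n, x) for n in range(1, 10) for x in grid)

# lower-semicontinuity at the jump: for x_i -> 0, liminf f*(x_i) >= f*(0)
xi = [1.0 / i for i in range(1, 200)]
assert min(f_star(x) for x in xi) >= f_star(0.0)
```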
Proposition 3: If X is a metrizable space, then given any lower-bounded lower-semicontinuous function f∗:X→(−∞,∞], there exists a sequence of bounded continuous functions fn:X→R s.t. fn+1≥fn and the fn limit pointwise to f∗.
First, fix X with an arbitrary metric, and stipulate that f∗ is lower-bounded and lower-semicontinuous. We’ll be defining some helper functions for this proof. Our first one is the gn family, defined as follows for n>0:
gn(x):=inf{f∗(x′)|d(x,x′)<1/n}
Ie, we take a point, and map it to the worst-case value according to f∗ produced in an open ball of size 1/n around said point. Obviously, regardless of n, gn≤f∗.
Now we must show that gn, regardless of n, is upper-semicontinuous, for later use. Let ϵ>0 be arbitrary, let x be some arbitrary point in X, and let xm be some sequence limiting to x. Our task is to show limsupm→∞gn(xm)≤gn(x); that’s upper-semicontinuity.
First, we observe that there must exist an x′ s.t. d(x,x′)<1/n, and f∗(x′)≤gn(x)+ϵ. Why? Well, gn(x) is “what’s the worst-case value of f∗ in the 1/n-sized open ball”, and said worst-case value may not be exactly attained, but we can find points in said open ball which attain the infimum or get arbitrarily close to it.
Now, since xm converges to x, there must be some m∗ where, forever afterward, d(xm,x)<1/n−d(x,x′) (the right-hand side is positive from how we selected x′). For any m in that tail, we can rearrange this and get that d(xm,x)+d(x,x′)<1/n, so by the triangle inequality, we have that d(xm,x′)<1/n. Ie, x′ is in the open ball around xm for the tail of the convergent sequence.
Now, since x′ is in all those open balls, the definition of gn means that for our tail of sufficiently late xm, gn(xm)≤f∗(x′), and we already know that f∗(x′)≤gn(x)+ϵ. Putting these together, for all sufficiently late xm, gn(xm)≤gn(x)+ϵ. ϵ was arbitrary, so we have
limsupm→∞gn(xm)≤gn(x)
(the limit might not exist, but limsup always does). And now we know that our gn auxiliary functions are upper-semicontinuous.
Next up: We must show that, if fn is some continuous function, sup(fn,gn) is upper-semicontinuous. For this, letting xm limit to x, we do:

limsupm→∞sup(fn,gn)(xm)≤max(limsupm→∞fn(xm),limsupm→∞gn(xm))=max(fn(x),limsupm→∞gn(xm))≤max(fn(x),gn(x))=sup(fn,gn)(x)

For this, we distributed the limsup inside the sup for the first inequality, used that fn is continuous for the equality, and then used that gn is upper-semicontinuous for the final inequality.
Now, here’s what we’re going to do. We’re going to inductively define the continuous fn as follows. Abbreviate inf(0,infx′∈Xf∗(x′)) (which is finite) as f∗min, and then consider the set-valued functions, for n>0, Φn:X→P(R) inductively defined as follows:

Φ1(x):=[f∗min,min(f∗(x),f∗min+1)]

Φn+1(x):=[min(max(fn(x),gn(x)),f∗min+n),min(f∗(x),f∗min+n+1)]

where fn is a continuous selection of Φn.
The big unknown is whether we can even find a continuous selection from all these set-valued functions to have the induction work. However, assuming that part works out, we can easily clean up the rest of the proof. So let’s begin.
We’ll be applying the Michael Selection Theorem. R is a Banach space, and X being metrizable implies that X is paracompact, so those conditions are taken care of. We need to show that, for all n and x, Φn(x) is nonempty, closed, and convex, and Φn is lower-hemicontinuous. This will be done by induction: we’ll show it for n=1, establish some inequalities, and use induction to keep going.
For n=1, remember,
Φ1(x)=[f∗min,min(f∗(x),f∗min+1)]
This is obviously nonempty, closed, and convex for all x. That just leaves establishing lower-hemicontinuity. Lower-hemicontinuity is: if y∈Φ1(x), and there’s a sequence xm limiting to x, there’s a (sub)sequence ym∈Φ1(xm) that limits to y. We can let our ym:=min(f∗(xm),f∗min+1,y). Since y∈Φ1(x), y≥f∗min, so ym≥f∗min too, and ym∈Φ1(xm). Note also that ym≤y always, so limsupm→∞ym≤y. For the liminf,

liminfm→∞ym=min(liminfm→∞f∗(xm),f∗min+1,y)=min(liminfm→∞f∗(xm),y)≥min(f∗(x),y)=y

The first equality was distributing the liminf in and neglecting it for constants, the second is because y≤f∗min+1 (as y∈Φ1(x)), the inequality is from lower-semicontinuity of f∗, and the final equality is because y∈Φ1(x) so y≤f∗(x).
Since the limsup of this sequence is below y and the liminf is above y, we have:
limm→∞ym=limm→∞min(f∗(xm),f∗min+1,y)=y
And bam, lower-hemicontinuity is proved for the base case, we can get off the ground with the Michael selection theorem picking our f1.
Now for the induction step. The stuff we’ll be assuming for our induction step is that fn≤f∗ (we have this from the base case), and that fn is continuous (we have this from the base case). Now, the set in question is

Φn+1(x)=[min(max(fn(x),gn(x)),f∗min+n),min(f∗(x),f∗min+n+1)]

Because fn≤f∗ by induction assumption, and gn≤f∗, because the definition of gn was
gn(x):=inf{f∗(x′)|d(x,x′)<1/n}
we have nonemptiness, and closure and convexity are obvious, so we just need to verify lower-hemicontinuity. Lower-hemicontinuity is: if y∈Φn+1(x), and there’s a sequence xm limiting to x, there’s a (sub)sequence ym∈Φn+1(xm) that limits to y. We can define

ym:=max(min(f∗(xm),f∗min+n+1,y),min(max(fn(xm),gn(xm)),f∗min+n))

This is at least the lower bound of Φn+1(xm) (thanks to the outer max), and at most the upper bound (both components of the max lie at or below min(f∗(xm),f∗min+n+1)). So it’s an appropriate sequence of ym to pick. I will reiterate that, since fn is continuous (induction assumption) and we already established that all the gn are upper-semicontinuous, and the sup of a continuous function and an upper-semicontinuous function is upper-semicontinuous as I’ve shown, we have upper-semicontinuity for sup(fn,gn). Remember that. Now, we can go:

limsupm→∞ym=max(limsupm→∞min(f∗(xm),f∗min+n+1,y),limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n))≤max(y,limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n))

Up to this point, what we did is move the limsup into the max for the equality, and then swapped our min of three components for one of the components (the y), producing a higher value.
Now, we can split into two exhaustive cases. Our first possible case is one where f∗min+n≤max(fn(x),gn(x)). In such a case, we can go:

limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n)≤f∗min+n=min(max(fn(x),gn(x)),f∗min+n)≤y

This occurred because we swapped out our min of two components for one of the components, the constant lower bound, producing a higher value. Then, the equality was because, by assumption, f∗min+n≤max(fn(x),gn(x)), so we can reintroduce the second term. Then, we finally observe that because y∈Φn+1(x), and min(max(fn(x),gn(x)),f∗min+n) is the lower bound on said set, y is the larger of the two.
Our second possible case is one where f∗min+n≥max(fn(x),gn(x)). In such a case, we can go:

limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n)≤limsupm→∞max(fn(xm),gn(xm))≤max(fn(x),gn(x))=min(max(fn(x),gn(x)),f∗min+n)≤y

The first inequality was we swapped out our min of two components for one of the components, producing a higher value regardless of the m. The second inequality was because sup(fn,gn) is upper-semicontinuous, as established before. The equality then is just because we’re in the case where f∗min+n≥max(fn(x),gn(x)). Finally, we just observe that the latter term is the lower bound for Φn+1(x), which is the set that y lies in, so y is greater. These cases were exhaustive, so we have a net result that limsupm→∞ym≤max(y,y)=y. Now, for the liminf direction:

liminfm→∞ym≥liminfm→∞min(f∗(xm),f∗min+n+1,y)=min(liminfm→∞f∗(xm),f∗min+n+1,y)≥min(f∗(x),f∗min+n+1,y)=min(min(f∗(x),f∗min+n+1),y)=y
The first inequality was swapping out the max for just one of the components, as that reduces the value of your liminf. Then, for the equality, we just distribute the liminf in, and neglect it on the constants. The next inequality after that is lower-semicontinuity of f∗, then we just regroup the mins in a different way. Finally, we observe that min(f∗(x),f∗min+n+1) is the upper bound on Φn+1(x), which y lies in, so y is lower and takes over the min.
Since the limsup of this sequence is below y and the liminf is above y, we have:
limm→∞ym=y
And we’ve shown lower-hemicontinuity, and the Michael selection theorem takes over from there, yielding a continuous fn+1. Now, let’s verify the following facts in order to show A: that the induction proceeds all the way up, and B: that the induction produces a sequence of functions fn fulfilling the following properties.
First: fn+1≤f∗. This is doable by fn+1≤inf(f∗,f∗min+n+1)≤f∗, from our Φn+1 upper bound. It holds for our base case as well, since the upper bound there was inf(f∗,f∗min+1).
Second: fn+1≤f∗min+n+1, which is doable by the exact same sort of argument as our first property.
Third: fn+1≥fn. This is doable by
fn+1≥inf(sup(gn,fn),f∗min+n)≥inf(fn,f∗min+n)=fn
Where the inequality uses that fn+1 is a selection of Φn+1, and the lower bound on Φn+1. Then we just swap out some contents for lower contents, and use our second fact which inducts up the tower to establish that f∗min+n is an upper bound on the function fn. This is our one missing piece to show that our induction proceeds through all the n.
Fourth: fn+1≥inf(gn,f∗min+n). Same argument as before, except we swap sup(gn,fn) out for gn instead.
At this point, we’ve built a sequence of continuous functions fn where you always have fn+1≥fn, and fn+1≥inf(gn,f∗min+n) and fn≤f∗, regardless of n.
Our last step is to show that gn limits to f∗ pointwise, ie, regardless of x, limn→∞gn(x)=f∗(x).
We recall that the definition of gn was gn(x):=inf{f∗(x′)|d(x,x′)<1/n}
Obviously, as n goes up, gn(x) goes up: it’s monotonically increasing, since we’re taking an infimum over fewer and fewer points each time (the open ball is shrinking). So the limit exists. And we also know that all the gn lie below f∗. So, to show that gn limits to f∗ pointwise, we just have to rule out the case where limn→∞gn(x)<f∗(x).
Assume this was the case. Then, for each n we can pick a point xn less than 1/n distance away from x that comes extremely close to attaining the infimum value. These xn get closer and closer to x; they limit to it. So, we’d have:
limn→∞gn(x)=limn→∞f∗(xn)<f∗(x)
But the definition of lower-semicontinuity is that
liminfn→∞f∗(xn)≥f∗(x)
So we have a contradiction, and this can’t be the case. Thus, the gn limit to f∗ pointwise.
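For a concrete feel for this convergence, here’s a discretized sketch on X=[0,1] with a hypothetical lsc f∗, approximating the infimum over the open ball by a minimum over a grid:

```python
GRID = [i / 2000 for i in range(2001)]  # grid approximation of X = [0, 1]

def f_star(x):
    # hypothetical lower-bounded lsc function with a jump at 1/2
    return 0.0 if x <= 0.5 else 2.0

def g_n(n, x):
    # g_n(x) = inf of f* over the open 1/n-ball around x (grid-approximated)
    return min(f_star(xp) for xp in GRID if abs(xp - x) < 1.0 / n)

x = 0.75
vals = [g_n(n, x) for n in range(1, 9)]
assert all(a <= b for a, b in zip(vals, vals[1:]))  # monotonically increasing in n
assert all(v <= f_star(x) for v in vals)            # g_n <= f* everywhere
assert vals[-1] == f_star(x)                        # already equal to f*(x) here
```

Once n is large enough that the 1/n-ball around 0.75 no longer reaches across the jump at 1/2, the infimum snaps up to f∗(0.75).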
Now, we will attempt to show that the fn limit to f∗ pointwise. Let x be arbitrary, and we have

f∗(x)≥limn→∞fn(x)=limn→∞fn+1(x)≥limn→∞min(gn(x),f∗min+n)

The first inequality was f∗≥fn for all n, the equality was just a reindexing, and the second inequality was fn+1≥inf(gn,f∗min+n) (from our induction). Now, we can split into two cases. In case 1, gn(x) diverges to ∞. Since f∗min+n diverges as well, we have
limn→∞min(gn(x),f∗min+n)=limn→∞gn(x)
In case 2, gn(x) doesn’t diverge, but since f∗min+n does, we have
limn→∞min(gn(x),f∗min+n)=limn→∞gn(x)
So, in either case, we can continue by
=limn→∞gn(x)=f∗(x)
Because the gn limit to f∗ pointwise. Wait, we just showed f∗(x)≥f∗(x) no matter what. So all the inequalities must be equalities, and we have
f∗(x)=limn→∞fn(x)
And now we have our result, that any lower-bounded lower-semicontinuous function f∗ can be written as the pointwise limit of an increasing sequence of bounded continuous functions (since all the fn were bounded above by some constant and bounded below by f∗min.)
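The proof above builds the fn via the Michael selection theorem. As a purely numeric illustration of the statement (not of the proof), here’s a sketch using a different, classic construction, the Lipschitz regularization fn(x):=infx′(f∗(x′)+n⋅d(x,x′)), which also produces an increasing sequence of continuous functions converging pointwise to a lower-bounded lsc function. The example f∗ and the grid are hypothetical.

```python
# Sketch: Lipschitz (Pasch-Hausdorff) regularization of a hypothetical lsc f*.
# f_n is n-Lipschitz (hence continuous), increasing in n, and f_n <= f*.

GRID = [i / 1000 for i in range(1001)]  # grid approximation of X = [0, 1]

def f_star(x):
    # hypothetical lower-bounded lsc function with a jump at 1/2
    return 0.0 if x <= 0.5 else 1.0

def f_n(n, x):
    # n-Lipschitz lower envelope of f* (grid-approximated infimum)
    return min(f_star(xp) + n * abs(x - xp) for xp in GRID)

x = 0.8
vals = [f_n(n, x) for n in range(1, 60)]
assert all(a <= b for a, b in zip(vals, vals[1:]))  # f_{n+1} >= f_n
assert all(v <= f_star(x) for v in vals)            # f_n <= f*
assert abs(vals[-1] - f_star(x)) < 1e-6             # converging at x
```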
Theorem 1/Monotone Convergence Theorem For Inframeasures: Given X a Polish space, Ψ an inframeasure set over X, f∗ a lower-bounded lower-semicontinuous function X→(−∞,∞], and {fn}n∈N an ascending sequence of lower-bounded lower-semicontinuous functions X→(−∞,∞] which limit pointwise to f∗, then limn→∞EΨ[fn]=EΨ[f∗]
We’ll need to do a bit of lemma setup first. Said intermediate result is that, if some f∞ is lower-semicontinuous and lower-bounded, then the function from a-measures to (−∞,∞] given by (m,b)↦m(f∞)+b is lower-semicontinuous.
By Proposition 3, we can craft an ascending sequence fn of bounded continuous functions which limit pointwise to f∞.
Now to establish lower-semicontinuity of (m,b)↦m(f∞)+b. Let (mm,bm) limit to (m∞,b∞). Let n be arbitrary. Then
liminfm→∞(mm(f∞)+bm)≥limm→∞(mm(fn)+bm)=m∞(fn)+b∞
The first inequality is because f∞ is above the fn sequence that limits to it. We can then pass from the liminf to a true limit because the sequence converges: since fn is continuous and bounded, and (mm,bm) converges to (m∞,b∞), mm(fn)+bm must converge to m∞(fn)+b∞.
Anyways, now that we know that for arbitrary n,
liminfm→∞(mm(f∞)+bm)≥m∞(fn)+b∞
We can get the inequality that
liminfm→∞(mm(f∞)+bm)≥limn→∞(m∞(fn)+b∞)=m∞(f∞)+b∞
Where the equality comes from Beppo Levi’s monotone convergence theorem, since the fn sequence is an ascending sequence of lower-bounded functions. Accordingly, we now know that if f∞ is lower-bounded and lower-semicontinuous, then (m,b)↦m(f∞)+b is a lower-semicontinuous function. Let’s proceed.
Remember, the thing we’re trying to show is that
limn→∞EΨ[fn]=EΨ[f∗]
Where all the fn are lower-semicontinuous and lower-bounded, and limit to f∗ pointwise. One direction of this is pretty easy to show:

limn→∞EΨ[fn]=limn→∞inf(m,b)∈Ψ(m(fn)+b)≤limn→∞inf(m,b)∈Ψ(m(f∗)+b)=inf(m,b)∈Ψ(m(f∗)+b)

The first equality is just unpacking what expectation means, then we use that f∗≥fn regardless of n to show that all the points in Ψ are like “yup, the value increased” to get the inequality. Then we just observe that it doesn’t depend on n anymore and pack up the expectation, yielding
limn→∞EΨ[fn]≤EΨ[f∗]
So, that’s pretty easy to show. The reverse direction, that
EΨ[f∗]≤limn→∞EΨ[fn]
is much trickier to show.
We’ll start with splitting into cases. The first case is that Ψ is the empty set, in which case, everything is infinity, and the reverse inequality holds and we’re done.
The second case is that Ψ isn’t empty. In such a case, for each fn, we can pick (mn,bn) a-measures from the minimal points of Ψ s.t.
mn(fn)+bn≃inf(m,b)∈Ψm(fn)+b
coming as close to minimizing said functions within Ψ as we like!
Now, we can again split into two cases from here. In our first case, the bn sequence from the (mn,bn) sequence diverges. Then, in such a case, we have

limn→∞EΨ[fn]=limn→∞(mn(fn)+bn)≥limn→∞(min(0,λ⊙⋅infx∈Xf1(x))+bn)=∞

Why does this work? Well, the (mn,bn) approximately minimize the expectation value of fn. Then, the worst-case value of mn(fn) is either 0, or the amount of measure in mn times the lowest value of fn. Since it’s an ascending sequence of functions, we can lower-bound this by (amount of measure in mn) times (worst-case value of f1). And, since we’re picking minimal points, and there’s an upper bound on the amount of measure present in minimal points of an inframeasure set from the Lipschitz criterion (our finite λ⊙ value), the worst-case value we could possibly get for mn(fn) is either 0 or the maximum amount of measure possible times a lower bound on f1. Both of these quantities are finite, so we get a finite negative lower bound. But the bn are assumed to diverge, so the expectation values head off to infinity, and the reverse inequality holds trivially.
Ok then, we’re on to our last case. What if Ψ isn’t empty, and our sequence (mn,bn) of a-measures has a subsequence where the bn doesn’t diverge? Well, the only possible way a sequence of a-measures in a nonempty inframeasure set can fail to have a convergent subsequence is for the b values to diverge, from the compact-projection property for an inframeasure set.
So, in our last case, we can isolate a convergent subsequence of our (mn,bn) sequence of a-measures, which converges to the a-measure (m∞,b∞). Let’s use the index i to denote this subsequence. At this point, the argument proceeds as follows. Let j be an arbitrary natural number.
limn→∞EΨ[fn]=limi→∞EΨ[fi]
=limi→∞(mi(fi)+bi)≥liminfi→∞(mi(fj)+bi)≥m∞(fj)+b∞
In order, this is just “we went to a subsequence of an ascending sequence, so it’s got the same limit”, then swapping out the worst-case value for the actual minimizing point. Then we just use that since the sequence of functions is ascending, eventually mi(fi)+bi will blow past mi(fj)+bi (at timestep j) because j is a fixed constant. Then we just apply that fj is lower-bounded and lower-semicontinuous, so the function (m,b)↦m(fj)+b is lower-semicontinuous, getting our inequality we pass to (m∞,b∞).
Since this holds for arbitrary j, this then means we have

limn→∞EΨ[fn]≥limj→∞(m∞(fj)+b∞)=m∞(f∗)+b∞≥inf(m,b)∈Ψ(m(f∗)+b)=EΨ[f∗]

And we have the inequality going the other way, proving our result! This last stretch was done via Beppo Levi’s Monotone Convergence Theorem for the equality, then the inequality is because nonempty inframeasure sets Ψ are closed, so the limit point (m∞,b∞) also lies in Ψ, and then packing up definitions.
Since we’ve proven both directions of the inequality, we have
limn→∞EΨ[fn]=EΨ[f∗]
and we’re done!
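Here’s a toy illustration of the theorem’s statement on a finite X, where the topological subtleties vanish. The set Ψ and all the numbers are hypothetical; expectations of an inframeasure set are worst-case values, as in the proof above.

```python
# Toy finite-X illustration of Theorem 1 (hypothetical numbers throughout).
# X = {0, 1}; an a-measure is a pair (m, b) with m = (m0, m1) nonnegative
# weights and b >= 0, and E_Psi[f] = inf over (m, b) in Psi of m(f) + b.

PSI = [((1.0, 0.0), 0.0), ((0.5, 0.5), 0.25), ((0.0, 1.0), 0.5)]

def expectation(psi, f):
    # worst-case (infimum) expectation over the set of a-measures
    return min(m[0] * f[0] + m[1] * f[1] + b for m, b in psi)

def f_seq(n):
    # an ascending sequence of functions limiting pointwise to f_lim
    return (1.0 - 1.0 / n, 2.0 - 2.0 / n)

f_lim = (1.0, 2.0)

vals = [expectation(PSI, f_seq(n)) for n in range(1, 201)]
assert all(a <= b for a, b in zip(vals, vals[1:]))     # E[f_n] ascends
assert abs(vals[-1] - expectation(PSI, f_lim)) < 0.01  # and tends to E[f*]
```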
Proposition 4: All compact Polish spaces are compact second-countable LHC spaces.
Proof: Compactness is obvious. All Polish spaces are second-countable. So that just leaves the LHC property. Our compact hull operator K will be taken to just be the closure of the set. Since the space X is compact, all closed subsets of it are compact as well. The two properties
∀U∈B(X):U⊆K(U)
∀U1,U2∈B(X):U1⊆U2→K(U1)⊆K(U2)
Are trivially fulfilled when K is interpreted as set closure.
That leaves the LHC property. Since all Polish spaces are Hausdorff, the various definitions of local compactness coincide, and the space is compact, so all definitions of local compactness hold. So, given some O and x∈O, by local compactness for the space X, we can find a compact set C and open set O′ s.t. x∈O′⊆C⊆O. Since O′ is open, it can be written as a union of sets from the topology base. Now, pick a set from that union which contains x, call it U. We have x∈U⊆O′⊆C. In Hausdorff spaces, all compact sets are closed, so C is a closed superset of U, and so K(U) (the closure of U) is a subset of C. Since K(U)⊆C⊆O, and U⊆K(U) (since K is just closure), we have x∈U⊆K(U)⊆O, as desired. This works for any x and O, so the space fulfills the LHC property.
Proposition 5: All open subsets of ω-BC domains are second-countable LHC spaces.
Given D, an ω-BC domain, it has a countable basis. You can close the countable basis under finite suprema (when they exist), and the set will stay countable and stay a basis, but now it’s closed under finite suprema. Do that, to get a countable basis B which is closed under finite suprema.
Our attempted base for the open subset O⊆D will be those of the form x⇈ for x∈B∩O, ie, the set of all the points y where x≪y, and x is in the countable basis for the domain and also in O. Also the empty set. This is clearly countable, because B is.
To show it’s a topology base, we need to check closure under finite intersection, check that all these are open, and check that every open set in O can be made by unioning these things together.
First, closure under finite intersection. In order to do this, we’ll need to show a lemma that if x≫xi for finitely many xi, then x≫⊔i≤nxi. This occurs because, given any directed set with a supremum of x or higher, since x≫xi, you can find a yi in the directed set which is above xi, and then take an upper bound for the finitely many yi which exists within your directed set, and it’ll exceed ⊔i≤nxi. So, every directed set with a supremum of x or higher has an element which matches or exceeds the supremum of the xi, so the supremum of the xi approximates x.
Now that we have this result, we’ll show that ⋂i≤nxi⇈=(⊔i≤nxi)⇈
In the two directions, anything which is approximated by ⊔i≤nxi is also approximated by anything below it, ie, all the xi, so we have ⋂i≤nxi⇈⊇(⊔i≤nxi)⇈
And, for the reverse direction, we’ve already established that anything which all the xi approximate will also have the supremum of the xi approximate it. Since our countable basis B was made to be closed under finite suprema, the open set associated with the supremum of the xi is also in our basis, so our base is closed under finite intersection.
Second, checking that they’re open. Sets of the form x⇈ are always open in the Scott-topology, and every Scott-open set is closed upwards, so all the sets in this base are open in the original domain D, and so remain open when we restrict to the open subset of D and equip it with the subspace topology.
All that remains to show that this is a base is that it can make any open subset of O by union. By Proposition 2.3.6.2, a set U is open in the Scott-topology on D iff U=⋃x∈U∩Bx⇈.
Since all of our open subsets of O are open in D, they can be written as a union of sets from our countable collection by this result.
We have crafted a countable base for our open subset of an ω-BC domain D, so it’s indeed second-countable.
Now for the LHC property. The compact hull operator K maps the set x⇈ to (⊓x⇈)↑. This is compact because all Scott-opens are closed upwards, so an open cover of (⊓x⇈)↑ must have an open which includes ⊓x⇈ itself, and this single open covers the entire set. This set is a subset of x↑, because everything that x approximates lies above x, so x is a lower bound of x⇈, and thus must equal or lie below ⊓x⇈. And since x is in our open subset O, so is ⊓x⇈ (which lies above x), because Scott-open sets are closed up.
For our first property of a compact hull operator, we need to show that x⇈⊆(⊓x⇈)↑. This is easy: anything which lies in x⇈ must lie above the infimum of that set.
For our second property of a compact hull operator, we need to show that x⇈⊆y⇈ implies (⊓x⇈)↑⊆(⊓y⇈)↑. For this, assuming the starting assumption, by basic properties of inf, we have ⊓x⇈⊒⊓y⇈, so then anything above that first point must also be above the second point, so we have our result.
Now, we just need to check the LHC property. Let U be an open subset of O, and let x∈U. We can consider the directed set of approximants to x from the basis of our domain D, called Bx. Scott-opens have the feature that for any directed set with a supremum in the open set (and Bx is directed and has a supremum of x, which lies in U), there’s an element in the directed set which also lies in the open set. So, we can find a y which lies in the basis, and y≪x, and y∈U. Then just let your open set from the base be y⇈. This is an open from the base. It contains x. And, K(y⇈)⊆y↑⊆U, because y∈U and Scott-opens are closed upwards.
And thus, our result is shown.
Corollary 1: All ω-BC domains are compact second-countable LHC spaces.
Just use Proposition 5 when your open set is all of your domain to get second-countability and LHC-ness. For compactness, any open cover of the domain D must have an open set which includes the bottom point ⊥, and Scott-opens are closed upwards, so we can isolate a particular open set which covers the entire domain.
Theorem 2: If X is second-countable and locally compact, then [X→[0,1]] is an ω-BC domain, as is [X→[−∞,∞]].
To begin, [−∞,∞] is topologically identical to [0,1], so we can just prove it for the [0,1] case to get both results.
As a recap, the Scott-topology on [0,1] is the topology with the open sets being ∅,[0,1], and all sets of the form (q,1] for q∈[0,1).
The space [X→[0,1]] is the space of all continuous functions X→[0,1] (where [0,1] is equipped with the Scott-topology), and it’s equipped with a partial order via f⊑g↔∀x∈X:f(x)≤g(x)
We will actually show a stronger result, that this space is a continuous complete lattice with a countable basis, as this implies that the set is an ω-BC domain.
To show this, we’ll need to show that the space is closed under arbitrary suprema. This is sufficient to show that the space is a complete lattice because, to get the infimum of the empty set, we can get a top element by taking the supremum of everything, and to get the infimum of any nonempty set of elements, we can take the set of lower bounds (which is nonempty because a bottom element exists, the supremum of the empty set), and take the supremum of that. Then we just need to find a countable basis, and we’ll be done. Most of the difficulty is in finding the countable basis for the function space.
Showing there’s a bottom element (sup of the empty set) is easy, just consider the function which maps everything in X to 0.
Now, we’ll show that suprema exist for all nonempty sets of functions. Let F be our nonempty set of functions.
The natural candidate for the supremum of functions is x↦supf∈Ff(x); call this function ⊔F. It’s a least upper bound of all the functions f∈F; the only fiddly part we need to show is that this function is a continuous function X→[0,1], in order for it to appear in the space [X→[0,1]].
Fix an arbitrary open set in [0,1], we’ll show that the preimage is open in X. If the set is the empty set or [0,1] itself, then the preimage will be the empty set or all of X, so those two cases are taken care of. Otherwise, the open set is of the form (q,1] for q∈[0,1). Now, we have
(⊔F)−1((q,1])={x∈X|(⊔F)(x)>q}
={x∈X|supf∈Ff(x)>q}={x∈X|∃f∈F:f(x)>q}
=⋃f∈F{x∈X|f(x)>q}=⋃f∈Ff−1((q,1])
And now we see that this preimage can be written as a union of preimages of open sets, which are open since all the f are continuous, so the set (⊔F)−1((q,1]) is open. Since q was arbitrary in [0,1), we have that the preimage of all open sets in the unit interval is open in X, so ⊔F is indeed continuous and exists in [X→[0,1]], to serve as the least upper bound there.
All that remains is coming up with a countable basis for [X→[0,1]], which is the hard part. Let B(X) be a countable base of X (this can be done since X is second-countable). Given an open set U∈B(X) and rational number q∈Q∩[0,1], we define the atomic step function (U↘q) as:
x∈U→(U↘q)(x)=q
x∉U→(U↘q)(x)=0
First things first is showing these are continuous. Clearly, any preimage will either be the empty set, all of X, or the open set U (because there’s no open set in [0,1] with the Scott-topology that contains 0 without containing q), so they’re indeed continuous.
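Here’s a small computational sketch of these atomic step functions and their finite suprema, under a hypothetical encoding of X=[0,1] with basic opens given by open rational intervals:

```python
# Concrete sketch on X = [0,1]: basic opens encoded as open rational
# intervals (lo, hi) (a hypothetical, simplified encoding of B(X)).

def step(U, q):
    # the atomic step function (U -> q): value q on U, 0 elsewhere
    lo, hi = U
    return lambda x: q if lo < x < hi else 0.0

def sup_fns(fs):
    # pointwise supremum of finitely many functions
    return lambda x: max(f(x) for f in fs)

f = lambda x: x  # a continuous target f : [0,1] -> [0,1]

# finitely many atomic step functions lying below f, and their supremum
pieces = [step((0.25, 1.0), 0.25), step((0.5, 1.0), 0.5), step((0.75, 1.0), 0.75)]
g = sup_fns(pieces)

xs = [i / 100 for i in range(100)]  # sample points in [0, 0.99]
assert all(g(x) <= f(x) for x in xs)         # the supremum stays below f
assert max(f(x) - g(x) for x in xs) <= 0.25  # and approximates it from below
```

Throwing in more step functions (finer intervals, more rationals q) pushes the supremum up toward f, which is the shape of the argument that follows.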
Our attempted countable basis B will be all of these atomic step functions, and all finite suprema of such. Suprema of arbitrary sets of functions always exist as we’ve shown, so this is well-defined. It’s countable because there’s countably many choices of open set, countably many choices of rational number, and we’re considering only the finite subsets of this countable set.
The hard part is showing that for any f, it can be built as the supremum of stuff from this basis which approximates f. First up, as a preliminary, is showing that Bf is a directed set for some arbitrary function f. Bf is the set of all elements of the basis which approximate f. What we can do is pick two arbitrary functions from the basis, g1 and g2, s.t. g1≪f and g2≪f. From back in Proposition 5, we know that g1⊔g2≪f. Since our basis is closed under finite suprema, we now know that the intersection of the basis and the approximants to f is closed under finite suprema, so it’s directed.
So, now that we know that Bf is a directed set for arbitrary f, we must ask, does f=⨆↑Bf? For one direction, we have that every function in Bf approximates f, so it must be less than f itself, so we have ⨆↑Bf⊑f trivially. So let’s get the other direction going, by showing ∀x∈X:⨆↑Bf(x)≥f(x)
First, let’s solve the case where f(x)=0. We’d have ⨆↑Bf(x)≥0=f(x) trivially. So assume f(x)>0, and restrict attention to the atomic step functions:

⨆↑Bf(x)≥⨆(U↘q)≪f:U∈B(X)∧q∈Q∩[0,1](U↘q)(x)

The inequality is because now we’re restricting to the atomic step functions instead of all the finite suprema of such. And then we have

⨆(U↘q)≪f:U∈B(X)∧q∈Q∩[0,1](U↘q)(x)=sup{q∈Q∩[0,1]|∃U∈B(X):x∈U∧(U↘q)≪f}
Why is this? Well, (U↘q)(x) is 0 if x∉U, and q if x∈U. So, if, for a rational number q>0, we can find an open set U s.t. x∈U, and (U↘q)≪f, then the supremum of atomic step functions must map x to q or higher. And if there’s an atomic step function which approximates f and maps x to some q>0, we can read out the open set from it to certify that q lies in the latter set of rational numbers we’re taking the supremum over.
If we could hit that proof target, then we could go
⨆↑Bf(x)≥sup{q∈Q∩[0,1]|q<f(x)}=f(x)
With the inequality coming from chaining the equalities and inequalities so far, and the proof target, and the equality coming from f(x)>0 (in the case we’re working in) and real numbers being the supremum of rational numbers below them. This works for arbitrary x with f(x)>0, and we’ve already addressed the 0 case, so we’d establish f⊑⨆↑Bf
And, since we’ve already shown the other direction of the inequality, we’d have ⨆↑Bf=ffor arbitrary f, showing that [X→[0,1]] has a countable basis, so it’s an ω-BC domain since that’s the last piece we need, and we’d be done.
So, all we need to pull off to finish up our proof is to show that, whenever q∈Q∩[0,1] and q<f(x), there exists a U∈B(X) s.t. x∈U and (U↘q)≪f.

So let this be our new proof target. We’re working in the case where f(x)>0, so we can safely assume the starting precondition that q<f(x). Let’s construct our open set U, by finding a compact neighborhood to work within.
Because q<f(x), we have that x∈f−1((q,1]), which is an open set. Because X is locally compact, we can always find a compact set K and open set U from the base s.t.
x∈U⊆K⊆f−1((q,1])
Let this be our U. x∈U, trivially. So all that’s left now is to prove that (U↘q)≪f and we’ll hit our implication proof target. So this is our new proof target.
To establish this, we’ll take an arbitrary directed set G⊆[X→[0,1]] s.t. f⊑⨆↑G, and prove that we can always find an element from it that exceeds (U↘q). So fix such a G.
Remember that we’ve already got fixed subsets of X, denoted by U and K, with U being open, and K being compact, and x∈U⊆K⊆f−1((q,1])
Let’s begin. Given an arbitrary y∈K, we have y∈K⊆f−1((q,1]) so f(y)>q. Then, we have
q<f(y)≤⨆↑G(y)=⨆↑g∈Gg(y)
So, each y∈K can be associated with some function gy∈G where gy(y)>q, ie y∈gy−1((q,1]); just pick an appropriate one out of the supremum. So, we have

∀y∈K∃gy∈G:y∈gy−1((q,1])

And then, this means that {gy−1((q,1])}y∈K is an open cover of K, since all these preimages of open sets are open, and every point y is covered by the appropriate preimage for gy. Since K is compact, there’s a finite open subcover of K. Denote the corresponding functions as gi. So, {gi−1((q,1])}i≤n is a finite open cover of K.
Since we have finitely many functions gi from G picked out, there’s an upper bound to all of them which still lies in G, call this function g. Since g⊒gi for all of the finitely many functions we have, we know that
gi−1((q,1])⊆g−1((q,1])
for all the finitely many gi (g is a higher function so a preimage of high points through g must be a superset of the preimage through gi). And so, we have
K⊆⋃i≤ngi−1((q,1])⊆g−1((q,1])
Because these finitely many preimages are all subsets of the preimage of g, and they’re an open cover of K.
Since g∈G, we will now endeavor to show that (U↘q)⊑g and we’ll be done. For x∉U, we have
g(x)≥0=(U↘q)(x)
So, that’s taken care of. Now, what if x∈U? Well,
x∈U⊆K⊆g−1((q,1])
The first two subsets are because of how we constructed U in the first place, and the third was derived at the end of our open cover arguments. This means that g(x)>q. And then we can get
(U↘q)(x)=q<g(x)
So, in both cases, g(x)≥(U↘q)(x). And so we have (U↘q)⊑g and g∈G. G was an arbitrary directed set with supremum of f or above, so we have (U↘q)≪f. And that’s the last part that we need! We’re done now.
Corollary 2: The space [[X→[0,1]]→[0,1]] is an ω-BC domain with top element when X is second-countable and locally compact.
This is easy, we just apply Theorem 2 to get that [X→[0,1]] is an ω-BC domain, then apply Corollary 1 to get that it’s second-countable and locally compact, then apply Theorem 2 again to get that it’s an ω-BC domain. The top element is the constant function which maps everything to 1.
Lemma 1: If X is compact, then any continuous function f:X→(−∞,∞], where (−∞,∞] is equipped with the Scott-topology, has infx∈Xf(x) being finite.
Proof: The family of sets f−1((q,∞]) for q∈R is an open cover of X. This is because, since f is continuous, and (q,∞] is Scott-open, the preimage is open. It covers X because all points in X are mapped to ∞ or a real number. Since X is compact, there’s a finite open subcover, yielding finitely many real numbers q s.t. sets of the form f−1((q,∞]) cover X. Take the least of these, to get a finite lower-bound on the value of f over all of X.
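A quick numeric sketch of the lemma’s content, with a hypothetical f (the open-cover argument itself isn’t simulated, just the conclusion that the infimum is finite):

```python
# Grid sketch of Lemma 1 on compact X = [0,1], with a hypothetical
# Scott-continuous (i.e. lower-semicontinuous) f taking the value +inf
# on part of the space: the infimum is still finite.

INF = float("inf")

def f(x):
    # lower-semicontinuous: +inf to the left of 1/2, finite to the right
    return INF if x < 0.5 else 3.0 * (x - 0.5) - 1.0

xs = [i / 1000 for i in range(1001)]
finite_inf = min(f(x) for x in xs)
assert finite_inf > -INF   # a finite lower bound exists...
assert finite_inf == -1.0  # ...attained at x = 1/2 in this example
```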
Lemma 2: If X is compact, then given any continuous function f:X→(−∞,∞], if q<infx∈Xf(x), then cq≪f in [X→(−∞,∞]], where cq is the constant function with value q. The same result holds for X→[0,1].
Fix a directed set of functions G s.t. f⊑⨆↑G. For any particular y∈X, we have
q<infx∈Xf(x)≤f(y)≤⨆↑g∈Gg(y)
Thus, for any particular y, there’s a function gy where gy(y)>q. So, the set gy−1((q,∞]) is open (a preimage of an open) and contains y. This can be done for all points in X, to get an open cover of X. X is compact, so there’s a finite subcover, and we can find finitely many gi s.t. X⊆⋃i≤ngi−1((q,∞]). This is saying that for any point in X, there’s a function gi that maps it above q. Then, we can take an upper bound of all these gi to get a function g∈G that maps all points in X to a value above q. Then, we have, for arbitrary x∈X, that cq(x)=q<g(x), so cq⊑g. Since our directed set was arbitrary and we’re always able to find a function in it that beats cq, this means that cq≪f.
Proposition 6:The space [[X→(−∞,∞]]→[−∞,∞]] is an ω-BC domain with top element when X is compact, second-countable, and locally compact.
So, here’s how the proof will work. As it turns out, if we can just show that [X→(−∞,∞]] is open in the set [X→[−∞,∞]], then we’ll be done. Here’s how we can finish up the entire theorem if we just knew that result.
Since X is second-countable and locally compact, we know from Theorem 2 that [X→[−∞,∞]] is an ω-BC domain. Then, using our assumption that [X→(−∞,∞]] is an open subset, we’d get from Proposition 5 that [X→(−∞,∞]] is a second-countable LHC space, so in particular, it’s second-countable and locally compact. Then, we just appeal to Theorem 2 one more time to get that [[X→(−∞,∞]]→[−∞,∞]] is an ω-BC domain. And the top element is the constant function which maps everything to ∞.
Let’s prove that [X→(−∞,∞]] is an open subset of [X→[−∞,∞]]. The criterion for a set to be Scott-open is that it be closed upwards, and that if a directed subset has a supremum in the set of interest, then the directed subset must already have an element in the set of interest. Upwards closure is easy. If f⊑f′, and f is never −∞, then the same must apply to f′. However, showing the directedness property takes more work, and will rely on X being compact.
Take an arbitrary directed set of functions G⊆[X→[−∞,∞]] s.t ⨆↑G∈[X→(−∞,∞]]. Then, by Lemma 1 (since X is compact), infx∈X(⨆↑G)(x) is finite. And so, by Lemma 2,
cinfx∈X(⨆↑G)(x)−1≪⨆↑G
And then, since G is a directed set with a supremum of ⨆↑G, there must be an element of G (call it g) which exceeds cinfx∈X(⨆↑G)(x)−1. This constant function lies in [X→(−∞,∞]], and this set is closed upwards, so g also lies in the same set.
And therefore, [X→(−∞,∞]] is a Scott-open set, because it’s closed upwards and for an arbitrary directed set G with a supremum that lies in this set, there’s an element g∈G which also lies in the same set.
The rest of the theorem follows, as per earlier discussion.
Theorem 3/Retraction Theorem: If D is an ω-BC domain with a top element, and ∅≠P⊆D, then P, which is the set P equipped with the order inherited from D, is an ω-BC domain if the following two conditions hold. 1: P is closed under directed suprema. 2: P is closed under arbitrary nonempty infima.
We’ll split into two possible cases. The first case is where the top element from D also lies in P. The second case is where the ⊤ from D doesn’t lie in P.
The first case will be addressed by establishing a link between D and P which lets us transfer the nice properties of D over to P, and is where the bulk of the work is.
The second case will be done by adding our top element into the set P to make a domain P⊤. This reduces us to the first case, so we automatically know that P⊤ has all the nice properties we need. Then we just need to show that we can rip the top point off of P⊤, to get back to P, and that doing this keeps P as an ω-BC domain.
So, let’s embark on the first case, where we can assume that ⊤D∈P. We’ll define some functions linking P and D. x will be used for points in D, and y will be used for points in P. ⊑P and ⊑D will be used to denote the orders on P and D, when we want to be picky.
i is a partial function D→P which is only defined when x∈P. It maps a point in D to its corresponding point in P. Clearly it’s monotone, because the ordering on P is just the restriction of the ordering on D.
We also have a total function s:P→D, which is just the usual subset inclusion of P into D. So, of course, we have s(i(x))=x when x∈P and i(s(y))=y.
Finally, we have r:D→P, defined as
r(x):=⊓P{i(x′)|x′∈P∧x⊑Dx′}
This is well-defined because ⊤∈P and is above everything, so you’re always taking the inf of a nonempty set (it always contains i(⊤)). Further, it lies in P because
i(⊓D{x′|x′∈P∧x⊑Dx′})=⊓P{i(x′)|x′∈P∧x⊑Dx′}
This is because the infimum of stuff in P is also in P by assumption, so we can inject the infimum as computed in D, and that’s the same thing as injecting everything into P and computing the infimum there.
Our first task is to show that r∘s=idP, and s∘r⊒idD. We have
(r∘s)(y)=r(s(y))=⊓P{i(x′)|x′∈P∧s(y)⊑Dx′}
=i(⊓D{x′∈P|s(y)⊑Dx′})=i(s(y))=y
The first two equalities are by the definitions. Then, it doesn’t matter whether we compute the inf in P or D, so we switch to computing the inf in D and injecting that, which can be done because inf of stuff in P is also in P. Then, we observe that s(y)∈P and s(y)⊑Ds(y), so the inf as computed in D is just s(y) exactly, and then it cancels out to yield y again.
Now for the other direction, that s∘r⊒idD. We have
(s∘r)(x)=s(r(x))=s(⊓P{i(x′)|x′∈P∧x⊑Dx′})
=s(i(⊓D{x′|x′∈P∧x⊑Dx′}))=⊓D{x′|x′∈P∧x⊑Dx′}⊒x
The first two equalities are just definitions. Then we switch the i out since P is closed under nonempty infima, the s and i cancel, and then we observe that we’re taking the inf of a bunch of stuff above x, so x is a lower bound.
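To make this section/retraction pair concrete, here’s a tiny sketch in Python (my own toy instance, not from the post): D is the powerset lattice of {0,1,2} standing in for the domain, P is a subset containing ⊤ and closed under nonempty infima (here, intersections), and r takes the inf of everything in P above a point.

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for k in range(len(s) + 1) for c in combinations(sorted(s), k)]

D = subsets({0, 1, 2})
top = frozenset({0, 1, 2})
# P: contains top and is closed under nonempty infima (here, intersections).
P = [frozenset(), frozenset({0}), frozenset({0, 1}), top]

def r(x):
    # r(x) := the infimum (intersection) of every point of P lying above x.
    uppers = [p for p in P if x <= p]  # x ⊑ p, i.e. x is a subset of p
    out = uppers[0]
    for p in uppers[1:]:
        out = out & p
    return out

# r ∘ s = id on P (s is just inclusion, so s(y) = y):
assert all(r(y) == y for y in P)
# s ∘ r ⊒ id on D:
assert all(x <= r(x) for x in D)
```

Running it confirms r∘s=idP and s∘r⊒idD on this toy lattice.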
Now that we’ve established that, let’s show that both s and r are continuous in the Scott-topology. Functions are continuous iff they’re monotone and preserve suprema of directed sets. For s, monotonicity is trivial, because y′⊒Py iff s(y′)⊒Ds(y), due to s just being subset inclusion and P having the restriction of the order on D.
For preserving suprema of directed sets, working in P, we have
i(⨆↑y∈A,Ds(y))=⨆↑y∈A,Pi(s(y))=⨆↑PA=i(s(⨆↑PA))
This is because, since P is closed under directed suprema, it doesn’t matter whether we take the directed sup in D or in P, so we can move the i in and out. Then we just use that i and s compose to the identity. Since i preserves order, from the above work, when we strip off the i and work in D,
⨆↑y∈A,Ds(y)=s(⨆↑PA)
And bam, continuity of s is shown.
Now for establishing continuity of r:D→P. For monotonicity, if x′⊒Dx, then
r(x)=⊓P{i(x′′)|x′′∈P∧x⊑Dx′′}⊑P⊓P{i(x′′)|x′′∈P∧x′⊑Dx′′}=r(x′)
This occurs because if x′⊒Dx, then x has more points in P which lie above it, so the infimum of this larger set must be as low or lower than the infimum of stuff above x′.
For preservation of directed supremum, letting A be a subset of D, since
r(⨆↑DA)⊒P⨆↑x∈A,Pr(x)
automatically by monotonicity, we just have to show the reverse direction. This can be done by:
⨆↑y∈A,Pr(x)... no, rather:
⨆↑x∈A,Pr(x)=r(s(⨆↑x∈A,Pr(x)))=r(⨆↑x∈A,Ds(r(x)))⊒r(⨆↑x∈A,Dx)=r(⨆↑DA)
The first equality was because r∘s=idP. The second one is because s is continuous, so we can move the directed sup in and out of it. Then, we use that s∘r⊒idD and monotonicity of r for our inequality, and then just rephrase the directed supremum.
Now that we know that both s and r are monotone, and r∘s=idP, by definition 3.1.1, r and s make a monotone section/retraction pair. By Proposition 3.1.2.2, since D is a pointed DCPO, P is too. Also, by Lemma 3.1.3, since D and P are both pointed DCPOs, and s and r are continuous (as we’ve shown), r(B), where B is a basis for D, is a basis for P. So P is a pointed DCPO with countable basis, since D has a countable basis. All that remains is showing that P has suprema of all bounded pairs of points, to show that it’s a BC-domain. Since P is closed under arbitrary nonempty infima, you can just take the inf of all the upper bounds on your points y,y′ in order to make a supremum, and bam, P is a BC-domain.
This addresses the case where ⊤D∈P. But what if it isn’t? Well, taking your original set P and adding ⊤ to it, it’s still closed under nonempty infima (because adding ⊤ to the set you’re taking a nonempty inf of doesn’t change the inf), and it’s still closed under directed suprema, because any directed set containing ⊤ must have ⊤ as its directed supremum. So, we can conclude that P⊤ is an ω-BC domain.
All that remains is showing that P, the thing you get when you rip the top point out, is also an ω-BC domain. Your set P is still pointed, since only ⊤ was ripped out, not ⊥. It’s still closed under directed suprema because directed sets in P are directed sets in D which are subsets of P. Computing the directed supremum in D, it can’t be ⊤ because ⊤∉P by assumption, and P is closed under directed suprema. Then we can just port that directed supremum back to P, secure in the knowledge that the directed supremum can’t be that top point we ripped out. So P is at least a pointed DCPO.
Bounded suprema still exist in P, because you can take two points y,y′ with an upper bound y′′∈P, and the supremum of y,y′ in P⊤ was the inf of every upper bound. When you rip the top point out, it doesn’t make the inf empty since y′′ is still present, and it doesn’t affect the value of the inf since ⊤ is above everything and can’t change the inf when you get rid of it. So the supremum as computed in P⊤ is still present in P.
For a countable basis, just use the original countable basis from P⊤ and rip the top point out from it. The definition of a basis was that y=⨆↑By:=⨆↑(y⇊∩B), and all these directed sets remain unchanged because if y isn’t ⊤, ⊤ can’t approximate y because that would require that ⊤⊑y, which is impossible, so this resulting set still computes all the points correctly via directed supremum.
Therefore, P is an ω-BC domain.
Lemma 3: Given a continuous DCPO D and BC-domain E, the infimum of a batch of functions as computed in [D→E] can be written as: (⊓F)(x)=⨆↑x′≪x⊓f∈Ff(x′)
So, in order to do this, we’ll need to show three things. First, that it’s continuous. Second, that it is indeed a lower bound. Third, that it’s the greatest lower bound.
To show that it’s continuous, we show monotonicity and preservation of directed suprema. For monotonicity, let x′⊒x. Then
(⊓F)(x)=⨆↑x′′≪x⊓f∈Ff(x′′)⊑⨆↑x′′≪x′⊓f∈Ff(x′′)=(⊓F)(x′)
The equalities are just definitions, the inequality is because any approximant to x is also an approximant to x′ since x′ is above x. Now for preservation of directed suprema. Let A be a directed subset of D.
⨆↑x∈A((⊓F)(x))=⨆↑x∈A(⨆↑x′≪x(⊓f∈Ff(x′)))
And then we have
=⨆↑x′≪⨆↑A(⊓f∈Ff(x′))
Because, for any x∈A and x′≪x, we also have x′≪⨆↑A, since it’s higher. And, if x′≪⨆↑A, then there’s some x∈A s.t. x′≪x. So everything in one supremum is present in the other and vice-versa. Then we can just go
⨆↑x′≪⨆↑A(⊓f∈Ff(x′))=(⊓F)(⨆↑A)
and we’re done! It’s a continuous function.
To show that it’s a lower bound, fix an arbitrary f′∈F. We have
f′(x)=f′(⨆↑x′≪xx′)=⨆↑x′≪xf′(x′)⊒⨆↑x′≪x⊓f∈Ff(x′)=(⊓F)(x)
This is by continuity of D, then f′∈[D→E] so it’s continuous, so we can move the directed supremum in and out of f′, then the value inside the directed supremum staying the same or decreasing since we’re taking the inf of all f(x′) over all f∈F.
To show that it’s the greatest lower bound, let f∗ be another lower bound to the collection of functions F. We have
(⊓F)(x)=⨆↑x′≪x⊓f∈Ff(x′)⊒⨆↑x′≪xf∗(x′)=f∗(⨆↑x′≪xx′)=f∗(x)
Because f∗⊑F, we get that inequality, swapping out for f∗ makes the value stay the same or go down. Then, since f∗ is continuous, we can move the directed suprema inside of f∗, and the directed supremum of approximants to x is x itself. So it’s the greatest lower bound.
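In a finite poset every element is way-below itself, so Lemma 3’s formula collapses to the plain pointwise infimum. A quick sanity check of that degenerate case (my own toy example, monotone maps on the chain 0<1<2, not from the post):

```python
from itertools import product

CHAIN = range(3)  # the chain 0 < 1 < 2
monotone = [f for f in product(CHAIN, repeat=3) if f[0] <= f[1] <= f[2]]

F = [(0, 1, 2), (1, 1, 1), (0, 2, 2)]                 # a batch of monotone maps
inf_F = tuple(min(f[x] for f in F) for x in CHAIN)    # pointwise infimum

assert inf_F in monotone                              # still a monotone map
assert all(inf_F[x] <= f[x] for f in F for x in CHAIN)  # a lower bound
for g in monotone:                                    # and the greatest one
    if all(g[x] <= f[x] for f in F for x in CHAIN):
        assert all(g[x] <= inf_F[x] for x in CHAIN)
```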
Lemma 4:For any function ψ:[X→(−∞,∞]]→[−∞,∞] where ψ is 1-Lipschitz and ψ(c0)≥0, and X is compact, ψ(f)≥inf(0,infx∈Xf(x)).
By Lemma 1, infx∈Xf(x) is a finite constant. Now, let’s begin.
−ψ(cinf(0,infx∈Xf(x)))≤ψ(c0)−ψ(cinf(0,infx∈Xf(x)))=|ψ(c0)−ψ(cinf(0,infx∈Xf(x)))|≤d(c0,cinf(0,infx∈Xf(x)))=|inf(0,infx∈Xf(x))−0|=−inf(0,infx∈Xf(x))
Here’s where the inequalities came from. The first inequality is because, by assumption, ψ(c0)≥0. The equality is because, since 0≥inf(0,infx∈Xf(x)), the same inequality applies to constant functions, and by monotonicity, ψ(c0) matches or exceeds what you get when you feed the other constant function in, so it’s a nonnegative quantity, equal to the distance between the two values you get. Then, the next inequality is 1-Lipschitzness of ψ. The distance between constant functions is the distance between their values. Then, the absolute value is clear. And because the quantity in the absolute value is 0 or negative, it equals negative-itself.
Now, we can multiply by negative 1 to flip this around and get
ψ(cinf(0,infx∈Xf(x)))≥inf(0,infx∈Xf(x))
And then, since we have f⊒cinf(0,infx∈Xf(x)) since the constant function undershoots it for all inputs, we can apply monotonicity to get
ψ(f)≥inf(0,infx∈Xf(x))
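Here’s a small numeric sanity check of Lemma 4 (my own toy instance, not from the post): on a finite X, a functional built as a min over finitely many a-measures (m,b) with total mass at most 1 and b≥0 is monotone and 1-Lipschitz in the sup metric with ψ(c0)≥0, and the bound ψ(f)≥inf(0,infx∈Xf(x)) then holds on random test functions.

```python
import random

random.seed(0)
N = 5  # X = {0,...,4}
a_measures = []
for _ in range(4):
    m = [random.random() for _ in range(N)]
    scale = random.random() / sum(m)   # rescale so total mass is at most 1
    m = [scale * w for w in m]
    b = random.random()                # b >= 0 guarantees psi(c_0) >= 0
    a_measures.append((m, b))

def psi(f):
    # min over the a-measures of m(f) + b; 1-Lipschitz since each mass <= 1
    return min(sum(w * f(x) for x, w in enumerate(m)) + b for m, b in a_measures)

assert psi(lambda x: 0.0) >= 0.0       # psi(c_0) >= 0
for _ in range(100):
    vals = [random.uniform(-10.0, 10.0) for _ in range(N)]
    assert psi(lambda x: vals[x]) >= min(0.0, min(vals)) - 1e-9
```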
Lemma 5:If X is compact, and f,f′,f′′∈[X→(−∞,∞]], and f′′≪f, and d(f,f′)<∞, then f′′+c−d(f,f′)≪f′.
Since X is compact, no functions in here can have a value of −∞ or value unbounded below by Lemma 1.
Fix a directed set G of functions X→(−∞,∞] with f′⊑⨆↑G. Our goal is to get an element of G which exceeds f′′+c−d(f,f′). For each g∈G, consider the function g+cd(f,f′), to get the directed set G+cd(f,f′). Our first thing to do is to show that this is a directed set. Which is easy because, given any two functions in G+cd(f,f′), we can just subtract the constant from them to get two functions in G, take an upper bound in G, and add the constant back to it to get an upper bound in G+cd(f,f′). Also, for any x′∈X, we have
(⨆↑(G+cd(f,f′)))(x′)=⨆↑g∈G(g+cd(f,f′))(x′)=(⨆↑g∈Gg(x′))+d(f,f′)≥f′(x′)+d(f,f′)≥f′(x′)+|f(x′)−f′(x′)|≥f(x′)
The first equality was just using how G+cd(f,f′) was constructed and how supremum of functions is defined. Then we use that this is just adding a constant to g, do a bit of regrouping, get our inequality because ⨆↑G⊒f′ by assumption, use the definition of function distance, substitute in a particular value, and the last inequality is obvious.
So, since this holds for all x′∈X, we have f⊑⨆↑(G+cd(f,f′)). Since this is a directed set with a supremum of f or more, and f′′≪f, we can isolate a function g∈G where f′′⊑g+cd(f,f′). So then f′′+c−d(f,f′)⊑g, and we’ve got an element of our directed set which exceeds f′′+c−d(f,f′). Since we picked an arbitrary directed set with a supremum of f′ or higher, this means that f′′+c−d(f,f′)≪f′.
Lemma 6: If X is compact, and f,f′,f′′∈[X→[0,1]], and f′′≪f, then c0⊔(f′′+c−d(f,f′))≪f′.
Fix a directed set G of functions X→[0,1] with f′⊑⨆↑G. Our task is to find a g that exceeds c0⊔(f′′+c−d(f,f′)). For each g∈G, consider the function c1⊓(g+cd(f,f′)), to get the directed set c1⊓(G+cd(f,f′)). Our first thing to do is to show that this is a directed set. Which is easy because, given any two functions in there, we can take their original g’s, and take an upper bound of the two and translate forward again to get an upper bound in c1⊓(G+cd(f,f′)). Also, for any x′∈X, we have
(⨆↑(c1⊓(G+cd(f,f′))))(x′)=⨆↑g∈Ginf(1,g(x′)+d(f,f′))=inf(1,(⨆↑g∈Gg(x′))+d(f,f′))≥inf(1,f′(x′)+d(f,f′))≥inf(1,f′(x′)+|f(x′)−f′(x′)|)≥inf(1,f(x′))=f(x′)
So, since this holds for all x′∈X, we have f⊑⨆↑(c1⊓(G+cd(f,f′))). Since this is a directed set with a supremum of f or more, and f′′≪f, we can isolate a function g∈G where f′′⊑c1⊓(g+cd(f,f′)). Then, we have
f′′+c−d(f,f′)⊑(c1⊓(g+cd(f,f′)))+c−d(f,f′)⊑g
and also c0⊑g, since g maps into [0,1].
So, we have a g⊒c0⊔(f′′+c−d(f,f′)), and our directed set G was arbitrary, so we’ve showed approximation, and have our result that, if f′′≪f, then c0⊔(f′′+c−d(f,f′))≪f′.
Infra-Domain proofs 1
Now, since xm converges to x, there must be some m∗ where, forever afterward, d(xm,x)<1n−d(x,x′) (the latter term is always positive from how we selected x′). For any m in that tail, we can rearrange this and get that d(xm,x)+d(x,x′)<1n, so by the triangle inequality, we have that d(xm,x′)<1n. Ie, x′ is in the open ball around xm for the tail of the convergent sequence.
Now, since x′ is in all those open balls, the definition of gn means that for our tail of sufficiently late xm, gn(xm)≤f∗(x′), and we already know that f∗(x′)≤gn(x)+ϵ. Putting these together, for all sufficiently late xm, gn(xm)≤gn(x)+ϵ. ϵ was arbitrary, so we have
limsupm→∞gn(xm)≤gn(x)
(the limit might not exist, but limsup always does). And now we know that our gn auxiliary functions are upper-semicontinuous.
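For a concrete picture of this (my own toy instance, not from the post): take the lower-semicontinuous step function f∗ that’s 0 for x≤0 and 1 for x>0. A direct computation gives gn(x)=0 for x<1/n and gn(x)=1 for x≥1/n, which only jumps up at the point itself, so the limsup along any sequence stays at or below the value at the limit point.

```python
def g_n(x, n):
    # closed form of g_n for this particular step f*: the open ball
    # (x - 1/n, x + 1/n) reaches a point where f* = 0 exactly when x < 1/n
    return 0.0 if x < 1.0 / n else 1.0

n = 4
x = 1.0 / n                                      # the jump point of g_n
left = [x - 1.0 / (m + 2) for m in range(50)]    # a sequence approaching from below
right = [x + 1.0 / (m + 2) for m in range(50)]   # and one from above
# limsup of g_n along either sequence stays <= g_n(x): upper-semicontinuity
assert max(g_n(xm, n) for xm in left[-10:]) <= g_n(x, n)
assert max(g_n(xm, n) for xm in right[-10:]) <= g_n(x, n)
```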
Next up: We must show that, if fn is some continuous function, sup(fn,gn) is upper-semicontinuous. For this, we do:
limsupm→∞max(fn(xm),gn(xm))=max(limsupm→∞fn(xm),limsupm→∞gn(xm))
=max(fn(x),limsupm→∞gn(xm))≤max(fn(x),gn(x))
For this, we distributed the limsup inside the sup, used that fn is continuous, and then used that gn is upper-semicontinuous for the inequality.
Now, here’s what we’re going to do. We’re going to inductively define the continuous fn as follows. Abbreviate inf(0,infx′∈Xf∗(x′)) (which is finite) as f∗min, and then consider the set-valued functions, for n>0, Φn:X→P(R) inductively defined as follows:
Φ1(x):=[f∗min,min(f∗(x),f∗min+1)]
Φn+1(x):=[min(max(fn(x),gn(x)),f∗min+n),min(f∗(x),f∗min+n+1)]
where fn is a continuous selection from Φn.
The big unknown is whether we can even find a continuous selection from all these set-valued functions to have the induction work. However, assuming that part works out, we can easily clean up the rest of the proof. So let’s begin.
We’ll be applying the Michael Selection Theorem. R is a Banach space, and X being metrizable implies that X is paracompact, so those conditions are taken care of. We need to show that, for all n and x, Φn(x) is nonempty, closed, and convex, and Φn is lower-hemicontinuous. This will be done by induction: we’ll show it for n=1, establish some inequalities, and use induction to keep going.
For n=1, remember,
Φ1(x)=[f∗min,min(f∗(x),f∗min+1)]
This is obviously nonempty, closed, and convex for all x. That just leaves establishing lower-hemicontinuity. Lower-hemicontinuity is, if y∈Φ1(x), and there’s a sequence xm limiting to x, there’s a (sub)sequence ym∈Φ1(xm) that limits to y. We can let our ym:=min(f∗(xm),f∗min+1,y). Since y∈Φ1(x), y≥f∗min, so
ym=min(f∗(xm),f∗min+1,y)∈[f∗min,min(f∗(xm),f∗min+1)]=Φ1(xm)
So it’s an appropriate sequence to pick. Now, we can go:
limsupm→∞min(f∗(xm),f∗min+1,y)≤limsupm→∞y=y
and
liminfm→∞min(f∗(xm),f∗min+1,y)=min(liminfm→∞f∗(xm),f∗min+1,y)
=min(liminfm→∞f∗(xm),y)≥min(f∗(x),y)=y
The first equality was distributing the liminf in and neglecting it for constants, the second is because y≤f∗min+1 because y∈Φ1(x), and the inequality is from lower-semicontinuity of f∗, and the equality is because y∈Φ1(x) so y≤f∗(x).
Since the limsup of this sequence is below y and the liminf is above y, we have:
limm→∞ym=limm→∞min(f∗(xm),f∗min+1,y)=y
And bam, lower-hemicontinuity is proved for the base case, we can get off the ground with the Michael selection theorem picking our f1.
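The base-case selection can be checked numerically on a toy instance (my own example, not from the post): with the lower-semicontinuous step f∗ (0 for x≤0, 1 for x>0) we have f∗min=0, Φ1(x)=[0,min(f∗(x),1)], and ym:=min(f∗(xm),f∗min+1,y) should land in Φ1(xm) and converge to y.

```python
def f_star(x):
    return 0.0 if x <= 0 else 1.0   # lower-semicontinuous step function

f_min = 0.0                          # inf(0, inf_x f*(x)) for this f*

def phi1(x):
    # Phi_1(x) = [f*_min, min(f*(x), f*_min + 1)], as an interval (lo, hi)
    return (f_min, min(f_star(x), f_min + 1))

for x, y in ((0.5, 0.7), (0.0, 0.0), (-0.2, 0.0)):
    lo, hi = phi1(x)
    assert lo <= y <= hi                                   # y ∈ Phi_1(x)
    xs = [x + (-1) ** m / (m + 3) for m in range(1, 200)]  # x_m -> x
    ys = [min(f_star(xm), f_min + 1, y) for xm in xs]      # the chosen y_m
    assert all(phi1(xm)[0] <= ym <= phi1(xm)[1] for xm, ym in zip(xs, ys))
    assert abs(ys[-1] - y) < 1e-6                          # y_m -> y
```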
Now for the induction step. The stuff we’ll be assuming for our induction step is that fn≤f∗ (we have this from the base case), and that fn is continuous (we have this from the base case). Now,
Φn+1(x):=[min(max(fn(x),gn(x)),f∗min+n),min(f∗(x),f∗min+n+1)]
Because fn≤f∗ by induction assumption, and gn≤f∗, because the definition of gn was
gn(x):=inf{f∗(x′)|d(x,x′)<1n}
we have nonemptiness, and closure and convexity are obvious, so we just need to verify lower-hemicontinuity. Lower-hemicontinuity is, if y∈Φn+1(x), and there’s a sequence xm limiting to x, there’s a (sub)sequence ym∈Φn+1(xm) that limits to y. We can define
ym:=max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))
Now,
ym=max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))
≥min(max(fn(xm),gn(xm)),f∗min+n)
And also,
max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))
≤max(min(f∗(xm),y,f∗min+n+1),min(f∗(xm),f∗min+n))
because f∗≥fn and gn, by our induction assumption and definition of gn respectively. Then, we can observe that because
f∗(xm)≥min(f∗(xm),y,f∗min+n+1)
f∗(xm)≥min(f∗(xm),f∗min+n)
we have
f∗(xm)≥max(min(f∗(xm),y,f∗min+n+1),min(f∗(xm),f∗min+n))
and also we have
f∗min+n+1≥min(f∗(xm),y,f∗min+n+1)
f∗min+n+1≥min(f∗(xm),f∗min+n)
so we have
f∗min+n+1≥max(min(f∗(xm),y,f∗min+n+1),min(f∗(xm),f∗min+n))
Putting all this together, our net result is that
min(f∗(xm),f∗min+n+1)≥max(min(f∗(xm),y,f∗min+n+1),min(f∗(xm),f∗min+n))
Combining this with previous results (because the latter term is an upper-bound to our ym when unpacked), we have:
max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))
≤min(f∗(xm),f∗min+n+1)
putting the upper and lower bounds together, we have:
ym=max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))
∈[min(max(fn(xm),gn(xm)),f∗min+n),min(f∗(xm),f∗min+n+1)]=Φn+1(xm)
So it’s an appropriate sequence of ym to pick. I will reiterate that, since fn is continuous (induction assumption) and we already established that all the gn are upper-semicontinuous, and the sup of a continuous function and upper-semicontinuous function is upper-semicontinuous as I’ve shown, we have upper-semicontinuity for sup(fn,gn). Remember that. Now, we can go:
limsupm→∞max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))
=max(limsupm→∞min(f∗(xm),y,f∗min+n+1),limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n))
≤max(y,limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n))
Up to this point, what we did is move the limsup into the max for the equality, and then swapped our min of three components for one of the components (the y), producing a higher value.
Now, we can split into two exhaustive cases. Our first possible case is one where f∗min+n≤max(fn(x),gn(x)). In such a case, we can go:
max(y,limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n))
≤max(y,f∗min+n)=max(y,min(max(fn(x),gn(x)),f∗min+n))=y
This occurred because we swapped out our min of two components for one of the components, the constant lower bound, producing a higher value. Then, the equality was because, by assumption, f∗min+n≤max(fn(x),gn(x)), so we can alter the second term. Then, we finally observe that because y∈Φn+1(x), and min(max(fn(x),gn(x)),f∗min+n) is the lower-bound on said set, y is the larger of the two.
Our second possible case is one where f∗min+n≥max(fn(x),gn(x)). In such a case, we can go:
max(y,limsupm→∞min(max(fn(xm),gn(xm)),f∗min+n))
≤max(y,limsupm→∞max(fn(xm),gn(xm)))≤max(y,max(fn(x),gn(x)))
=max(y,min(max(fn(x),gn(x)),f∗min+n))=y
The first inequality was because we swapped out our min of two components for one of the components, producing a higher value regardless of the m. The second inequality was because sup(fn,gn) is upper-semicontinuous, as established before. The equality then is just because we’re in the case where f∗min+n≥max(fn(x),gn(x)). Finally, we just observe that the latter term is the lower-bound for Φn+1(x), which is the set that y lies in, so y is greater. These cases were exhaustive, so we have a net result that
limsupm→∞max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))≤y
Now for the other direction.
liminfm→∞max(min(f∗(xm),y,f∗min+n+1),min(max(fn(xm),gn(xm)),f∗min+n))
≥liminfm→∞min(f∗(xm),y,f∗min+n+1)
=min(liminfm→∞f∗(xm),y,f∗min+n+1)
≥min(f∗(x),y,f∗min+n+1)=min(y,min(f∗(x),f∗min+n+1))=y
The first inequality was swapping out the max for just one of the components, as that reduces the value of your liminf. Then, for the equality, we just distribute the liminf in, and neglect it on the constants. The next inequality after that is lower-semicontinuity of f∗, then we just regroup the mins in a different way. Finally, we observe that min(f∗(x),f∗min+n+1) is the upper-bound on Φn+1(x), which y lies in, so y is lower and takes over the min.
Since the limsup of this sequence is below y and the liminf is above y, we have:
limm→∞ym=y
And we’ve shown lower-hemicontinuity, and the Michael selection theorem takes over from there, yielding a continuous fn+1. Now, let’s verify the following facts in order to show A: that the induction proceeds all the way up, and B: that the induction produces a sequence of functions fn fulfilling the following properties.
First: fn+1≤f∗. This is doable by fn+1≤inf(f∗,f∗min+n+1)≤f∗, from our Φn+1 upper bound. It holds for our base case as well, since the upper bound there was inf(f∗,f∗min+1).
Second: fn+1≤f∗min+n+1, which is doable by the exact same sort of argument as our first property.
Third: fn+1≥fn. This is doable by
fn+1≥inf(sup(gn,fn),f∗min+n)≥inf(fn,f∗min+n)=fn
Where the inequality uses that fn+1 is a selection of Φn+1, and the lower bound on Φn+1. Then we just swap out some contents for lower contents, and use our second fact which inducts up the tower to establish that f∗min+n is an upper bound on the function fn. This is our one missing piece to show that our induction proceeds through all the n.
Fourth: fn+1≥inf(gn,f∗min+n). Same argument as before, except we swap sup(gn,fn) out for gn instead.
At this point, we’ve built a sequence of continuous functions fn where you always have fn+1≥fn, and fn+1≥inf(gn,f∗min+n) and fn≤f∗, regardless of n.
Our last step is to show that gn limits to f∗ pointwise, ie, regardless of x, limn→∞gn(x)=f∗(x).
We recall that the definition of gn was gn(x):=inf{f∗(x′)|d(x,x′)<1n}
Obviously, as n goes up, gn(x) goes up, it’s monotonically increasing, since we’re minimizing over fewer and fewer points each time, the open ball is shrinking. So the limit exists. And we also know that all the gn lie below f∗. So, to show that gn limits to f∗ pointwise, we just have to rule out the case where limn→∞gn(x)<f∗(x).
Assume this was the case. Then, for each n we can pick a point xn less than 1n away from x that comes extremely close to attaining the infimum value. These xn get closer and closer to x, they limit to it. So, we’d have:
limn→∞gn(x)=limn→∞f∗(xn)<f∗(x)
But the definition of lower-semicontinuity is that
liminfn→∞f∗(xn)≥f∗(x)
So we have a contradiction, and this can’t be the case. Thus, the gn limit to f∗ pointwise.
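This pointwise convergence is easy to watch numerically (my own toy instance, not from the post): with the lower-semicontinuous step f∗ that’s 0 for x≤0 and 1 for x>0, approximating gn(x) by minimizing f∗ over a fine grid on the ball of radius 1/n shows gn(x) climbing monotonically to f∗(x).

```python
def f_star(x):
    return 0.0 if x <= 0 else 1.0   # lower-semicontinuous step function

def g_n(x, n, grid=10_000):
    # approximate inf of f* over the open ball of radius 1/n around x
    r = 1.0 / n
    pts = [x - r + 2 * r * (k + 0.5) / grid for k in range(grid)]
    return min(f_star(p) for p in pts)

for x in (-0.3, 0.0, 0.5):
    vals = [g_n(x, n) for n in (1, 2, 4, 8, 16, 32)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))   # monotone in n
    assert abs(vals[-1] - f_star(x)) < 1e-9              # converging to f*(x)
```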
Now, we will attempt to show that the fn limit to f∗ pointwise. Let x be arbitrary, and we have
f∗(x)≥limn→∞fn(x)=limn→∞fn+1(x)≥limn→∞min(gn(x),f∗min+n)
The first inequality was f∗≥fn for all n, the equality was just a reindexing, and the second inequality was fn+1≥inf(gn,f∗min+n) (from our induction). Now, we can split into two cases. In case 1, gn(x) diverges to ∞. Since f∗min+n diverges as well, we have
limn→∞min(gn(x),f∗min+n)=limn→∞gn(x)
In case 2, gn(x) doesn’t diverge, but since f∗min+n does, we have
limn→∞min(gn(x),f∗min+n)=limn→∞gn(x)
So, in either case, we can continue by
=limn→∞gn(x)=f∗(x)
Because the gn limit to f∗ pointwise. Wait, we just showed f∗(x)≥f∗(x) no matter what. So all the inequalities must be equalities, and we have
f∗(x)=limn→∞fn(x)
And now we have our result, that any lower-bounded lower-semicontinuous function f∗ can be written as the pointwise limit of an increasing sequence of bounded continuous functions (since all the fn were bounded above by some constant and bounded below by f∗min.)
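As an aside, there’s a more computational route to the same conclusion than the selection-theorem construction used above: the standard inf-convolution fn(x)=infx′(f∗(x′)+n⋅d(x,x′)) is n-Lipschitz (hence continuous), increasing in n, below f∗, and converges to f∗ pointwise for lower-bounded lower-semicontinuous f∗. A grid-based sketch of it (my own toy instance, not the post’s construction):

```python
def f_star(x):
    return 0.0 if x <= 0 else 1.0   # lower-bounded, lower-semicontinuous

GRID = [k / 1000 - 2 for k in range(4001)]   # a fine grid on [-2, 2]

def f_n(x, n):
    # inf-convolution: n-Lipschitz, increasing in n, and below f*
    return min(f_star(xp) + n * abs(x - xp) for xp in GRID)

for x in (-0.5, 0.0, 0.25, 1.0):
    vals = [f_n(x, n) for n in (1, 2, 4, 8, 16)]
    assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))   # ascending in n
    assert all(v <= f_star(x) + 1e-12 for v in vals)             # below f*
```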
Theorem 1/Monotone Convergence Theorem For Inframeasures: Given X a Polish space, Ψ an inframeasure set over X, f∗ a lower-bounded lower-semicontinuous function X→(−∞,∞], and {fn}n∈N an ascending sequence of lower-bounded lower-semicontinuous functions X→(−∞,∞] which limit pointwise to f∗, then limn→∞EΨ[fn]=EΨ[f∗]
We’ll need to do a bit of lemma setup first. Said intermediate result is that, if some f∞ is lower-semicontinuous and lower-bounded, then the function from a-measures to (−∞,∞] given by (m,b)↦m(f∞)+b is lower-semicontinuous.
By Proposition 3, we can craft an ascending sequence fn of bounded continuous functions which limit pointwise to f∞.
Now to establish lower-semicontinuity of (m,b)↦m(f∞)+b. Let (mm,bm) limit to (m∞,b∞). Let n be arbitrary. Then
liminfm→∞mm(f∞)+bm≥limm→∞mm(fn)+bm=m∞(fn)+b∞
The first inequality is because f∞ is above the fn sequence that limit to it. We then use limits because the sequence converges now. Since fn is continuous and bounded, and (mm,bm) converges to (m∞,b∞), then mm(fn)+bm must converge to m∞(fn)+b∞.
Anyways, now that we know that for arbitrary n,
liminfm→∞mm(f∞)+bm≥m∞(fn)+b∞
We can get the inequality that
liminfm→∞mm(f∞)+bm≥limn→∞(m∞(fn)+b∞)=m∞(f∞)+b∞
Where the equality comes from Beppo Levi’s monotone convergence theorem, since the fn sequence is an ascending sequence of lower-bounded functions. Accordingly, we now know that if f∞ is lower-bounded and lower-semicontinuous, then (m,b)↦m(f∞)+b is a lower-semicontinuous function. Let’s proceed.
Remember, the thing we’re trying to show is that
limn→∞EΨ[fn]=EΨ[f∗]
Where all the fn are lower-semicontinuous and lower-bounded, and limit to f∗ pointwise. One direction of this is pretty easy to show.
limn→∞EΨ[fn]=limn→∞inf(m,b)∈Ψm(fn)+b≤limn→∞inf(m,b)∈Ψm(f∗)+b
=inf(m,b)∈Ψm(f∗)+b=EΨ[f∗]
The first equality is just unpacking what expectation means, then we use that f∗≥fn regardless of n to show that all the points in Ψ are like “yup, the value increased” to get the inequality. Then we just observe that it doesn’t depend on n anymore and pack up the expectation, yielding
limn→∞EΨ[fn]≤EΨ[f∗]
So, that’s pretty easy to show. The reverse direction, that
EΨ[f∗]≤limn→∞EΨ[fn]
is much trickier to show.
We’ll start with splitting into cases. The first case is that Ψ is the empty set, in which case, everything is infinity, and the reverse inequality holds and we’re done.
The second case is that Ψ isn’t empty. In such a case, for each fn, we can pick (mn,bn) a-measures from the minimal points of Ψ s.t.
mn(fn)+bn≃inf(m,b)∈Ψm(fn)+b
coming as close to minimizing said functions within Ψ as we like!
Now, we can again split into two cases from here. In our first case, the bn sequence from the (mn,bn) sequence diverges. Then, in such a case, we have
limn→∞EΨ[fn]=limn→∞mn(fn)+bn≥limn→∞inf(0,λ⊙⋅infx(f1(x)))+bn=∞≥EΨ[f∗]
Why does this work? Well, the (mn,bn) approximately minimize the expectation value of fn. Then, the worst-case value of mn(fn) is either 0, or the amount of measure in mn times the lowest value of fn. Since it’s an ascending sequence of functions, we can lower-bound this by (amount of measure in mn) times (worst-case value of f1). And, since we’re picking minimal points, and there’s an upper bound on the amount of measure present in minimal points of an inframeasure set from the Lipschitz criterion (our finite λ⊙ value), the worst-case value we could possibly get for mn(fn) is either 0 or the maximum amount of measure possible times a lower bound on f1. Both of these quantities are finite, so we get a finite negative number. But the bn are assumed to diverge, so the expectation values head off to infinity.
Ok then, we’re on to our last case. What if Ψ isn’t empty, and our sequence (mn,bn) of a-measures has a subsequence where the bn doesn’t diverge? Well, the only possible way a sequence of a-measures in a nonempty inframeasure set can fail to have a convergent subsequence is for the b values to diverge, from the compact-projection property for an inframeasure set.
So, in our last case, we can isolate a convergent subsequence of our (mn,bn) sequence of a-measures, which converges to the a-measure (m∞,b∞). Let’s use the index i to denote this subsequence. At this point, the argument proceeds as follows. Let j be an arbitrary natural number.
limn→∞EΨ[fn]=limi→∞EΨ[fi]
=limi→∞mi(fi)+bi≥liminfi→∞mi(fj)+bi≥m∞(fj)+b∞
In order, this is just “we went to a subsequence of an ascending sequence, so it’s got the same limit”, then swapping out the worst-case value for the actual minimizing point. Then we just use that since the sequence of functions is ascending, eventually mi(fi)+bi will blow past mi(fj)+bi (at timestep j) because j is a fixed constant. Then we just apply that fj is lower-bounded and lower-semicontinuous, so the function (m,b)↦m(fj)+b is lower-semicontinuous, getting our inequality we pass to (m∞,b∞).
Since this holds for arbitrary j, this then means we have
limn→∞EΨ[fn]≥limj→∞m∞(fj)+b∞
And then we can go
limj→∞m∞(fj)+b∞=m∞(f∗)+b∞≥inf(m,b)∈Ψm(f∗)+b=EΨ[f∗]
And we have the inequality going the other way, proving our result! This last stretch was done via Beppo Levi’s Monotone Convergence Theorem for the equality, then the inequality is because nonempty inframeasure sets Ψ are closed, so the limit point (m∞,b∞) also lies in Ψ, and then packing up definitions.
Since we’ve proven both directions of the inequality, we have
limn→∞EΨ[fn]=EΨ[f∗]
and we’re done!
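As a concrete sanity check of this monotone-convergence result, here is a minimal Python sketch with a hypothetical two-point space and a hand-picked finite set of a-measures. All names here (expectation, Psi) are illustrative inventions, not from any library, and the finiteness of everything makes the proposition’s hypotheses hold trivially:

```python
# Toy model: X = {0, 1}, an "inframeasure set" Psi given as a finite list of
# a-measures (m, b), with m a pair of nonnegative weights and b a real number.
# E_Psi[f] is the worst-case expectation: min over (m, b) of m(f) + b.

def expectation(Psi, f):
    return min(sum(mi * fi for mi, fi in zip(m, f)) + b for m, b in Psi)

# A hand-picked finite set of a-measures on the two-point space.
Psi = [((1.0, 0.0), 0.0), ((0.5, 0.5), 0.2), ((0.0, 1.0), 0.5)]

# An ascending sequence f_n limiting pointwise to f_star from below.
f_star = (2.0, 3.0)
def f_n(n):
    return tuple(v - 1.0 / (n + 1) for v in f_star)

# E_Psi[f_n] climbs up toward E_Psi[f_star], matching the proposition.
assert expectation(Psi, f_star) == 2.0
assert abs(expectation(Psi, f_n(10**6)) - 2.0) < 1e-5
```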
Proposition 4: All compact Polish spaces are compact second-countable LHC spaces.
Proof: Compactness is obvious. All Polish spaces are second-countable. So that just leaves the LHC property. Our compact hull operator K will be taken to just be the closure of the set. Since the space X is compact, all closed subsets of it are compact as well. The two properties
∀U∈B(X):U⊆K(U)
∀U1,U2∈B:U1⊆U2→K(U1)⊆K(U2)
are trivially fulfilled when K is interpreted as set closure.
That leaves the LHC property. Since all Polish spaces are Hausdorff, the various definitions of local compactness coincide, and the space is compact, so all definitions of local compactness hold. So, given some O and x∈O, by local compactness for the space X, we can find a compact set C and open set O′ s.t. x∈O′⊆C⊆O. Since O′ is open, it can be written as a union of sets from the topology base. Now, pick a set from the base that helps make up O′ and contains x, call it U. We have x∈U⊆O′⊆C. In Hausdorff spaces, all compact sets are closed, so C is a closed superset of U, and so K(U) (the closure of U) is a subset of C. Since K(U)⊆C⊆O, and U⊆K(U) (since K is just closure), we have x∈U⊆K(U)⊆O, as desired. This works for any x and O, so the space fulfills the LHC property.
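The chain of inclusions x∈U⊆K(U)⊆O can be illustrated numerically. Below is a toy Python sketch with X=[0,1], open intervals standing in for base elements; the specific numbers are arbitrary choices, not from the text:

```python
# Toy illustration in X = [0, 1]: open intervals play the base elements,
# K(U) = closure(U) is the compact hull.  Intervals are (lo, hi) pairs.

def closure(U):
    lo, hi = U
    return (lo, hi)               # stands for the closed interval [lo, hi]

def in_open(U, x):
    return U[0] < x < U[1]

def closed_subset_of_open(C, O):
    return O[0] < C[0] and C[1] < O[1]

O = (0.2, 0.8)                    # an open set containing x
x = 0.5
U = (0.4, 0.6)                    # base element given by local compactness
K_U = closure(U)                  # [0.4, 0.6], compact since X is compact

assert in_open(U, x)                   # x ∈ U
assert closed_subset_of_open(K_U, O)   # K(U) ⊆ O
```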
Proposition 5: All open subsets of ω-BC domains are second-countable LHC spaces.
Given D, an ω-BC domain, it has a countable basis. You can close the countable basis under finite suprema (when they exist), and the set will stay countable and stay a basis, but now it’s closed under finite suprema. Do that, to get a countable basis B which is closed under finite suprema.
Our attempted base for the open subset O⊆D will be the sets of the form x⇈ for x∈B∩O, ie, the set of all the points y where x≪y, where x is in the countable basis for the domain and also in O. Also the empty set. This is clearly countable, because B is.
To show it’s a topology base, we need to check closure under finite intersection, check that all these are open, and check that every open set in O can be made by unioning these things together.
First, closure under finite intersection. In order to do this, we’ll need to show a lemma that if x≫xi for finitely many xi, then x≫⊔i≤nxi. This occurs because, given any directed set with a supremum of x or higher, since x≫xi, you can find a yi in the directed set which is above xi, and then take an upper bound for the finitely many yi which exists within your directed set, and it’ll exceed ⊔i≤nxi. So, every directed set with a supremum of x or higher has an element which matches or exceeds the supremum of the xi, so the supremum of the xi approximates x.
Now that we have this result, we’ll show that ⋂i≤nxi⇈=(⊔i≤nxi)⇈
In the two directions, anything which is approximated by ⊔i≤nxi is also approximated by anything below it, ie, all the xi, so we have ⋂i≤nxi⇈⊇(⊔i≤nxi)⇈
And, for the reverse direction, we’ve already established that anything which all the xi approximate will also have the supremum of the xi approximate it. Since our countable basis B was made to be closed under finite suprema, the open set associated with the supremum of the xi is also in our basis, so our base is closed under finite intersection.
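In a finite lattice the way-below relation collapses to the order, so x⇈ becomes the upper set x↑ and the identity we just proved reads ⋂i≤n↑xi=↑(⊔i≤nxi). Here is a small Python sketch checking that in the powerset lattice of {0,1,2} (an illustrative choice, not tied to anything specific in the text):

```python
from itertools import chain, combinations

# Powerset lattice of {0, 1, 2}: order is subset inclusion, join is union.
def powerset(base):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

D = powerset({0, 1, 2})

def up(x):
    """Upper set of x: everything in D that x lies below."""
    return {y for y in D if x <= y}   # frozenset <= is the subset test

xs = [frozenset({0}), frozenset({1})]
join = frozenset(set().union(*xs))

# Intersection of the upper sets equals the upper set of the join.
lhs = set.intersection(*[up(x) for x in xs])
rhs = up(join)
assert lhs == rhs
```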
Second, checking that they’re open. Sets of the form x⇈ are always open in the Scott-topology, and every Scott-open set is closed upwards, so all the sets in this base are open in the original domain D, and so remain open when we restrict to the open subset of D and equip it with the subspace topology.
All that remains to show that this is a base is that it can make any Scott-open subset of O by union. By Proposition 2.3.6.2, a set U is open in the Scott-topology on D iff U=⋃x∈U∩Bx⇈.
Since all of our open subsets of O are open in D, they can be written as a union of sets from our countable collection by this result.
We have crafted a countable base for our open subset of an ω-BC domain D, so it’s indeed second-countable.
Now for the LHC property. The compact hull operator K maps the set x⇈ to (⊓x⇈)↑. This is compact because all Scott-opens are closed upwards, so an open cover of (⊓x⇈)↑ must have an open which includes ⊓x⇈ itself, and this single open covers the entire set. This set is a subset of x↑ because everything that x approximates lies above x, so x is a lower bound, and thus must equal or lie below ⊓x⇈. Since x is in our open subset O, the same must also apply to this lower bound, because Scott-open sets are closed up.
For our first property of a compact hull operator, we need to show that x⇈⊆(⊓x⇈)↑. This is easy: anything which lies in x⇈ must lie above the infimum of that set.
For our second property of a compact hull operator, we need to show that x⇈⊆y⇈ implies (⊓x⇈)↑⊆(⊓y⇈)↑. For this, assuming the starting assumption, by basic properties of inf, we have ⊓x⇈⊒⊓y⇈, so then anything above that first point must also be above the second point, so we have our result.
Now, we just need to check the LHC property. Let U be an open subset of O, and let x∈U. We can consider the directed set of approximants to x from the basis of our domain D, called Bx. Scott-opens have the feature that for any directed set with a supremum in the open set (and Bx is directed and has a supremum of x, which lies in U), there’s an element in the directed set which also lies in the open set. So, we can find a y which lies in the basis, and y≪x, and y∈U. Then just let your open set from the base be y⇈. This is an open from the base. It contains x. And, K(y⇈)⊆y↑⊆U, because y∈U and Scott-opens are closed upwards.
And thus, our result is shown.
Corollary 1: All ω-BC domains are compact second-countable LHC spaces.
Just use Proposition 5 when your open set is all of your domain to get second-countability and LHC-ness. For compactness, any open cover of the domain D must have an open set which includes the bottom point ⊥, and Scott-opens are closed upwards, so we can isolate a particular open set which covers the entire domain.
Theorem 2: If X is second-countable and locally compact, then [X→[0,1]] is an ω-BC domain, as is [X→[−∞,∞]].
To begin, [−∞,∞] is topologically identical to [0,1], so we can just prove it for the [0,1] case to get both results.
As a recap, the Scott-topology on [0,1] is the topology with the open sets being ∅,[0,1], and all sets of the form (q,1] for q∈[0,1).
The space [X→[0,1]] is the space of all continuous functions X→[0,1], where [0,1] is equipped with the Scott-topology, which has been equipped with a partial order via f⊑g↔∀x∈X:f(x)≤g(x)
We will actually show a stronger result, that this space is a continuous complete lattice with a countable basis, as this implies that the set is an ω-BC domain.
To show this, we’ll need to show that the space is closed under arbitrary suprema. This is sufficient to show that the space is a complete lattice because, to get the infimum of the empty set, we can get a top element by taking the supremum of everything, and to get the infimum of any nonempty set of elements, we can take the set of lower bounds (which is nonempty because a bottom element exists, the supremum of the empty set), and take the supremum of that. Then we just need to find a countable basis, and we’ll be done. Most of the difficulty is in finding the countable basis for the function space.
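The “infima from suprema” trick can be made concrete in a toy complete lattice. The Python sketch below (powerset of {0,1,2} under ⊆, all names ad hoc) computes the infimum of a nonempty family as the supremum of its lower bounds, and checks it against plain intersection:

```python
from itertools import chain, combinations

# Powerset lattice of {0, 1, 2}: order is subset inclusion, sup is union.
def powerset(base):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

D = powerset({0, 1, 2})

def inf_via_sup(S):
    """Infimum of nonempty S, computed as the sup (union) of all lower bounds."""
    lower_bounds = [x for x in D if all(x <= s for s in S)]
    return frozenset(set().union(*lower_bounds))

S = [frozenset({0, 1}), frozenset({1, 2})]
# The sup-of-lower-bounds recipe recovers the ordinary intersection.
assert inf_via_sup(S) == frozenset({1})
```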
Showing there’s a bottom element (sup of the empty set) is easy, just consider the function which maps everything in X to 0.
Now, we’ll show that suprema exist for all nonempty sets of functions. Let F be our nonempty set of functions.
The natural candidate for the supremum of the functions is x↦supf∈Ff(x), call this function ⊔F. It’s a least upper bound of all the functions f∈F, the only fiddly part we need to show is that this function is a continuous function X→[0,1], in order for it to appear in the space [X→[0,1]].
Fix an arbitrary open set in [0,1], we’ll show that the preimage is open in X. If the set is the empty set or [0,1] itself, then the preimage will be the empty set or all of X, so those two cases are taken care of. Otherwise, the open set is of the form (q,1] for q∈[0,1). Now, we have
(⊔F)−1((q,1])={x∈X|(⊔F)(x)>q}
={x∈X|supf∈Ff(x)>q}={x∈X|∃f∈F:f(x)>q}
=⋃f∈F{x∈X|f(x)>q}=⋃f∈Ff−1((q,1])
And now we see that this preimage can be written as a union of preimages of open sets, which are open since all the f are continuous, so the set (⊔F)−1((q,1]) is open. Since q was arbitrary in [0,1), we have that the preimage of all open sets in the unit interval is open in X, so ⊔F is indeed continuous and exists in [X→[0,1]], to serve as the least upper bound there.
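The preimage identity that drove this argument is easy to spot-check numerically. Here is a Python sketch on a finite sample of X=[0,1], with an arbitrary finite family F of functions:

```python
# Check (⊔F)⁻¹((q,1]) = ⋃_f f⁻¹((q,1]) on a finite grid sample of X = [0,1].
# The family F and the threshold q are arbitrary illustrative picks.

X = [i / 100 for i in range(101)]
F = [lambda x: x, lambda x: 1 - x, lambda x: 0.5]

def sup_F(x):
    return max(f(x) for f in F)

q = 0.7
lhs = {x for x in X if sup_F(x) > q}                       # preimage of sup
rhs = set().union(*[{x for x in X if f(x) > q} for f in F])  # union of preimages
assert lhs == rhs
```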
All that remains is coming up with a countable basis for [X→[0,1]], which is the hard part. Let B(X) be a countable base of X (this can be done since X is second-countable). Given an open set U∈B(X) and rational number q∈Q∩[0,1], we define the atomic step function (U↘q) as:
x∈U→(U↘q)(x)=q
x∉U→(U↘q)(x)=0
First things first is showing these are continuous. Clearly, any preimage will either be the empty set, all of X, or the open set U (because there’s no open set in [0,1] with the Scott-topology that contains 0 without containing q), so they’re indeed continuous.
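A minimal Python sketch of these atomic step functions, checking on a sampled grid that the preimage of a Scott-open (r,1] comes out as either U or the empty set (the grid and the interval U are arbitrary illustrative choices):

```python
# (U ↘ q) maps points of U to q and everything else to 0.

def atomic_step(U, q):
    """U is a membership predicate; returns the function (U ↘ q)."""
    return lambda x: q if U(x) else 0.0

X = [i / 10 for i in range(11)]            # finite sample of X
U = lambda x: 0.2 < x < 0.6                # an open interval as a predicate
f = atomic_step(U, 0.5)

def preimage(f, r):
    """Sampled preimage of the Scott-open (r, 1]."""
    return {x for x in X if f(x) > r}

assert preimage(f, 0.3) == {x for x in X if U(x)}   # r < q: preimage is U
assert preimage(f, 0.5) == set()                     # r ≥ q: preimage is empty
```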
Our attempted countable basis B will be all of these atomic step functions, and all finite suprema of such. Suprema of arbitrary sets of functions always exist as we’ve shown, so this is well-defined. It’s countable because there’s countably many choices of open set, countably many choices of rational number, and we’re considering only the finite subsets of this countable set.
The hard part is showing that for any f, it can be built as the supremum of stuff from this basis which approximates f. First up, as a preliminary, is showing that Bf is a directed set for some arbitrary function f. Bf is the set of all elements of the basis which approximate f. What we can do is pick two arbitrary functions from the basis, g1 and g2 s.t.g1≪f and g2≪f. From back in Proposition 5, we know that g1⊔g2≪f. Since our basis is closed under finite suprema, we now know that the intersection of the basis and the approximants to f is closed under finite suprema, so it’s directed.
So, now that we know that Bf is a directed set for arbitrary f, we must ask, does f=⨆↑Bf? For one direction, we have that every function in Bf approximates f, so it must be less than f itself, so we have ⨆↑Bf⊑f trivially. So let’s get the other direction going, by showing ∀x∈X:⨆↑Bf(x)≥f(x)
First, let’s solve the case where f(x)=0. We’d have ⨆↑Bf(x)≥0=f(x)
Now we can assume that f(x)>0. We now have
⨆↑Bf(x)=⨆↑(f⇊∩B)(x)≥⨆(f⇊∩{(U↘q)|U∈B(X)∧q∈Q∩[0,1]})(x)
The inequality is because now we’re restricting to the atomic step functions instead of all the suprema of such. And then we have
=⨆(U↘q)≪f:U∈B(X)∧q∈Q∩[0,1](U↘q)(x)
=sup{q∈Q∩[0,1]|∃U∈B(X):x∈U∧(U↘q)≪f}
Why is this? Well, (U↘q)(x) is 0 if x∉U, and q if x∈U. So, if, for a rational number q>0, we can find an open set U s.t. x∈U, and (U↘q)≪f, then the supremum of atomic step functions must map x to q or higher. And if there’s an atomic step function which approximates f and maps x to some q>0, we can read out the open set from it to certify that q lies in the latter set of rational numbers we’re taking the supremum over.
Our proof target will be to show that
sup{q∈Q∩[0,1]|∃U∈B(X):x∈U∧(U↘q)≪f}≥sup{q∈Q∩[0,1]|q<f(x)}
If we could hit that proof target, then we could go
⨆↑Bf(x)≥sup{q∈Q∩[0,1]|q<f(x)}=f(x)
With the inequality coming from chaining the equalities and inequalities so far, and the proof target, and the equality coming from f(x)>0 (in the case we’re working in) and real numbers being the supremum of rational numbers below them. This works for arbitrary x with f(x)>0, and we’ve already addressed the 0 case, so we’d establish f⊑⨆↑Bf
And, since we’ve already shown the other direction of the inequality, we’d have ⨆↑Bf=f for arbitrary f, showing that [X→[0,1]] has a countable basis, so it’s an ω-BC domain since that’s the last piece we need, and we’d be done.
So, all we need to pull off to finish up our proof is to show
sup{q∈Q∩[0,1]|∃U∈B(X):x∈U∧(U↘q)≪f}≥sup{q∈Q∩[0,1]|q<f(x)}
This can be done if we can prove
q<f(x)→∃U∈B(X):x∈U∧(U↘q)≪f
So let this be our new proof target. We’re working in the case where f(x)>0, so we can safely assume the starting precondition that q<f(x). Let’s construct the open set U we need, by finding a compact neighborhood of x within f−1((q,1]).
Because q<f(x), we have that x∈f−1((q,1]), which is an open set. Because X is locally compact, we can always find a compact set K and open set U from the base s.t.
x∈U⊆K⊆f−1((q,1])
Let this be our U. x∈U, trivially. So all that’s left now is to prove that (U↘q)≪f
and we’ll hit our implication proof target. So this is our new proof target.
To establish this, we’ll take an arbitrary directed set G⊆[X→[0,1]] s.t. f⊑⨆↑G, and prove that we can always find an element from it that exceeds (U↘q). So fix such a G.
Remember that we’ve already got fixed subsets of X, denoted by U and K, with U being open, and K being compact, and x∈U⊆K⊆f−1((q,1])
Let’s begin. Given an arbitrary y∈K, we have y∈K⊆f−1((q,1]) so f(y)>q. Then, we have
q<f(y)≤⨆↑G(y)=⨆↑g∈Gg(y)
So, each y∈K can be associated with some function gy∈G where gy(y)>q, ie, y∈gy−1((q,1]), just pick an appropriate one out of the supremum. So, we have
∀y∈K∃gy∈G:y∈gy−1((q,1])
And then, this means that {gy−1((q,1])}y∈K is an open cover of K, since all these preimages of open sets are open, and every point y is covered by the appropriate preimage for gy. Since K is compact, there’s a finite open subcover of K. Denote the corresponding functions as gi. So, {gi−1((q,1])}i≤n is a finite open cover of K.
Since we have finitely many functions gi from G picked out, there’s an upper bound to all of them which still lies in G, call this function g. Since g⊒gi for all of the finitely many functions we have, we know that
gi−1((q,1])⊆g−1((q,1])
for all the finitely many gi (g is a higher function, so a preimage of high points through g must be a superset of the preimage through gi). And so, we have
K⊆⋃i≤ngi−1((q,1])⊆g−1((q,1])
Because these finitely many preimages are all subsets of the preimage of g, and they’re an open cover of K.
Since g∈G, we will now endeavor to show that (U↘q)⊑g and we’ll be done. For x∉U, we have
g(x)≥0=(U↘q)(x)
So, that’s taken care of. Now, what if x∈U? Well,
x∈U⊆K⊆g−1((q,1])
The first two subsets are because of how we constructed U in the first place, and the third was derived at the end of our open cover arguments. This means that g(x)>q. And then we can get
(U↘q)(x)=q<g(x)
So, in both cases, g(x)≥(U↘q)(x). And so we have (U↘q)⊑g and g∈G. G was an arbitrary directed set with supremum of f or above, so we have (U↘q)≪f. And that’s the last part that we need! We’re done now.
Corollary 2:The space [[X→[0,1]]→[0,1]] is an ω-BC domain with top element when X is second-countable and locally compact.
This is easy, we just apply Theorem 2 to get that [X→[0,1]] is an ω-BC domain, then apply Corollary 1 to get that it’s second-countable and locally compact, then apply Theorem 2 again to get that it’s an ω-BC domain. The top element is the constant function which maps everything to 1.
Lemma 1: If X is compact, then any continuous function f:X→(−∞,∞], where (−∞,∞] is equipped with the Scott-topology, has a finite infimum infx∈Xf(x).
Proof: The family of sets f−1((q,∞]) for q∈R is an open cover of X. This is because, since f is continuous, and (q,∞] is Scott-open, the preimage is open. It covers X because all points in X are mapped to ∞ or a real number. Since X is compact, there’s a finite open subcover, yielding finitely many real numbers q s.t. sets of the form f−1((q,∞]) cover X. Take the least of these, to get a finite lower bound on the value of f over all of X.
Lemma 2: If X is compact, then given any continuous function f:X→(−∞,∞], if q<infx∈Xf(x), then cq≪f in [X→(−∞,∞]]. The same result holds for X→[0,1].
Fix a directed set of functions G s.t. f⊑⨆↑G. For any particular y∈X, we have
q<infx∈Xf(x)≤f(y)≤⨆↑g∈Gg(y)
Thus, for any particular y, there’s a function gy where gy(y)>q. So, the set gy−1((q,∞]) is open (a preimage of an open) and contains y. This can be done for all points in X, to get an open cover of X. X is compact, so there’s a finite subcover, and we can find finitely many gi s.t. X⊆⋃i≤ngi−1((q,∞]). This is saying that for any point in X, there’s a function gi that maps it above q. Then, we can take an upper bound to all these gi to get a function g∈G that maps all points in X to a value above q. Then, we have, for arbitrary x∈X, that cq(x)=q<g(x), so cq⊑g. Since our directed set was arbitrary and we’re always able to find a function in it that beats cq, this means that cq≪f.
Proposition 6: The space [[X→(−∞,∞]]→[−∞,∞]] is an ω-BC domain with top element when X is compact, second-countable, and locally compact.
So, here’s how the proof will work. As it turns out, if we can just show that [X→(−∞,∞]] is open in the set [X→[−∞,∞]], then we’ll be done. Here’s how we can finish up the entire theorem if we just knew that result.
Since X is second-countable and locally compact, we know from Theorem 2 that [X→[−∞,∞]] is an ω-BC domain. Then, using our assumption that [X→(−∞,∞]] is an open subset, we’d get from Proposition 5 that [X→(−∞,∞]] is a second-countable LHC space, so in particular, it’s second-countable and locally compact. Then, we just appeal to Theorem 2 one more time to get that [[X→(−∞,∞]]→[−∞,∞]] is an ω-BC domain. And the top element is the constant function which maps everything to ∞.
Let’s prove that [X→(−∞,∞]] is an open subset of [X→[−∞,∞]]. The criteria for a set to be open is that it be closed upwards, and that if a directed subset has a supremum in the set of interest, then the directed subset must already have an element in the set of interest. Upwards closure is easy. If f⊑f′, and f is never −∞, then the same must apply to f′. However, showing the directed-set property takes more work, and will rely on X being compact.
Take an arbitrary directed set of functions G⊆[X→[−∞,∞]] s.t. ⨆↑G∈[X→(−∞,∞]]. Then, by Lemma 1 (since X is compact), infx∈X(⨆↑G)(x) is finite. And so, by Lemma 2,
cinfx∈X(⨆↑G)(x)−1≪⨆↑G
And then, since G is a directed set with a supremum of ⨆↑G, there must be an element of G (call it g) which exceeds cinfx∈X(⨆↑G)(x)−1. This constant function lies in [X→(−∞,∞]], and this set is closed upwards, so g also lies in the same set.
And therefore, [X→(−∞,∞]] is a Scott-open set, because it’s closed upwards and for an arbitrary directed set G with a supremum that lies in this set, there’s an element g∈G which also lies in the same set.
The rest of the theorem follows, as per earlier discussion.
Theorem 3/Retraction Theorem: If D is an ω-BC domain with a top element, and ∅≠P⊆D, then P, which is the set P equipped with the order inherited from D, is an ω-BC domain if the following two conditions hold.
1: P is closed under directed suprema.
2: P is closed under arbitrary nonempty infima.
We’ll split into two possible cases. The first case is where the top element from D also lies in P. The second case is where the ⊤ from D doesn’t lie in P.
The first case will be addressed by establishing a link between D and P which lets us transfer the nice properties of D over to P, and is where the bulk of the work is.
The second case will be done by adding our top element into the set P to make a domain P⊤. This reduces us to the first case, so we automatically know that P⊤ has all the nice properties we need. Then we just need to show that we can rip the top point off of P⊤, to get back to P, and that doing this keeps P as an ω-BC domain.
So, let’s embark on the first case, where we can assume that ⊤D∈P. We’ll define some functions linking P and D. x will be used for points in D, and y will be used for points in P. ⊑P and ⊑D will be used to denote the orders on P and D, when we want to be picky.
i is a partial function D→P which is only defined when x∈P. It maps a point in D to its corresponding point in P. Clearly it’s monotone, because the ordering on P is just the restriction of the ordering on D.
We also have a total function s:P→D, which is just the usual subset inclusion of P into D. So, of course, we have s(i(x))=x when x∈P and i(s(y))=y.
Finally, we have r:D→P, defined as
r(x):=⊓P{i(x′)|x′∈P∧x⊑Dx′}
This is well-defined because ⊤∈P and is above everything, so you’re always taking the inf of a nonempty set (it always contains i(⊤)). Further, it lies in P because
i(⊓D{x′|x′∈P∧x⊑Dx′})=⊓P{i(x′)|x′∈P∧x⊑Dx′}
This is because the infimum of elements of P is also in P by assumption, so we can inject the infimum as computed in D, and that’s the same thing as injecting everything into P and computing the infimum there.
Our first task is to show that r∘s=idP, and s∘r⊒idD. We have
(r∘s)(y)=r(s(y))=⊓P{i(x′)|x′∈P∧s(y)⊑Dx′}
=i(⊓D{x′∈P|s(y)⊑Dx′})=i(s(y))=y
The first two equalities are by the definitions. Then, it doesn’t matter whether we compute the inf in P or D, so we switch to computing the inf in D and injecting that, which can be done because inf of stuff in P is also in P. Then, we observe that s(y)∈P and s(y)⊑Ds(y), so the inf as computed in D is just s(y) exactly, and then it cancels out to yield y again.
Now for the other direction, that s∘r⊒idD. We have
(s∘r)(x)=s(r(x))=s(⊓P{i(x′)|x′∈P∧x⊑Dx′})
=s(i(⊓D{x′|x′∈P∧x⊑Dx′}))=⊓D{x′|x′∈P∧x⊑Dx′}⊒x
The first two equalities are just definitions. Then we switch the i out since P is closed under nonempty infima, the s and i cancel, and then we observe that we’re taking the inf of a bunch of stuff above x, so x is a lower bound.
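These two identities can be checked mechanically in a finite example. The Python sketch below takes D to be the powerset of {0,1,2} and P a hand-picked family containing ⊤ and closed under nonempty intersections (which play the role of infima here); r is defined exactly as above, with s being plain inclusion:

```python
from itertools import chain, combinations

# D = powerset of {0, 1, 2} under ⊆, with TOP = {0, 1, 2}.
def powerset(base):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

D = powerset({0, 1, 2})
TOP = frozenset({0, 1, 2})
# A hand-picked P: contains TOP, closed under nonempty intersections.
P = [frozenset(), frozenset({0}), frozenset({0, 1}), TOP]

def r(x):
    """r(x) = infimum (intersection) of the points of P lying above x."""
    above = [p for p in P if x <= p]       # nonempty: TOP is always in it
    return TOP.intersection(*above)

assert all(r(p) == p for p in P)                           # r ∘ s = id_P
assert all(x <= r(x) for x in D)                           # s ∘ r ⊒ id_D
assert all(r(x) <= r(y) for x in D for y in D if x <= y)   # r is monotone
```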
Now that we’ve established that, let’s show that both s and r are continuous in the Scott-topology. Functions are continuous iff they’re monotone and preserve suprema of directed sets. For s, monotonicity is trivial, because y′⊒Py iff s(y′)⊒Ds(y), due to s just being subset inclusion and P having the restriction of the order on D.
For preserving suprema of directed sets, working in P, we have
i(⨆↑y∈A,Ds(y))=⨆↑y∈A,Pi(s(y))=⨆↑PA=i(s(⨆↑PA))
This is because, since P is closed under directed suprema, it doesn’t matter whether we take the directed sup in D or in P, so we can move the i in and out. Then we just use that i and s compose to the identity. Since i preserves order, from the above work, when we strip off the i and work in D,
⨆↑y∈A,Ds(y)=s(⨆↑PA)
And bam, continuity of s is shown.
Now for establishing continuity of r:D→P. For monotonicity, if x′⊒Dx, then
r(x′)=⊓P{i(x′′)|x′′∈P∧x′⊑Dx′′}⊒P⊓P{i(x′′)|x′′∈P∧x⊑Dx′′}=r(x)
This occurs because if x′⊒Dx, then every point of P above x′ also lies above x, so the infimum for x is taken over a larger set, and must be as low or lower than the infimum of the points above x′.
For preservation of directed suprema, letting A be a directed subset of D, since
r(⨆↑DA)⊒P⨆↑x∈A,Pr(x)
automatically by monotonicity, we just have to show the reverse direction. This can be done by:
⨆↑x∈A,Pr(x)=r(s(⨆↑x∈A,Pr(x)))=r(⨆↑x∈A,Ds(r(x)))⊒Pr(⨆↑x∈A,Dx)=r(⨆↑DA)
The first equality was because r∘s=idP. The second one is because s is continuous, so we can move the directed sup in and out of it. Then, we use that s∘r⊒idD and monotonicity of r for our inequality, and then just rephrase the directed supremum.
Now that we know that both s and r are monotone, and r∘s=idP, by definition 3.1.1, r and s make a monotone section/retraction pair. By Proposition 3.1.2.2, since D is a pointed DCPO, P is too. Also, by Lemma 3.1.3, since D and P are both pointed DCPOs, and s and r are continuous (as we’ve shown), r(B), where B is a basis for D, is a basis for P. So P is a pointed DCPO with countable basis, since D has a countable basis. All that remains is showing that P has suprema of all bounded pairs of points, to show that it’s a BC-domain. Since P is closed under arbitrary nonempty infima, you can just take the inf of all the upper bounds on your points y,y′ in order to make a supremum, and bam, P is a BC-domain.
This addresses the case where ⊤D∈P. But what if it isn’t? Well, taking your original set P and adding ⊤ to it, it’s still closed under nonempty infima (because adding ⊤ to the set you’re taking the nonempty inf of doesn’t change the inf), and it’s still closed under directed suprema, because any directed set containing ⊤ must have ⊤ as its supremum. So, we can conclude that P⊤ is an ω-BC domain.
All that remains is showing that P, the thing you get when you rip the top point out, is also an ω-BC domain. Your set P is still pointed, since only ⊤ was ripped out, not ⊥. It’s still closed under directed suprema because directed sets in P are directed sets in D which are subsets of P. Computing the directed supremum in D, it can’t be ⊤ because ⊤∉P by assumption, and P is closed under directed suprema. Then we can just port that directed supremum back to P, secure in the knowledge that the directed supremum can’t be that top point we ripped out. So P is at least a pointed DCPO.
Bounded suprema still exist in P, because you can take two points y,y′ with an upper bound y′′∈P, and the supremum of y,y′ in P⊤ was the inf of every upper bound. When you rip the top point out, it doesn’t make the inf empty since y′′ is still present, and it doesn’t affect the value of the inf since ⊤ is above everything and can’t change the inf when you get rid of it. So the supremum as computed in P⊤ is still present in P.
For a countable basis, just use the original countable basis from P⊤ and rip the top point out from it. The definition of a basis was that y=⨆↑By:=⨆↑(y⇊∩B), and all these directed sets remain unchanged because if y isn’t ⊤, ⊤ can’t approximate y because that would require that ⊤⊑y, which is impossible, so this resulting set still computes all the points correctly via directed supremum.
Therefore, P is an ω-BC domain.
Lemma 3: Given a continuous DCPO D and BC-domain E, the infimum of a family of functions F as computed in [D→E] can be written as: (⊓F)(x)=⨆↑x′≪x⊓f∈Ff(x′)
So, in order to do this, we’ll need to show three things. First, that it’s continuous. Second, that it is indeed a lower bound. Third, that it’s the greatest lower bound.
To show that it’s continuous, we show monotonicity and preservation of directed suprema. For monotonicity, let x′⊒x. Then
(⊓F)(x)=⨆↑x′′≪x⊓f∈Ff(x′′)⊑⨆↑x′′≪x′⊓f∈Ff(x′′)=(⊓F)(x′)
The equalities are just definitions, the inequality is because any approximant to x is also an approximant to x′ since x′ is above x. Now for preservation of directed suprema. Let A be a directed subset of D.
⨆↑x∈A((⊓F)(x))=⨆↑x∈A(⨆↑x′≪x(⊓f∈Ff(x′)))
And then we have
=⨆↑x′≪⨆↑A(⊓f∈Ff(x′))
Because, for any x∈A and x′≪x, we also have x′≪⨆↑A, since it’s higher. And, if x′≪⨆↑A, then by interpolation there’s some z with x′≪z≪⨆↑A, and z must lie below some x∈A, so x′≪x for some x∈A. So everything in one supremum is present in the other and vice-versa. Then we can just go
⨆↑x′≪⨆↑A(⊓f∈Ff(x′))=(⊓F)(⨆↑A)
and we’re done! It’s a continuous function.
To show that it’s a lower bound, fix an arbitrary f′∈F. We have
f′(x)=f′(⨆↑x⇊)=⨆↑x′≪xf′(x′)⊒⨆↑x′≪x⊓f∈Ff(x′)=(⊓F)(x)
This is by continuity of D, then f′∈[D→E] so it’s continuous, so we can move the directed supremum in and out of f′; then the value inside the directed supremum stays the same or decreases, since we’re taking the inf of f(x′) over all f∈F.
To show that it’s the greatest lower bound, let f∗ be another lower bound to the collection of functions F. We have
(⊓F)(x)=⨆↑x′≪x⊓f∈Ff(x′)⊒⨆↑x′≪xf∗(x′)=f∗(⨆↑x⇊)=f∗(x)
Because f∗⊑F, we get that inequality, swapping out for f∗ makes the value stay the same or go down. Then, since f∗ is continuous, we can move the directed suprema inside of f∗, and the directed supremum of approximants to x is x itself. So it’s the greatest lower bound.
Lemma 4: For any function ψ:[X→(−∞,∞]]→[−∞,∞] where ψ is monotone, 1-Lipschitz, and ψ(c0)≥0, and X is compact, ψ(f)≥inf(0,infx∈Xf(x)).
By Lemma 1, infx∈Xf(x) is a finite constant. Now, let’s begin.
−ψ(cinf(0,infx∈Xf(x)))≤ψ(c0)−ψ(cinf(0,infx∈Xf(x)))
=d(ψ(c0),ψ(cinf(0,infx∈Xf(x))))≤d(c0,cinf(0,infx∈Xf(x)))
=d(0,inf(0,infx∈Xf(x)))=|inf(0,infx∈Xf(x))|=−inf(0,infx∈Xf(x))
Here’s where the inequalities came from. The first inequality is because, by assumption ψ(c0)≥0. The equality is because, since 0≥inf(0,infx∈Xf(x)), the same inequality applies to constant functions, and by monotonicity, ψ(c0) matches or exceeds what you get when you feed the other constant function in, so it’s a nonnegative quantity, equal to the distance between the two values you get. Then, the next inequality is 1-Lipschitzness of ψ. The distance between constant functions is the distance between their values. Then, the absolute value is clear. And because the quantity in the absolute value is 0 or negative, it equals negative-itself.
Now, we can multiply by negative 1 to flip this around and get
ψ(cinf(0,infx∈Xf(x)))≥inf(0,infx∈Xf(x))
And then, since we have f⊒cinf(0,infx∈Xf(x)) since the constant function undershoots it for all inputs, we can apply monotonicity to get
ψ(f)≥inf(0,infx∈Xf(x))
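A quick numeric sanity check of Lemma 4, assuming a finite X so functions are just tuples of values, and using ψ(f)=minₓf(x) as our own pick of a concrete monotone 1-Lipschitz functional with ψ(c0)=0:

```python
import random

# Finite toy model: X has 4 points, so a function f is a tuple of 4 reals.
# psi(f) = min(f) is monotone, 1-Lipschitz in the sup metric, and psi(c_0) = 0,
# so Lemma 4 predicts psi(f) >= inf(0, inf_x f(x)).

random.seed(0)

def psi(f):
    return min(f)

assert psi((0.0, 0.0, 0.0, 0.0)) >= 0.0          # psi(c_0) >= 0
for _ in range(100):
    f = tuple(random.uniform(-5.0, 5.0) for _ in range(4))
    assert psi(f) >= min(0.0, min(f))            # Lemma 4's bound
```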
Lemma 5: If X is compact, and f,f′,f′′∈[X→(−∞,∞]], and f′′≪f, and d(f,f′)<∞, then f′′+c−d(f,f′)≪f′.
Since X is compact, no function here can take the value −∞, and by Lemma 1, none of them is unbounded below.
Fix a directed set G of functions X→(−∞,∞] with f′⊑⨆↑G. Our goal is to get an element of G which exceeds f′′+c−d(f,f′). For each g∈G, consider the function g+cd(f,f′), to get the directed set G+cd(f,f′). Our first task is to show that this is a directed set, which is easy: given any two functions in G+cd(f,f′), we can just subtract the constant from them to get two functions in G, take an upper bound in G, and add the constant back to get an upper bound in G+cd(f,f′). Also, for any x′∈X, we have
(⨆↑(G+cd(f,f′)))(x′)=⨆↑g∈G(g+cd(f,f′))(x′)=⨆↑g∈G(g(x′)+d(f,f′))
=⨆↑g∈G(g(x′))+d(f,f′)=(⨆↑G)(x′)+d(f,f′)≥f′(x′)+d(f,f′)
=f′(x′)+supx∈Xd(f(x),f′(x))≥f′(x′)+d(f(x′),f′(x′))≥f(x′)
The first equality was just using how G+cd(f,f′) was constructed and how supremum of functions is defined. Then we use that this is just adding a constant to g, do a bit of regrouping, get our inequality because ⨆↑G⊒f′ by assumption, use the definition of function distance, substitute in a particular value, and the last inequality is obvious.
So, since this holds for all x′∈X, we have f⊑⨆↑(G+cd(f,f′)). Since this is a directed set with a supremum of f or more, and f′′≪f, we can isolate a function g∈G where f′′⊑g+cd(f,f′). So then f′′+c−d(f,f′)⊑g, and we’ve got an element of our directed set which exceeds f′′+c−d(f,f′). Since we picked an arbitrary directed set with a supremum of f′ or higher, this means that f′′+c−d(f,f′)≪f′.
Lemma 6: If X is compact, and f,f′,f′′∈[X→[0,1]], and f′′≪f, then c0⊔(f′′+c−d(f,f′))≪f′.
Fix a directed set G of functions X→[0,1] with f′⊑⨆↑G. Our task is to find a g that exceeds c0⊔(f′′+c−d(f,f′)). For each g∈G, consider the function c1⊓(g+cd(f,f′)), to get the set c1⊓(G+cd(f,f′)). Our first task is to show that this is a directed set, which is easy: given any two functions in there, we can take their original g’s, take an upper bound of the two in G, and translate forward again to get an upper bound in c1⊓(G+cd(f,f′)). Also, for any x′∈X, we have
(⨆↑(c1⊓(G+cd(f,f′))))(x′)=⨆↑g∈G(c1⊓(g+cd(f,f′)))(x′)=⨆↑g∈Ginf(1,g(x′)+d(f,f′))
=inf(1,⨆↑g∈Gg(x′)+d(f,f′))=inf(1,(⨆↑G)(x′)+d(f,f′))≥inf(1,f′(x′)+d(f,f′))
≥inf(1,f(x′))=f(x′)
The first equality uses how c1⊓(G+cd(f,f′)) was constructed, the middle steps use that ⨆↑G⊒f′, and the final inequality holds because f′(x′)+d(f,f′)≥f(x′) by the definition of function distance, while f(x′)≤1. So, since this holds for all x′∈X, we have f⊑⨆↑(c1⊓(G+cd(f,f′))). Since this is a directed set with a supremum of f or more, and f′′≪f, we can isolate a function g∈G where f′′⊑c1⊓(g+cd(f,f′)). Then, we have
c0⊔(f′′+c−d(f,f′))⊑c0⊔((c1⊓(g+cd(f,f′)))+c−d(f,f′))
⊑g⊔((c1⊓(g+cd(f,f′)))+c−d(f,f′))⊑g⊔(g+cd(f,f′)+c−d(f,f′))=g⊔g=g
So, we have a g⊒c0⊔(f′′+c−d(f,f′)), and our directed set G was arbitrary, so we’ve shown approximation, and have our result that, if f′′≪f, then c0⊔(f′′+c−d(f,f′))≪f′.