(This is basically just restating what Ilya stated but with more details filled in.)
I’d like to see if I can make the seemingly impossible at least plausibly possible. Let’s consider a specific startling-looking case, the theorem on slide 17. It’s about functions from infinite bit-strings to nonnegative integers, and it says:
If you have a function such that
for any a, there’s a limited number of bits the function looks at to decide f(a)
then actually
there’s a fixed number of bits (independent of a) the function can ever look at to decide f(a) for any a
and at first that seems ridiculous, because obviously you can make what bits it looks at depend on a, and make the number able to be arbitrarily large for suitably chosen a, right?
Let’s consider a specific “counterexample” that turns out not to be one. You might think: OK, let’s make f count the number of consecutive 1s at the start of our bit-string. Then the antecedent is true—if a begins with exactly m 1s then the function doesn’t care about anything past the (m+1)th bit—but the consequent isn’t—for any m, you can find two bit-strings that agree in their first m places yet get different values of f, so no fixed m bits can ever suffice.
“Of course” this is wrong, and the reason why it’s wrong is that I failed to actually define a function, because what if a consists entirely of 1s?
Stepping back a little: Suppose we consider values of a that require f to look at more and more bits. Then in a certain sense, they “converge” to a value that requires f to look at every bit—in other words, one for which the function doesn’t halt. (If you tried to implement f from the description I gave above, then you’d make a function that never terminates when passed a bit-string that’s all 1s.)
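To make the non-termination concrete, here is a minimal sketch of that f in Python (the stream representation is my own choice: an infinite bit-string is modeled as a function from index to bit):

```python
def f(bits):
    """Count the consecutive 1s at the start of an infinite bit-string.

    `bits` maps an index to a bit, modeling an infinite bit-string.
    The loop queries only finitely many bits -- unless the stream is
    all 1s, in which case it never terminates, exactly as described.
    """
    n = 0
    while bits(n) == 1:
        n += 1
    return n

# A stream beginning 1, 1, 0, 0, 0, ...: f inspects only bits 0..2.
example = lambda i: 1 if i < 2 else 0
print(f(example))  # 2
```

Passing `lambda i: 1` (the all-1s stream) to this f would loop forever.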
Can we turn this into a proof of the theorem on slide 17? Yes. Suppose you have a function that can look at unboundedly many bits in order to decide its output. Now:
Can it look at unboundedly many bits when the first bit is 0? If so, set a0 = 0.
If not, then it can look at unboundedly many bits when the first bit is 1. Set a0 = 1.
Can it look at unboundedly many bits when the first bit is a0 and the next is 0? If so, set a1 = 0.
If not, then it can look at unboundedly many bits when the first bit is a0 and the next is 1. Set a1 = 1.
And so on. And now consider feeding it the bit-string a = [a0,a1,a2,...]. If the function looks only at finitely many bits, we get a contradiction—because we know that it can look at unboundedly many bits even when those finitely many bits take the values they take in a, which means it doesn’t stop after looking at those finitely many bits, after all.
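The bit-by-bit construction above can be sketched in code. The oracle `unbounded(prefix)` (“can the function still look at unboundedly many bits on some input beginning with prefix?”) is generally uncomputable—this is only an illustration of the choice procedure, here instantiated for the leading-1s counter, where the answer is computable:

```python
def diagonal(unbounded, n):
    """Build the first n bits of the 'bad' input a = [a0, a1, ...].

    `unbounded(prefix)` is an assumed (generally uncomputable) oracle.
    At each step we pick a bit that keeps unboundedness alive; such a
    bit must exist, since if both extensions were bounded, the prefix
    itself would already be bounded.
    """
    prefix = []
    for _ in range(n):
        if unbounded(prefix + [0]):
            prefix.append(0)
        else:
            prefix.append(1)
    return prefix

# Oracle for the leading-1s counter: f can be forced to look
# arbitrarily far exactly when the prefix so far is all 1s.
all_ones_oracle = lambda prefix: all(b == 1 for b in prefix)
print(diagonal(all_ones_oracle, 5))  # [1, 1, 1, 1, 1]
```

For the leading-1s counter the construction duly recovers the all-1s string—the one input on which that f never halts.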
This is in general what’s going on with this “topological” stuff. We’re looking at cases where the input to the function consists of infinitely many finite choices—e.g., an infinite string of bits—and then if we’re given a function that can look at unboundedly many bits, we can repeatedly make one choice in a way that still leaves the function potentially looking at unboundedly many bits; in the limit we get an input that makes the function not terminate.
(I have been a little sloppy in talking about what bits the function “looks at”. If you think of the function as implemented by a program running on an actual computer, this language matches reality. But if you think more generally of any mathematical function, you need to translate “given bits X, the function looks only at bits Y” as “among bit-strings with bits X, the value of the function is uniquely determined by bits Y” and then “given bits X, the function looks at unboundedly many bits” as “there is no finite Y such that given bits X the function looks only at bits Y”.)
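The “uniquely determined by bits Y” reading can be checked by brute force for functions that only depend on a finite window. A small hypothetical helper (names mine; `k` restricts the check to length-k prefixes so the enumeration is finite):

```python
from itertools import product

def determined_by(g, Y, k):
    """Check over all length-k prefixes whether g's value is uniquely
    determined by the bits at positions Y: any two strings agreeing
    on Y must agree on g."""
    buckets = {}
    for s in product([0, 1], repeat=k):
        key = tuple(s[i] for i in Y)
        if buckets.setdefault(key, g(s)) != g(s):
            return False
    return True

g = lambda s: s[0] ^ s[2]          # depends only on bits 0 and 2
print(determined_by(g, [0, 2], 4))  # True
print(determined_by(g, [0], 4))     # False
```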