I think I have a partial solution for the case where the “infinitely confusing” string is unique. Namely, if there is a unique infinite string S on which D doesn’t terminate, then S is computable.
Here’s an algorithm for computing S by using D. Assume that the first bit of S is 0. That means D terminates on all strings whose first bit is 1. By the argument from the original post, that implies D’s running time on strings starting with 1 is uniformly bounded. (Otherwise we would be able to generate an “infinitely confusing” string for D that starts with 1 by using the proof technique from my post, which can’t happen because S is unique and starts with 0 by assumption.) Therefore, by feeding D all possible finite inputs in turn while allowing it gradually increasing running times (“dovetailing”), we will eventually get a “covering set” that proves D always terminates on strings starting with 1. So we determine that the first bit of S must be 0. In the same way we can determine the second bit of S, and so on.
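Here’s a minimal sketch of that dovetailing search, assuming a hypothetical oracle halts_within(s, steps) that simulates D for at most `steps` steps on an input tape beginning with the finite string s, and returns True iff D halts within that budget. (Since D reads at most one bit per step, a budget of t steps can consume at most t bits, so supplying t extension bits is always enough input.)

```python
from itertools import product

def covered(q, t, halts_within):
    """Check whether budget t yields a "covering set" for the finite
    prefix q: if D halts within t steps on every length-t extension of q,
    then D halts on ALL infinite strings extending q, since its first t
    steps depend only on the first t bits it reads."""
    return all(halts_within(q + "".join(e), t)
               for e in product("01", repeat=t))

def next_bit(prefix, halts_within):
    """Compute the next bit of the unique non-halting string S extending
    `prefix`, by dovetailing the two covering-set searches. Exactly one
    search succeeds: the covering set for the branch S avoids eventually
    appears (by the uniform-bound argument), while the branch containing
    S can never be fully covered."""
    t = 1
    while True:
        if covered(prefix + "1", t, halts_within):
            return "0"  # D halts on everything starting prefix+1, so S continues with 0
        if covered(prefix + "0", t, halts_within):
            return "1"
        t += 1
```

The enumeration of all length-t extensions is exponential, but it makes the covering-set certificate explicit, which is all the sketch is meant to show.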
Let’s see how it works on unary notation. We run D on all finite inputs in turn. First we notice that D(“0”) terminates without trying to read further, therefore the first bit of S is 1. Then we notice that D(“10”) terminates, therefore the second bit of S is also 1. And so on.
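To make that concrete, here is the unary case run through the sketch above, with a toy decider that halts the moment it reads a 0 and runs forever on the all-ones string (the toy halts_within below is my own stand-in, not anything defined in the post):

```python
def halts_within(s, steps):
    """Toy unary decider: D inspects one bit per step and halts as soon
    as it reads a 0, so it halts within t steps iff a 0 appears among
    the first t bits."""
    return "0" in s[:steps]

prefix = ""
for _ in range(5):
    prefix += next_bit(prefix, halts_within)
print(prefix)  # -> "11111": the first five bits of S = 111...
```

At each step the covering set for the 0-branch shows up first (every string starting with the next 0 halts immediately), so the loop outputs 1 after 1, exactly as described.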
That looks true, though the process you describe only works if S is indeed unique; otherwise it gets stuck, since neither covering-set search can ever succeed.