For programmers, I think the simplest way to motivate domain theory is to think about a lazy language like Haskell. In such a language, if you have, for example, a list of numbers, it doesn’t actually contain numbers, but rather “thunks” that, when evaluated, will give numbers. Some of these thunks, when you try to evaluate them, can go into an infinite loop instead of giving a number. Let’s call these “bottoms”. Basically any data structure can have some of its parts replaced with bottoms: a subtree of a tree can be bottom, and so on. Then we can say all functions in a lazy language are total functions from things-with-bottoms to things-with-bottoms: if your function goes into an infinite loop, we’ll just say it returns a bottom.
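To make that concrete, here’s a minimal Haskell sketch (the names bottom and xs are just illustrative, nothing standard):

    -- A computation that never produces a value: forcing it loops forever.
    bottom :: Int
    bottom = bottom

    -- A list whose middle element is a bottom.
    xs :: [Int]
    xs = [1, bottom, 3]

    main :: IO ()
    main = do
      print (length xs)  -- fine: length never forces the elements, prints 3
      print (head xs)    -- fine: prints 1
      print (sum xs)     -- diverges: sum forces every element and hits the bottom

The data structure itself is perfectly usable as long as you never force the parts that are bottoms.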
Then we can say a piece of data X is “at least as informative” as Y if Y can be obtained from X by replacing zero or more of its parts with bottoms. That’s a partial ordering. And it’s kind of obvious that any function we can write in our language will be monotone with respect to that ordering: if X is at least as informative as Y, then f(X) will be at least as informative as f(Y).
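Continuing the sketch above, here’s what the ordering and the monotonicity claim look like on one concrete pair of lists (again, the names are just illustrative):

    bottom :: Int
    bottom = bottom

    -- y can be obtained from x by replacing one element with a bottom,
    -- so x is "at least as informative" as y.
    x, y :: [Int]
    x = [1, 2, 3]
    y = [1, bottom, 3]

    f :: [Int] -> [Int]
    f = map (* 10)

    -- f x is [10, 20, 30]; f y is [10, <loops if forced>, 30].
    -- f y can be obtained from f x by replacing the middle element with a
    -- bottom, so f x is at least as informative as f y: f is monotone here.
    main :: IO ()
    main = do
      print (f x !! 0, f x !! 2)  -- (10,30)
      print (f y !! 0, f y !! 2)  -- (10,30): the defined parts agree

Note that f never has to inspect whether an element is a bottom (it couldn’t, since that would amount to solving the halting problem); it just passes the thunks along or forces them, which is why monotonicity comes for free.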
So basically domain theory is what you get if you ask yourself “what if data structures could contain bottoms” and turn the abstractness crank a few times. In a lazy language it applies pretty naturally; in a strict language it’s a bit more awkward, because things-with-bottoms don’t exist at runtime to be passed around; and in a total language it’s unnecessary, because all functions are already total.