To address your first question: this has to do with scope insensitivity, hyperbolic discounting, and other related biases. To put it bluntly, most humans are actually pretty bad at maximizing expected utility. For example, when I first heard about x-risk, my thought process was definitely not “humanity might be wiped out—that’s IMPORTANT. I need to devote energy to this.” It was more along the lines of “huh, that’s interesting. Tragic, even. Oh well; moving on...”
Basically, we don’t care much about what happens in the distant future, especially if it isn’t guaranteed to happen. We also don’t care much more about humanity than we do about ourselves and those close to us. Plus, we don’t really care about things that don’t feel immediate. And so on. The end result is that most people’s immediate problems are more important to them than x-risk, even if the latter might be by far the more important according to utilitarian ethics.
I consider philosophy to be a study of human intuitions. Philosophy examines different ways to think about a variety of deep issues (morality, existence, etc.) and tries to resolve results that “feel wrong”.
On the other hand, I have very rarely heard it phrased this way. More often, philosophy is said to be reasoning directly about those issues themselves (morality, existence, etc.), albeit with the help of human intuitions. This actually seems to be an underlying assumption of most philosophy discussions I’ve heard. I find that mildly disconcerting, given that I would expect it to confuse everyone involved fairly often.
If anyone knows of a good argument for the assumption above, I would really like to hear it. I’ve only seen it assumed, never argued.