Sometimes the most important question carries less importance (say, 20 percent of the total) than the sum of less important questions (say, 8 × 10 = 80 percent across eight smaller problems). For example, if everybody works on AI safety, some smaller x-risks could be completely neglected.
The commons effect of existential risks may complicate that example: shorter-term existential risks make work on longer-term existential risks less impactful until the shorter-term ones are solved.
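One rough way to formalize that discounting (the symbols below are illustrative, not from the original):

\[
\mathbb{E}[\text{value of longer-term work}] \approx (1 - p_{\text{short}}) \cdot V_{\text{long}},
\]

where \(p_{\text{short}}\) is the probability that a shorter-term existential risk materializes before the longer-term one becomes relevant, and \(V_{\text{long}}\) is the value of eliminating the longer-term risk conditional on surviving the shorter-term ones. While the shorter-term risks remain unsolved, they shrink the expected value of longer-term work by the factor \(1 - p_{\text{short}}\).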