You’re getting into advanced questions; prospect theory was initially formulated to deal only with gambles with two (or fewer) possible outcomes, precisely so that it didn’t have to handle this sort of thing. Eventually Tversky & Kahneman (1992) came out with a more complicated version of the theory, Cumulative Prospect Theory, which addressed this problem by being rank-dependent. Looking at the graph of w(p), what you basically do is rank the outcomes by value, line them up along the probability axis in that order, giving each one a width equal to its probability, and weight each one by the change in w(p) over its width. So if the 10 outcomes each with probability .01 are all losses, then the largest loss gets the weight w(.01), the next-largest loss gets the weight w(.02)-w(.01), the next gets w(.03)-w(.02), …, and the last one gets w(.10)-w(.09). The total weight given to the 10 outcomes is thus still only w(.10), just as it would be if they were all combined into a single outcome.
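To make the bookkeeping concrete, here’s a minimal sketch in Python of those rank-dependent weights for the all-loss example. It assumes the probability-weighting function form and the loss-side parameter gamma = 0.69 estimated in Tversky & Kahneman (1992); the function and variable names are just for illustration, not from either paper.

```python
# A sketch of rank-dependent decision weights for an all-loss prospect,
# assuming the TK (1992) weighting function with gamma = 0.69 for losses.
def w(p, gamma=0.69):
    """Probability weighting function w(p) (Tversky & Kahneman 1992 form)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def decision_weights(losses, probs):
    """Rank losses from most to least extreme and weight each one by the
    change in w(p) over its slice of cumulative probability."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])  # worst loss first
    weights = {}
    cum = 0.0
    for i in order:
        weights[i] = w(cum + probs[i]) - w(cum)
        cum += probs[i]
    return weights

# Ten equally likely losses of -1 through -10, each with probability .01:
losses = [-(k + 1) for k in range(10)]
probs = [0.01] * 10
dw = decision_weights(losses, probs)
print(dw)                 # largest loss gets w(.01), next gets w(.02)-w(.01), ...
print(sum(dw.values()))   # telescopes to w(.10), same as one combined outcome
```

Because the weights telescope, splitting the .10 of probability into ten pieces leaves the total weight unchanged, which is exactly the point of the rank-dependent construction.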
For more of the nitty gritty (like how gains and losses are treated separately), see the Tversky & Kahneman (1992) paper; I found the explanation in this Fennema & Wakker (1997) paper easier to understand.
Tversky, A. & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5: 297–323.