Your ordinal preferences as stated don’t answer that question; you decide by having more terms in your utility function. Assuming independence of outcomes (which is NEVER true, but really handy when analyzing toys), you might be able to assign a numeric marginal value to each upgrade. Or you might just have an additional preference for certain pepperoni over 50⁄50 mushroom/anchovy. Heck, you might have a preference for certain anchovy over 50⁄50 mushroom/pepperoni (though this means the uncertainty is itself of negative value to you, in addition to your pizza-topping valuation).
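One way to make that numeric (all values made up for illustration): give pepperoni, mushroom, and anchovy marginal values of 3, 2, and 1, and charge every uncertain lottery a flat penalty $r$ for the uncertainty itself. Then certain anchovy beats the 50⁄50 mushroom/pepperoni flip exactly when

$$1 > \tfrac{1}{2}(2) + \tfrac{1}{2}(3) - r = 2.5 - r,$$

i.e. when $r > 1.5$: a big enough distaste for uncertainty flips the ranking.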
Note that if you prefer A to B to C, but prefer “certain C” to “coinflip A or B”, then you don’t have a utility function over ABC.
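To spell that out: any utility function matching those ordinal preferences has $u(A) > u(B) > u(C)$, so the coinflip’s expected utility satisfies

$$\tfrac{1}{2}u(A) + \tfrac{1}{2}u(B) > \tfrac{1}{2}u(C) + \tfrac{1}{2}u(C) = u(C),$$

and the coinflip must beat certain C in expectation; preferring certain C contradicts every such function.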
You might, as in OP, have a utility function over something else. Like maybe “ABC plus a history of correctly predicting which of A, B, or C I’d get”.
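Here’s a minimal sketch of what that could look like (the names and numbers are mine, not from OP): a utility function over (topping, predicted-correctly) pairs, under which preferring certain C to the A-or-B coinflip is perfectly consistent:

```python
# Hypothetical numbers; the point is the shape of the function, not the values.
TOPPING_VALUE = {"A": 3.0, "B": 2.0, "C": 1.0}  # A > B > C, as stated
PREDICTION_BONUS = 4.0  # assumed value of having predicted your topping correctly


def utility(topping: str, predicted_correctly: bool) -> float:
    """Utility over the joint outcome: the topping AND your prediction record."""
    bonus = PREDICTION_BONUS if predicted_correctly else 0.0
    return TOPPING_VALUE[topping] + bonus


# Certain C: you know the outcome in advance, so your prediction is always right.
eu_certain_c = utility("C", True)  # 1.0 + 4.0 = 5.0

# Coinflip A-or-B: predict A (the better bet); you're right only half the time.
eu_coinflip_ab = 0.5 * utility("A", True) + 0.5 * utility("B", False)  # 3.5 + 1.0 = 4.5

assert eu_certain_c > eu_coinflip_ab  # certain C wins, with no inconsistency
```

Over this richer outcome space the coinflip genuinely delivers less expected utility, so the seemingly paradoxical preference falls right out.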