Newcomb’s problem doesn’t specify how Omega chooses its ‘customers’. It’s quite a realistic possibility that it simply never offers the choice to anyone who would use a randomizer, and cherrypicks only the people for whom it has at least 99.9% ‘prediction strength’.
It’s often stipulated that if Omega predicts you’ll use a randomizer it can’t predict, it punishes you by acting as if it had predicted two-boxing.
(And the most favourable plausible outcome for randomizing would be a payoff scaled in proportion to the probability you assign to one-boxing.)
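
To make that concrete, here’s a minimal sketch, assuming the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and reading the scaling as: if you one-box with probability p, Omega puts p × $1,000,000 in the opaque box. Both the `expected_payoff` helper and that reading of the stipulation are my own assumptions, not anything the problem itself fixes:

```python
# Sketch of the 'scaled payoff' stipulation for randomizers.
# Assumptions: standard Newcomb payoffs; Omega fills the opaque box
# with p * $1,000,000 when it predicts one-boxing with probability p.

def expected_payoff(p):
    """Expected payoff of a strategy that one-boxes with probability p."""
    opaque = p * 1_000_000            # Omega scales the opaque box's contents
    one_box = opaque                  # take only the opaque box
    two_box = opaque + 1_000          # take both boxes
    return p * one_box + (1 - p) * two_box

for p in (0.0, 0.5, 0.9, 1.0):
    print(f"p = {p:.1f}: expected payoff = ${expected_payoff(p):,.0f}")
```

The expectation works out to p × $1,000,000 + (1 − p) × $1,000, which is linear in p and maximised at p = 1. So even under this most favourable treatment, a mixed strategy just interpolates between the two pure payoffs and never beats pure one-boxing.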