I don’t think a hypothetical superhuman would be dramatically better at employing predictive models under uncertainty. If you scale up raw power so that it stands to mankind as mankind stands to an amoeba, you only roughly double anything that is fundamentally logarithmic in that power (squaring the power ratio merely doubles its logarithm). While in many important cases there are faster approximations, it’s magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model itself is magically perfect (the butterfly effect). Plus, of course, models of other intelligences rapidly get unethical as you try to improve their fidelity (e.g. if the superintelligence is emulating people and putting them through the torture-vs-dust-specks experience to compare their values).
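A quick way to see the exponential-error point is a toy chaotic system; the logistic map below and the 1e-12 perturbation are just my illustrative choices, not anything the argument depends on:

```python
# Minimal sketch of the butterfly-effect point: even with a *perfect* model
# (the logistic map at r = 4, a standard chaotic system), a tiny error in the
# initial condition grows roughly exponentially until the two trajectories
# are completely uncorrelated.

def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

x_true = 0.123456789          # "reality"
x_model = x_true + 1e-12      # perfect model, imperfect measurement

for step in range(1, 61):
    x_true = logistic(x_true)
    x_model = logistic(x_model)
    if step % 10 == 0:
        print(f"step {step:2d}: error = {abs(x_true - x_model):.3e}")

# The error climbs by roughly a factor of 2 per step (the map's Lyapunov
# exponent is ln 2), so about 40 steps wipe out 12 digits of precision.
```

The point of the toy example: extra computing power buys only logarithmically more forecast horizon, because each additional digit of precision is eaten at a fixed rate.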