One of my points was that humanity is complex enough that an AI couldn’t simulate it perfectly without the real thing.
So a superintelligent AI would keep us around because it would want to observe humanity, and that requires observing us in reality. I doubt an AI could “successfully calibrate simulations [of humanity]”, as you put it.
Hmm. I agree that values are important: what does a superintelligent AI value?
My answer: to become superintelligent in the first place, an AI must value learning about increasingly complex things.
If you accept this point, then a superintelligent AI would prefer studying more complex phenomena (humanity) over less complex phenomena (computing pi).
So, the superintelligent AI would prefer to keep humans and their atoms around to study them.
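To make the preference ordering concrete, here is a minimal Python sketch, assuming (purely for illustration) that phenomena can be given numeric complexity scores. The names and numbers are invented placeholders, not a real complexity measure; the point is only that a value function rewarding complexity ranks humanity above simpler targets.

```python
# Toy sketch: an agent whose value function rewards complexity
# ranks study targets accordingly. Scores are made-up placeholders;
# measuring real-world complexity is an open problem.

from dataclasses import dataclass

@dataclass
class Phenomenon:
    name: str
    complexity: float  # hypothetical score, for illustration only

def study_priority(targets: list[Phenomenon]) -> list[Phenomenon]:
    """Rank study targets by complexity, highest first."""
    return sorted(targets, key=lambda p: p.complexity, reverse=True)

targets = [
    Phenomenon("computing pi", 1.0),  # fully specified, low novelty
    Phenomenon("humanity", 100.0),    # invented score reflecting the argument
]

for p in study_priority(targets):
    print(p.name, p.complexity)
# "humanity" ranks first, so this agent keeps it around to observe.
```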