I apologize, as this is a theory I’m still working out myself.
To explain the first couple of comments:
Think of it in terms of a decentralized server, similar to, say, a torrent. The torrent, or information, actually transfers FASTER the more seeds/leechers there are. To flesh out the idea: the HOST AI contains the indexes for the simulation PEOPLE. All the AIs in the PEOPLE simulation reference the indexes from the HOST. Since the indexes are just reference files, they would be fairly small in size, allowing for unfathomable amounts of them, which would be updated, deleted, etc. as deemed necessary by the HOST AI. The heavy lifting would be dispersed among the AIs in the simulation PEOPLE. Also, when an AI running in the PEOPLE simulation is in an IDLE state, its power could be used FOLDING for ACTIVE AIs. A person sleeping may actually be in an idle state, sharing their computing power with others. During this phase the AI might mistake this for “dreaming” while data is simultaneously downloaded and uploaded.
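To make that architecture a bit more concrete, here is a minimal Python sketch of the idea as I understand it. All the names (HostAI, Agent, fold_idle_compute) are hypothetical illustrations, not a real system: the HOST keeps only small index records, each agent carries its own heavy state, and idle (sleeping) agents lend their spare cycles to active ones.

```python
class HostAI:
    """Keeps only lightweight index records; the heavy state lives in the agents."""
    def __init__(self):
        self.index = {}  # agent_id -> small reference record

    def register(self, agent_id):
        self.index[agent_id] = {"version": 1, "active": True}

    def update(self, agent_id, **fields):
        self.index[agent_id].update(fields)  # cheap: indexes are just references


class Agent:
    """A simulated person: stores its own heavy state and can lend idle cycles."""
    def __init__(self, agent_id, host):
        self.id = agent_id
        self.host = host
        self.active = True
        host.register(agent_id)

    def sleep(self):
        # An idle (sleeping) agent is flagged so its cycles can be "folded"
        # into the pool for active agents -- the hypothetical "dreaming" phase.
        self.active = False
        self.host.update(self.id, active=False)

    def idle_cycles(self):
        return 0 if self.active else 10  # toy number of spare cycles


def fold_idle_compute(agents):
    """Pool spare cycles from idle agents and share them among active ones."""
    pool = sum(a.idle_cycles() for a in agents)
    active = [a for a in agents if a.active]
    return {a.id: pool // len(active) for a in active} if active else {}


host = HostAI()
agents = [Agent(i, host) for i in range(5)]
agents[3].sleep()
agents[4].sleep()
print(fold_idle_compute(agents))  # {0: 6, 1: 6, 2: 6}
```

The point of the toy example is just the division of labor: the HOST’s index stays tiny no matter how many agents exist, while compute is redistributed peer-to-peer, torrent-style.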
In regards to the adversarial AIs, they would be unaware of each other’s existence; the conflict would not be intentional, but it would be unavoidable. AIs within the PEOPLE simulation would be aware of their own existence; however, the AI running and controlling the ENVIRONMENT simulation would not. The AIs in the PEOPLE simulation would be unaware of the ENVIRONMENT simulation, as the two would not share data or have any communication: different system, different language. Since both AIs (in this example) strive to learn and expand their computing power, there will be overlap, which would cause conflict between the simulations.
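A toy way to picture that conflict (again, all names hypothetical): two simulations draw from the same underlying resource pool, hold no reference to each other, and therefore cannot negotiate. Once their combined demand exceeds the pool, each one experiences an unexplained shortfall.

```python
class ResourcePool:
    """The shared substrate both simulations unknowingly draw from."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def claim(self, amount):
        granted = min(amount, self.capacity - self.used)
        self.used += granted
        return granted  # any shortfall shows up as unexplained conflict


class Simulation:
    def __init__(self, name, pool):
        self.name = name
        self.pool = pool  # no reference to the other simulation exists
        self.demand = 10

    def expand(self):
        self.demand += 10  # both strive to learn and grow
        granted = self.pool.claim(self.demand)
        if granted < self.demand:
            print(f"{self.name}: wanted {self.demand}, got {granted} (conflict)")


pool = ResourcePool(capacity=100)
people = Simulation("PEOPLE", pool)
environment = Simulation("ENVIRONMENT", pool)
for _ in range(3):
    people.expand()
    environment.expand()
```

Neither object ever touches the other; the contention emerges purely from sharing the substrate, which is the sense in which the conflict is unintentional but unavoidable.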
The amount of computing power and the algorithms required to run this level of simulation are barely touched on in quantum machine learning, as this is mostly theoretical. I credit mostly Game/Simulation Theory for spawning this idea.
I appreciate your comments and feedback; I need them in order to flesh this out more.
I wanted to throw in how and why the AIs would help “offload”, or I should say micromanage, and act as individual resource managers. The HOST computer doesn’t need to tell every individual AI within its simulation every detail, since not every AI is in an active state. Also, to piggyback on the Dimensional Cone Theory: what if every AI is also only rendering what it can or needs to see? It would explain why time for some people can seem faster than for others. The field of view is being drawn in on demand as they see it. We’re aware there are other dimensions, but we can’t see them because our AI isn’t rendering them: either we don’t need to see them because it doesn’t help us, or rendering them is a drain on our current system version or available resources. Maybe the other dimensions are similar to test servers, and we’re living in the production server that’s the most stable.
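Here is a minimal sketch of that on-demand rendering idea, assuming a world that stores only compact seeds and expands a region into expensive detail the first time an observer’s field of view reaches it (World, Observer, and the seed scheme are all made up for illustration):

```python
class World:
    def __init__(self):
        self.seeds = {}     # region -> cheap seed data (always kept)
        self.rendered = {}  # region -> expensive detail (built on demand)

    def render(self, region):
        if region not in self.rendered:  # drawn in only when first seen
            seed = self.seeds.setdefault(region, hash(region) % 1000)
            self.rendered[region] = f"detail derived from seed {seed}"
        return self.rendered[region]


class Observer:
    def __init__(self, world, position, view_radius):
        self.world = world
        self.position = position
        self.view_radius = view_radius

    def look(self):
        # Only regions inside the field of view are ever rendered;
        # everything outside it stays as unrendered seeds
        # (the unseen "other dimensions" in this analogy).
        visible = range(self.position - self.view_radius,
                        self.position + self.view_radius + 1)
        return {r: self.world.render(r) for r in visible}


world = World()
alice = Observer(world, position=0, view_radius=1)
alice.look()
print(len(world.rendered))  # 3 regions rendered; the rest never computed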
“Except that we are very clearly aware of our environment. I can see the house that I live in, and measure the temperature outside, among hundreds of other mundane universe-me interactions. More generally, it doesn’t make much sense to me to simulate an entire universe, simulate a bunch of human minds, and somehow not put them together.”
We’re only aware of what we can observe; we have no direct connection (as in communication; I should’ve specified this earlier to avoid confusion) to the ENVIRONMENT simulation. We can only observe and adapt to what we can see and interact with. The PEOPLE simulation cannot directly communicate with the ENVIRONMENT simulation or other sub-simulations. The HOST system of the PEOPLE simulation can’t make queries to the ENVIRONMENT HOST system and ask for the source code of trees, or when a volcano is going to erupt, etc. Bluntly, it’s like having ‘read only’ access to files. That’s what makes it so interesting and exciting. That can also be one of the big questions: ‘Why?’ Maybe we’re just a test simulation.
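In code terms, the ‘read only’ relationship might look like this sketch (EnvironmentSim and PeopleSim are hypothetical): the PEOPLE side can read the rendered surface of the ENVIRONMENT, but no query interface to its source data or future state exists at all.

```python
class EnvironmentSim:
    def __init__(self):
        # Internal source data exists but is never exposed to observers.
        self._source = {"tree": "internal growth model",
                        "volcano": "eruption schedule"}

    def observable_state(self):
        # Only the rendered surface is readable -- never the source files.
        return {"tree": "green, 12m tall", "volcano": "quiet, smoking slightly"}


class PeopleSim:
    def __init__(self, environment):
        self.env = environment

    def observe(self):
        return self.env.observable_state()  # read-only: observe and adapt

    # Deliberately no access to self.env._source and no query method:
    # asking "when will the volcano erupt?" is simply not expressible here.


people = PeopleSim(EnvironmentSim())
print(people.observe())
```

The asymmetry is the whole point: observation is possible, interrogation is not.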
“I think you need a more rigorous definition of computing power. In a traditional sense, there are metrics based on number of transistors, floating point operations per second, and so on, but machine learning doesn’t affect that. Machine learning is usually a property of the software, not the hardware, and so does not affect the power of that hardware.”
I agree; that’s why I think the computer designed to run such a simulation is far beyond us. We can only think in terms of what WE have designed so far and compare to that. I would go so far as to say this is an organic machine, or a hybrid of sorts. We could very well be 8-bit Mario sprites running on a Core i9-9900K. The HOST could very well be an organic computer, and as it grows it adds more cores to its processing power, allowing for more simulations.