I apologize as this is a theory I’m still working out myself.
No worries! Hashing out the details in our theories is always fun, and getting another perspective should be encouraged.
With that said, I think this theory could still use some more work.
The torrent of information actually transfers FASTER the more seeds / leechers there are.
That’s because there are more computers in use, yes. Adding more physical computers often increases speed, but that’s not an ironclad rule. Changing how the host and clients interact without adding more computers is unlikely to help much unless you’re fixing a mistake in the initial setup, and splitting one program on one supercomputer into multiple programs on the same supercomputer is almost certainly less efficient.
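To put a rough number on the “more computers” intuition: the speedup from adding workers is capped by whatever fraction of the job must run serially (Amdahl’s law). A minimal sketch, with made-up numbers:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# fraction of the work that can be parallelized across n workers.

def amdahl_speedup(parallel_fraction, n_workers):
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# More real computers do help, up to a point...
for n in (1, 2, 8, 1000):
    print(f"{n:>4} workers: {amdahl_speedup(0.9, n):.2f}x speedup")
# 1 -> 1.00x, 2 -> 1.82x, 8 -> 4.71x, 1000 -> ~9.91x (never reaches 10x)

# ...while splitting one program into several on the SAME machine adds
# scheduling and communication overhead without adding any capacity.
```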
HOST AI contains the indexes for simulation PEOPLE. … The heavy lifting would be dispersed among the AI in the simulation PEOPLE.
Well, it seems clear that humans are part of the simulation. Our brains are made of normal matter, and cutting bits of them off materially affects how we think. Less morbidly, antidepressants (and a whole laundry list of other psychoactive drugs) can affect our worldview, moods, and thoughts. Those drugs are also made of normal matter, at least as far as we can tell, so there doesn’t seem to be a good way to keep a clear-cut distinction between the simulation people and the simulation universe.
Unfathomable numbers of them, which would be updated, deleted, etc. as deemed necessary by the HOST AI.
Does this line up with what we see in the real world? Do people exist in unfathomable numbers? Change instantly? Vanish without warning?
The heavy lifting would be dispersed among the AI in the simulation PEOPLE.
Using simulated systems to compute anything is almost always less efficient than just running the computations on the real computer. Compare the power of an old video game console to the power of the modern PC needed to emulate it: to correctly simulate even an old SNES, you need a very powerful computer. Using that simulated SNES to run anything, as opposed to just running it on your real-life computer, would be insane, unless it’s an old game that can only run on the SNES.
In short, running Breath of the Wild accurately requires fewer computational resources than accurately simulating an old SNES and playing the original Mario Bros on that simulated system. And that simulated system was designed for one reason alone—computation. Humans … kinda aren’t.
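For a toy picture of where the emulation tax comes from, here is the general shape of an interpreter loop (nothing like a real SNES emulator, just a made-up two-instruction guest machine):

```python
# Every single guest instruction costs the host a fetch, a decode, and
# a dispatch before the one "real" operation happens, so the host burns
# many of its own operations per emulated one.

def run_guest(program, memory):
    """Interpret a made-up guest instruction set on the host CPU."""
    pc = 0  # guest program counter
    while pc < len(program):
        op, arg = program[pc]      # host work: fetch
        if op == "INC":            # host work: decode/dispatch
            memory[arg] += 1       # the actual guest operation
        elif op == "JMPZ":         # jump if cell 0 is zero
            if memory[0] == 0:
                pc = arg
                continue
        pc += 1

mem = {0: 3, 1: 0}
run_guest([("INC", 1), ("INC", 1)], mem)
print(mem[1])  # 2 -- correct, but at several host steps per guest step
```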
The AI in the PEOPLE simulation would be unaware of the ENVIRONMENT simulation.
Except that we are very clearly aware of our environment. I can see the house that I live in, and measure the temperature outside, among hundreds of other mundane universe-me interactions. More generally, it doesn’t make much sense to me to simulate an entire universe, simulate a bunch of human minds, and somehow not put them together.
A sleeping person may actually be in an idle state, sharing its computing power among others.
Assuming that a sleeping person takes significantly fewer resources to simulate than a conscious one (doubtful), any reasonable computer would dynamically balance resources anyway. The method you’re suggesting, where the “person” program tells the host it can give up some resources, is called cooperative multitasking, and it dates back at least to the Apollo Guidance Computer in the 1960s, if not earlier. Note that we’ve largely moved to other forms of resource sharing because the cooperative approach has serious downsides: one task that never yields can stall everything else.
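For concreteness, here’s a minimal cooperative scheduler in Python. The names person and host_scheduler are purely illustrative, a sketch of the idea rather than how any real OS does it:

```python
# A minimal sketch of cooperative multitasking: each "person" task runs
# until it voluntarily yields. A task that never yields would starve all
# the others, which is the classic downside mentioned above.

def person(name, awake_ticks):
    for t in range(awake_ticks):
        print(f"{name}: awake, doing work (tick {t})")
        yield  # voluntarily hand control back to the host
    print(f"{name}: asleep, releasing my share of resources")

def host_scheduler(tasks):
    """Round-robin over tasks until every one of them has finished."""
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)          # run the task until its next yield
            tasks.append(task)  # it cooperated; schedule it again
        except StopIteration:
            pass                # task finished (went to sleep)

host_scheduler([person("alice", 2), person("bob", 3)])
```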
Since both AIs (in this example) will strive to learn and expand their computing power.
I think you need a more rigorous definition of computing power. In a traditional sense, there are metrics based on number of transistors, floating point operations per second, and so on, but machine learning doesn’t affect that. Machine learning is usually a property of the software, not the hardware, and so does not affect the power of that hardware.
If you want a metric for “power” of a software agent, you’ll need to be very careful about how you define it.
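To make the hardware/software split concrete, the traditional “power” metric is a property of the chip alone. A back-of-the-envelope sketch, with made-up numbers rather than any real chip’s specs:

```python
# Theoretical peak FLOPS is fixed by the hardware. All three numbers
# below are illustrative assumptions, not specs for a real processor.

cores = 8             # physical cores
clock_hz = 3.6e9      # cycles per second
flops_per_cycle = 16  # floating point ops per core per cycle (e.g. SIMD)

peak = cores * clock_hz * flops_per_cycle
print(f"theoretical peak: {peak / 1e9:.0f} GFLOPS")  # ~461 GFLOPS

# Running or training a machine-learning model changes what the
# software does with those operations; it leaves cores, clock_hz, and
# flops_per_cycle -- the hardware's power -- exactly where they were.
```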
I wanted to throw in how and why the AIs would help “offload”, or rather micromanage, and act as individual resource managers. Since the HOST computer doesn’t need to tell every individual AI within its simulation every detail, not every AI is in an active state.

Also, to piggyback on the Dimensional Cone Theory: what if every AI is also only rendering what it can or needs to see? It would explain why time can seem faster for some people than for others. The field of view is being drawn in on demand as they see it. We’re aware there are other dimensions, but we can’t see them because our AI isn’t rendering them, either because seeing them doesn’t help us or because it’s a drain on our current system version or available resources. Maybe the other dimensions are similar to test servers, and we’re living in the production server that’s the most stable.
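For what it’s worth, the “drawn in on demand” idea maps onto lazy evaluation plus caching. A toy sketch, with entirely hypothetical names:

```python
import functools

# Render a patch of the world only when an observer first looks at it;
# nothing outside anyone's field of view is ever computed.

@functools.lru_cache(maxsize=None)
def render_tile(x, y):
    print(f"rendering tile ({x}, {y})")  # expensive work happens once
    return f"tile({x},{y})"

def observe(view_x, view_y, radius=1):
    """Draw only the tiles inside the observer's field of view."""
    return [render_tile(x, y)
            for x in range(view_x - radius, view_x + radius + 1)
            for y in range(view_y - radius, view_y + radius + 1)]

observe(0, 0)  # renders the 9 tiles around the observer
observe(1, 0)  # renders only the 3 newly visible tiles; rest are cached
```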
“Except that we are very clearly aware of our environment. I can see the house that I live in, and measure the temperature outside, among hundreds of other mundane universe-me interactions. More generally, it doesn’t make much sense to me to simulate an entire universe, simulate a bunch of human minds, and somehow not put them together.”
We’re only aware of what we can observe; we have no direct connection (as in communication, I should’ve specified this earlier to avoid confusion) to the ENVIRONMENT simulation. We can only observe and adapt to what we can see and interact with. The PEOPLE simulation cannot directly communicate with the ENVIRONMENT simulation or other sub-simulations. The HOST system of the PEOPLE simulation can’t make queries to the ENVIRONMENT HOST system and ask for the source code of trees, or when a volcano is going to erupt, etc. Bluntly, it’s like having ‘read only’ access to files. That’s what makes it so interesting and exciting. That can also be one of the big questions: ‘Why?’ Maybe we’re just a test simulation.
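The ‘read only’ analogy is easy to make literal: observing the environment’s state works, writing to it fails. A tiny sketch (the file name is hypothetical):

```python
import io

# Set up a stand-in for the ENVIRONMENT's state.
with open("environment_state.txt", "w") as f:
    f.write("volcano: dormant\n")

env = open("environment_state.txt", "r")  # read-only handle
print(env.read())                         # observing is allowed

try:
    env.write("volcano: erupting\n")      # writing is not
except io.UnsupportedOperation as err:
    print("no write access:", err)

env.close()
```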
“I think you need a more rigorous definition of computing power. In a traditional sense, there are metrics based on number of transistors, floating point operations per second, and so on, but machine learning doesn’t affect that. Machine learning is usually a property of the software, not the hardware, and so does not affect the power of that hardware.”
I agree; that’s why I think the computer designed to run such a simulation is far beyond us. We can only reason from what WE have designed so far and compare against that. I’d go so far as to say it’s an organic machine or a hybrid of sorts. We could very well be 8-bit Mario sprites running on a Core i9-9900K. The HOST could very well be an organic computer, and as it grows it adds more cores to its processing power, allowing for more simulations.
Oh, and sorry for the wall of text :)