I don’t think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their trajectories around the tunnels, opposing gravitational collapse. You could then build it as large as you like—provided you are prepared to give up some small space to the active support system and are safe from power cuts.
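For a rough sense of the arithmetic behind that kind of active support, here is a small Python sketch. All of the masses, radii, and speeds are made-up illustration values, not a proposed design: the point is only that a projectile circulating faster than orbital velocity needs more centripetal force than gravity supplies, and the magnets that make up the difference push outward on the tunnel wall by reaction.

```python
# Back-of-the-envelope sketch of dynamic ("active") support: projectiles
# circulating faster than orbital velocity push outward on the tunnel walls
# through the magnets, and that reaction force carries structural weight.
# All parameter values below are hypothetical illustration numbers.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

M_structure = 6.0e24   # enclosed mass of the structure, kg (Earth-ish, for scale only)
r_tunnel    = 6.4e6    # radius of the circular support tunnel, m

g = G * M_structure / r_tunnel**2                  # local gravitational acceleration, m/s^2
v_orbital = (G * M_structure / r_tunnel) ** 0.5    # speed at which a projectile would simply orbit

v_projectile = 3.0 * v_orbital                     # run the projectiles at 3x orbital speed

# Net outward force per kilogram of circulating projectile:
# centripetal requirement v^2/r minus what gravity already supplies (g).
support_per_kg = v_projectile**2 / r_tunnel - g    # N per kg of projectile

# How much structural mass each kilogram of projectile can hold up against gravity.
structure_per_projectile_kg = support_per_kg / g

print(f"orbital speed        ~ {v_orbital:,.0f} m/s")
print(f"projectile speed     ~ {v_projectile:,.0f} m/s")
print(f"outward force        ~ {support_per_kg:,.1f} N per kg of projectile")
print(f"supported structure  ~ {structure_per_projectile_kg:,.1f} kg per kg of projectile")
```

Running the projectiles at n times orbital speed lets each kilogram of projectile hold up roughly n² − 1 kilograms of structure, which is why the support system could plausibly stay a small fraction of the whole.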
The general idea is that, because of the speed-of-light limit, a computer’s maximum speed and communication efficiency both fall off in inverse proportion to its size.
The ultimate computer is thus necessarily dense to the point of gravitational collapse. See Seth Lloyd’s limits-of-computation paper for the details.
Any old humdrum really big computer wouldn’t have to collapse into a black hole, but any ultimate computer would have to. In fact, the size of the computer isn’t even the issue: the ultimate configuration of matter for computation (in theory) must have extremely high density, to maximize speed and minimize inter-component delay.
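To put rough numbers on the speed-of-light point, here is a quick sketch (the diameters are arbitrary examples): the best-case latency of a signal tying the whole machine together grows linearly with its diameter, so the rate of globally coordinated steps falls in inverse proportion.

```python
# Illustration of why a physically larger computer is slower to coordinate:
# any signal spanning the whole machine is limited by the light-crossing time.

c = 3.0e8  # speed of light, m/s

# Hypothetical machine diameters, in metres
diameters = [1e-2, 1.0, 1e3, 1.2e7]   # chip-scale box, desktop, 1 km sphere, planet-sized sphere

for d in diameters:
    one_way = d / c              # best-case one-way signal latency, s
    sync_rate = 1.0 / one_way    # ceiling on globally synchronized steps per second
    print(f"diameter {d:>10.0e} m : latency >= {one_way:.2e} s, global clock <= {sync_rate:.2e} Hz")
```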
What about the uncertainty principle as component size decreases?
Look up Seth Lloyd; on his Wikipedia page, the first link at the bottom is “Ultimate Physical Limits to Computation”.
The uncertainty principle limits the maximum information storage per gram of mass and the maximum computation rate in bit operations per unit of energy; he discusses all of that.
However, the uncertainty principle is only really a limitation for classical computers. A quantum computer doesn’t have that issue (he discusses classical computers only; an ultimate quantum computer would be enormously more powerful).
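For concreteness, the headline operation-rate bound in Lloyd’s paper comes from the Margolus-Levitin theorem, which ties the maximum rate of elementary operations to the available energy. Here is a back-of-the-envelope reproduction of that figure for his hypothetical 1 kg “ultimate laptop”, assuming the entire rest-mass energy E = mc² is put to work (constants rounded; this is just a sanity check on the numbers, not code from the paper).

```python
# Margolus-Levitin bound on operation rate: ops/s <= 2E / (pi * hbar),
# evaluated for a hypothetical 1 kg device with E = m c^2.

import math

hbar = 1.0546e-34      # reduced Planck constant, J*s
c    = 2.998e8         # speed of light, m/s
m    = 1.0             # mass of the device, kg (Lloyd's example)

E = m * c**2                                 # total rest-mass energy, J
ops_per_second = 2 * E / (math.pi * hbar)    # Margolus-Levitin bound

print(f"energy budget      ~ {E:.2e} J")
print(f"max operation rate ~ {ops_per_second:.2e} ops/s")   # ~5e50 ops/s

# Lloyd's paper separately bounds memory by the maximum thermodynamic entropy
# of that kilogram of matter in a litre of volume, which works out to ~1e31 bits.
```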
What is the problem with whoever voted that down? There isn’t any violation of the laws of nature involved in actively supporting something against collapse like that, any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would certainly be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Could I get an explanation of what the issue is? Did I misunderstand the reference to computers collapsing into black holes, for example?
Hyper-large structures are hyper-slow and hyper-dumb; see my reply above. The future of computation is to shrink forever. I didn’t downvote your comment, btw.