Any artificial intelligence will have internal structure. Artificial intelligences, unlike humans, do not come in standard-sized reproductive units, walled off computationally; there is therefore no reason to expect individuals to exist in a post-AI society. But the bulk of the computation, and hence the bulk of the potential consciousness, will be within small, local units (due to the ubiquity of power-law distributions, the efficiency of fractal transport and communication networks, and the speed of light).
Physics is local. The speed of light is a derivative of that general principle. The local nature of our universe implies some strict limits on intelligence. Curiously, it looks like the only way to transcend these limits (to get a really powerful single intelligence/computer) is to collapse into a black hole, at which point you necessarily seal yourself off and give up any power in this universe. Interesting indeed.
But I have no idea how you leap to the conclusion “there is therefore no reason to expect individuals to exist in a post-AI society”, though that’s partly because I don’t know what a post-AI society is. I understand post-human... but post-AI? Is that the next thing after the next thing? That seems to be getting ahead of ourselves.
Also, you seem to reach the conclusion that there will not necessarily be any individuality in the ‘post-AI’ future society, but then give several good reasons why such individuality may persist (namely, the speed of light and the locality of physics).
But what is individuality? One could say that we are a global consciousness today with just the “bulk of computation” in “small, local units”.
“Physics is local. The speed of light is a derivative of that general principle.”
I’m not sure I follow this. A purely Newtonian universe with no gravity (to keep things simple) would have completely local laws and no speed-of-light limit.
When you say “[a] purely Newtonian universe with no gravity,” do you mean a universe in which light doesn’t exist at all as a trivial counterexample to the above claim? Or do you actually have in mind some more complex point?
I was interpreting ‘speed of light’ in this context to mean that there’s a maximum speed in general; otherwise the claim becomes trivially false. In that regard, the claim isn’t true: one could make a universe that was essentially Newtonian and had some sort of particle or wave that functioned like light, one that didn’t move instantaneously but could move at different speeds. (Actually, now that I’ve said that, I have to wonder whether the post I was replying to meant that locality implies light always has a finite speed, which is true.) I suspect that you can get a general result about a maximum speed if you insist on something slightly stronger than locality, by analogy to the distinction between continuous functions and uniformly continuous functions, but I haven’t thought out the details.
Oh, I see. Thanks for the explanation.
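To make the continuous-versus-uniformly-continuous analogy above concrete, here is a rough formalization (my own sketch, not a worked-out theorem): continuity lets the modulus depend on the point, uniform continuity demands one modulus for all points, and the locality-versus-speed-limit distinction looks parallel.

    % Continuity: the modulus \delta may depend on the point x.
    \forall x \,\forall \varepsilon > 0 \,\exists \delta > 0 \,\forall y :\;
        |y - x| < \delta \implies |f(y) - f(x)| < \varepsilon
    % Uniform continuity: one \delta works for every x.
    \forall \varepsilon > 0 \,\exists \delta > 0 \,\forall x \,\forall y :\;
        |y - x| < \delta \implies |f(y) - f(x)| < \varepsilon
    % Analogy: locality alone permits a position-dependent signal speed v(x);
    % a genuine universal speed limit requires \sup_x v(x) = c < \infty.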
From context, I believe “post-AI” means after AI “occurs”; that is, after the beginning of AI, or during the period in which AI is a major shaping force.
Ah, of course. For some reason I couldn’t interpret it as anything other than a miswritten ‘posthuman’.
I don’t think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running through its interior and projectiles continually moving along those tunnels, kept away from the tunnel walls by a magnetic system and travelling much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their trajectories, opposing gravitational collapse. You could then build it as large as you like, provided you are prepared to give up some small amount of space to the active support system and are safe from power cuts.
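To put rough numbers on that mechanism, here is a minimal sketch (my own illustration; the mass, radius, and speed below are assumptions, not anything from the comment). A projectile circling at speed v in a tunnel of radius r needs a centripetal force of v²/r per kilogram; gravity supplies g = GM/r² of it, and whatever the magnets must add inward is, by Newton’s third law, pushed outward on the structure.

    # Hedged sketch: net outward support from a projectile stream in a
    # circular tunnel. Illustrative numbers only, not a real design.
    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def support_per_kg(v, r, m_enc):
        """Net outward force (N) per kg of projectile.

        v     : projectile speed (m/s)
        r     : tunnel radius (m)
        m_enc : mass enclosed within radius r (kg)
        """
        g = G * m_enc / r**2   # local gravitational acceleration
        return v**2 / r - g    # positive only if v exceeds orbital velocity

    # Example: an Earth-mass computer with a tunnel at Earth's radius.
    m_enc, r = 5.97e24, 6.371e6
    v_orb = (G * m_enc / r) ** 0.5              # orbital velocity, ~7.9 km/s
    print(support_per_kg(2 * v_orb, r, m_enc))  # ~29 N/kg, i.e. each kg of
                                                # projectile holds up ~3 kg

The outward push scales as v² minus v_orb², which is why “much faster than orbital velocity” does the real work in the argument.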
The general idea is that, because of the speed-of-light limitation, a computer’s maximum speed and communication efficiency are inversely proportional to its size.
The ultimate computer is thus necessarily dense to the point of gravitational collapse. See Seth Lloyd’s paper on the ultimate physical limits to computation for the details.
Any old humdrum really big computer wouldn’t have to collapse into a black hole, but any ultimate computer would have to. In fact, the size of the computer isn’t even the issue: the ultimate configuration of any matter (in theory) for computation must have the highest possible density, to maximize speed and minimize inter-component delay.
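As a back-of-the-envelope illustration of that inverse scaling (my own numbers; the only physics assumed is that no signal beats light): a machine of radius R cannot complete a round trip between its centre and its edge faster than 2R/c, which caps the rate of any globally synchronized clock.

    # Hedged sketch: light-speed cap on a globally synchronized clock.
    c = 299_792_458.0  # speed of light, m/s

    def max_sync_clock_hz(radius_m):
        """Upper bound on a clock that needs a round trip across radius_m."""
        return c / (2.0 * radius_m)

    for r in (0.1, 1.0, 1_000.0, 6.371e6):  # chip, desktop, city, planet
        print(f"R = {r:>12.1f} m  ->  <= {max_sync_clock_hz(r):.3e} Hz")

A planet-sized machine gets at most a few dozen globally coherent cycles per second, which is the sense in which hyper-large structures are “hyper-slow”.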
What about the uncertainty principle as component size decreases?
Look up Seth Lloyd; on his Wikipedia page, the first link down there is “Ultimate physical limits to computation”.
The uncertainty principle limits the maximum information storage per gram of mass and the maximum computation rate in bit operations per unit of energy; he discusses all of that.
However, the uncertainty principle is only really a limitation for classical computers. A quantum computer doesn’t have that issue (he discusses classical computers only; an ultimate quantum computer would be enormously more powerful).
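For scale, the two headline bounds from Lloyd’s paper can be re-derived in a few lines (a sketch of his published figures for a 1 kg, roughly 1 litre “ultimate laptop”; the Margolus–Levitin theorem caps operations per second at 2E/πħ, and the Bekenstein bound caps the bits storable in a region of radius R and energy E):

    # Hedged sketch: Lloyd's limits for a 1 kg, ~1 litre "ultimate laptop".
    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J*s
    c = 299_792_458.0       # speed of light, m/s

    m = 1.0                 # mass, kg
    R = 0.062               # radius of a ~1 litre sphere, m
    E = m * c**2            # total energy, J

    ops_per_s = 2 * E / (math.pi * hbar)                       # Margolus-Levitin
    bits_max = 2 * math.pi * R * E / (hbar * c * math.log(2))  # Bekenstein

    print(f"{ops_per_s:.2e} ops/s")  # ~5.4e50, matching Lloyd's figure
    print(f"{bits_max:.2e} bits")    # loose upper bound (~1.6e42); Lloyd's
                                     # entropy estimate for real matter is lower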
What is the problem with whoever voted that down? There isn’t any violation of the laws of nature involved in actively supporting something against collapse like that, any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Could I have an explanation of what the issue is? Did I misunderstand the reference to computers collapsing into black holes, for example?
Hyper-large structures are hyper-slow and hyper-dumb; see my reply above. The future of computation is to shrink forever. (I didn’t downvote your comment, by the way.)
“Also, you seem to reach the conclusion that there will not necessarily be any individuality in the ‘post-AI’ future society, but then give several good reasons why such individuality may persist.”
Yes, it’s a matter of degree. Humans are isolated to a greater extent than I expect AIs to be. Also, the word “individual” means (I think) non-divisible, which AIs probably will not be.
I agree; the term ‘post-AI’ is confusing. I’ll remove the ‘post’.