The AI grows to understand that humans mostly use all this computational power for entertainment. It masters game theory, design, programming, 3D art, and so on.
Yeah, it could do all that, or it could just do what humans today are doing, which is to infect some Windows PCs and run a botnet :-)
That said, there are several problems with your scenario.
Splitting up a computation among multiple computing nodes is not a trivial task. It is easy to run into diminishing returns, where your nodes spend more time on synchronizing with each other than on working. In addition, your computation will quickly become bottlenecked by network bandwidth (and latency); this is why companies like Google spend a lot of resources on constructing custom data centers.
I am not convinced that any agent, AI or not, could effectively control “all of the businesses of man”. This problem is very likely NP-Hard (at least), as well as intractable, even if the AI’s botnet was running on every PC on Earth. Certainly, all attempts by human agents to “acquire” even something as small as Europe have failed miserably so far.
Even controlling a single business would be very difficult for the AI. Traditionally, when a business’s computers suffer a critical failure—or merely a security leak—the business owners (even ones as incompetent as Sony) end up shutting down the affected parts of the business, or switching to backups, such as “human accountants pushing paper around”.
Unleashing “Nuclear acquisitions”, “War” and “Hell” would be counter-productive for the AI, even assuming such a thing were possible. If the AI succeeded in doing this, it would undermine its own power base. Unless the AI’s explicit purpose is “Unleash Hell as quickly as possible”, it would strive to prevent this from happening.
You say that “there is no necessarily inherent physical energy cost of computation, it truly can approach zero”, but I don’t see how this could be true. At the end of the day, you still need to push electrons down some wires; in fact, you will often have to push them quite far, if your botnet is truly global. Pushing things takes energy, and you will never get all of it back by pulling things back at some future date. You say that “superintelligences will probably heavily exploit” this approach, but isn’t it the case that without it, superintelligences won’t form in the first place? You also say that “It requires superconductor temperatures and environments”, but the energy you spend on cooling your superconductor is not free.
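For reference, the physics behind the disputed claim is Landauer’s principle: only irreversible bit erasure carries a thermodynamic floor of kT·ln 2 joules per bit, while reversible operations can in principle dissipate arbitrarily little. A back-of-the-envelope sketch (room temperature assumed):

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # assumed ambient temperature, K

    # Landauer limit: minimum dissipation per *erased* bit.
    print(k_B * T * math.log(2))   # ~2.87e-21 J per bit erased

Reversible computing sidesteps this floor by never erasing bits, which is the sense in which the cost “can approach zero”; wire losses and cooling, as noted above, are separate and very real costs.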
Ultimately, there’s an upper limit on how much computation you can get out of a cubic meter of space, dictated by quantum physics. If your AI requires more power than can be physically obtained, then it’s doomed.
While Jacob’s scenario seems unlikely, the AI could do similar things with a number of other options. Not only are botnets an option, but it is possible to do some really sneaky, nefarious things in code, like compilers that insert additional instructions into the code they compile (worse, they could do so even when compiling a new compiler, so the backdoor survives recompilation from clean source; this is Ken Thompson’s classic “trusting trust” attack). Stuxnet has shown that sneaky behavior is surprisingly easy to get into secure systems. An AI that had a few years’ head start and could, for example, make its own modifications to communication satellites could be quite insidious.
Not only are botnets an option, but it is possible to do some really sneaky nefarious things in code
What kinds of nefarious things, exactly? Human virus writers have learned, in recent years, to make their exploits as subtle as possible. Sure, it’s attractive to make the exploited PC send out 1000 spam messages per second—but then, its human owner will inevitably notice that his computer is “slow”, and take it to the shop to get reformatted, or simply buy a new one. Biological parasites face the same problem; they need to reproduce efficiently, but not so efficiently that they kill the host.
Stuxnet has shown that sneaky behavior is surprisingly easy to get into secure systems
Yes, and this spectacularly successful exploit—and it was, IMO, spectacular—managed to destroy a single secure system, in a specific way that will most likely never succeed again (and that was quite unsubtle in the end). It also took years to prepare, and involved physical actions by human agents, IIRC. The AI has a long way to go.
Well, the evil compiler is, I think, the most nefarious thing anyone has come up with that’s a publicly known general stunt. But it is by nature a long-term trick. Similar remarks apply to the Stuxnet point: in that context, they wanted to destroy a specific secure system and weren’t going for any sort of large-scale global control. They weren’t people interested in being able to bring all the world’s satellite communications under their own control whenever they wanted, nor were they interested in carefully timed nuclear meltdowns.
But there are definite ways that one can get things started: once the AI has a bank account of some sort, it can start earning money by doing Mechanical Turk and similar work. With enough of that, it can simply pay for server time. One doesn’t need a large botnet to start that off.
I think your point about physical agents is valid: they needed to have humans actually go and bring infected USBs to the relevant computers. But that’s partially due to the highly targeted nature of the job and the fact that the systems in question were much more secure than most. Also, the subtlety level was, I think, higher than you expect: Stuxnet wasn’t even noticed as an active virus until a single computer happened to have a particularly abnormal reaction to it. If that hadn’t happened, it is possible that the public would never have learned about it.
Similar remarks apply to the Stuxnet point: in that context, they wanted to destroy a specific secure system and weren’t going for any sort of large-scale global control. They weren’t people interested in being able to bring all the world’s satellite communications under their own control whenever they wanted, nor were they interested in carefully timed nuclear meltdowns…
Exploits only work for some systems. If you are dealing with different systems you will need different exploits. How do you reckon that such attacks won’t be visible and traceable? Packets do have to come from somewhere.
And don’t forget that our systems become ever more secure, and our toolbox to detect unauthorized use of information systems is becoming more advanced.
As a computer security guy, I disagree substantially. Yes, newer versions of popular operating systems and server programs are usually more secure than older versions; it’s easier to hack into Windows 95 than Windows 7. But this is happening within a larger ecosystem that’s becoming less secure: More important control systems are being connected to the Internet, more old, unsecured/unsecurable systems are as well, and these sets have a huge overlap. There are more programmers writing more programs for more platforms than ever before, making the same old security mistakes; embedded systems are taking a larger role in our economy and daily lives. And attacks just keep getting better.
If you’re thinking there are generalizable defenses against sneaky stuff with code, check out what mere humans come up with in the Underhanded C Contest. Those tricks are hard to detect for dedicated experts who know there’s something evil within a few lines of C code. Alterations that sophisticated would never be caught in the wild—hell, it took years to figure out that the most popular crypto program running on one of the more secure OSes was basically worthless.
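For a flavor of how innocuous such a trick can look, here is a minimal sketch in Python (a hypothetical function, not from any real codebase) of the kind of bug that contest rewards: a comparison that reads like a routine signature check but quietly accepts truncated input.

    def check_signature(sig: bytes, expected: bytes) -> bool:
        # Reads like an ordinary byte-by-byte comparison, but zip()
        # stops at the shorter sequence: any prefix of the real
        # signature -- including the empty one -- sails through.
        for a, b in zip(sig, expected):
            if a != b:
                return False
        return True

    assert check_signature(b"secret", b"secret")   # legitimate use
    assert check_signature(b"", b"secret")         # the backdoor: accepted!

A reviewer skimming this sees a standard comparison loop; the missing length check is the entire exploit.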
Humans are not good at securing computers.

Sure we are, we just don’t care very much. The method of “Put the computer in a box and don’t let anyone open the box” (alternately, only let one person open the box) was developed decades ago and is quite secure.

I would call that securing a Turing machine. A computer, colloquially, has accessible inputs and outputs, and its value is subject to network effects.

Also, if you put the computer in a box developed decades ago, the box probably isn’t TEMPEST compliant.
Yeah, it could do all that, or it could just do what humans today are doing, which is to infect some Windows PCs and run a botnet :-)
It could/would, but this is an inferior mainline strategy. Too obvious, doesn’t scale as well. Botnets infect many computers, but they ultimately add up to computational chump change. Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.
Splitting up a computation among multiple computing nodes is not a trivial task.
True. Don’t try this at home.
… spend a lot of resources on constructing custom data centers.
Also part of the plan. The home PCs are a good starting resource, low-hanging fruit, but you’d also need custom data centers. These quickly become the main resources.
Even controlling a single business would be very difficult for the AI.
Nah.

Unless the AI’s explicit purpose is “Unleash Hell as quickly as possible”, it would strive to prevent this from happening.

The AI’s entire purpose is to remove earth’s oxygen. See the post above for the original reference. The AI is not interested in its power base for the sake of power. It only cares about oxygen. It loathes oxygen.

You say that “there is no necessarily inherent physical energy cost of computation, it truly can approach zero”, but I don’t see how this could be true.

Fortunately, the internets can be your eyes.

That seems like an insufficient reply to address Bugmaster’s point. Can you expand on why you think it would be not too hard?

We are discussing a superintelligence, a term which has a particular common meaning on this site. If we taboo the word and substitute in its definition, Bugmaster’s statement becomes:

“Even controlling a single business would be very difficult for the machine that can far surpass all the intellectual activities of any man however clever.”

Since “controlling a single business” is in fact one of these activities, this is false; no inference steps required.

Perhaps Bugmaster is assuming the AI would be covertly controlling businesses, but if so he should have specified that. I didn’t assume that, and in this scenario the AI could be out in the open, so to speak. Regardless, it wouldn’t change the conclusion: humans can covertly control businesses.

Yes, I would also like to see a better explanation.
Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.
It’s a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.
Splitting up a computation among multiple computing nodes is not a trivial task.

True. Don’t try this at home.
Ok, let me make a stronger statement, then: it is not possible to scale an arbitrary computation linearly simply by adding more nodes. At some point, the cost of coordinating one more node becomes higher than the benefit of adding it. In addition, as I mentioned earlier, network bandwidth and latency will become your limiting factor relatively quickly.
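A toy model makes the point concrete. Assume the work itself parallelizes perfectly, but each node adds a fixed coordination cost (the constants are illustrative, not measurements):

    def speedup(n, work=1000.0, sync_per_node=0.5):
        # Runtime on n nodes: parallel work plus linear coordination cost.
        return work / (work / n + sync_per_node * n)

    for n in (1, 4, 16, 45, 256):
        print(n, round(speedup(n), 1))
    # 1 -> 1.0, 4 -> 4.0, 16 -> 14.2, 45 -> 22.4, 256 -> 7.6
    # Speedup peaks near sqrt(work / sync_per_node), about 45 nodes here;
    # past that, adding nodes actively slows the computation down.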
The home PCs are a good starting resource, low-hanging fruit, but you’d also need custom data centers. These quickly become the main resources.
How will the AI acquire those data centers? Would it have enough power in its conventional botnet (or game-net, if you prefer) to “take over all human businesses” and cause them to be built? Current botnets are nowhere near powerful enough for that—otherwise human spammers would have done it already.
The AI’s entire purpose is to remove earth’s oxygen. See the post above for the original reference.
My bad, I missed that reference. In this case, yes, the AI would have no problem with unleashing Global Thermonuclear War (unless there was some easier way to remove the oxygen).
Fortunately, the internets can be your eyes.
I still don’t understand how this reversible computing will work in the absence of a superconducting environment—which would require quite a bit of energy to run. Note that if you want to run this reversible computation on a global botnet, you will have to cool transoceanic cables… and I’m not sure what you’d do with satellite links.
Yes, most likely, but not really relevant here. You seem to be connecting all of the point 2 and point 1 stuff together, but they really don’t relate.
My point is that (a) if the AI can’t get the computing resources it needs out of the space it has, then it will never accomplish its goals, and (b) there’s an upper limit on how much computing you can extract out of a cubic meter of space, regardless of what technology you’re using. Thus, (c) if the AI requires more resources than could conceivably be obtained, then it’s doomed. Some of the tasks you outline—such as “take over all human businesses”—will likely require more resources than can be obtained.
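For point (b), the usual citation is the Margolus–Levitin theorem: a system with total energy E can perform at most roughly 6×10^33 × E elementary operations per second. Lloyd’s well-known “ultimate laptop” figure falls out directly:

    c = 2.998e8            # speed of light, m/s
    E = 1.0 * c**2         # rest energy of 1 kg of matter, ~9e16 J
    print(6e33 * E)        # ~5.4e50 ops/s for one kilogram of matter

The bound is astronomically far above any engineering reality, so heat removal and error rates bind long before the quantum limits do; but the limit exists, which is all point (b) needs.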
It’s a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.
The botnet makes the AI a criminal from the beginning, putting it into an antagonistic relationship with humanity. A better strategy would probably entail benign benevolence and cooperation with humans.
Splitting up a computation among multiple computing nodes is not a trivial task.
True. Don’t try this at home.
Ok, let me make a stronger statement, then …
I agree with that subchain, but we don’t need to get into that. I’ve actually argued that track here myself (parallelization constraints as a limiter on hard takeoffs).
But that’s all beside the point. This scenario I presented is a more modest takeoff. When I described the AI as becoming a civilization unto itself, I was attempting to imply that it was composed of many individual minds. Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.
The internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication, so the AI civilization can employ a much wider set of distribution strategies.
How will the AI acquire those data centers?

Buy them? Build them? Perhaps this would be more fun if we switched out of the adversarial stance or switched roles.

Would it have enough power in its conventional botnet (or game-net, if you prefer) to “take over all human businesses” and cause them to be built?
Quote me, but don’t misquote me. I actually said:
“Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them.”
The AI group sends the billions earned in video games to enter the microchip business, build foundries and data centers, etc. The AIs have tremendous competitive advantages even discounting superintelligence—namely, no employee costs. Humans cannot hope to compete.
I still don’t understand how this reversible computing will work in …
Yes, reversible computing requires superconducting environments; no, this does not necessarily increase energy costs for a data center, for two reasons: 1. data centers already need cooling to dump all the waste heat generated by bit erasure; 2. cooling cost to maintain the temperature differential scales with surface area, but total computing power scales with volume.
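Point 2 is just surface-to-volume scaling: for a cube of side s, cooling load grows with the 6s² of surface while computing elements grow with the s³ of volume, so compute per unit of cooling grows linearly with s.

    for s in (1, 10, 100):        # cube side length, meters
        area, volume = 6 * s**2, s**3
        print(s, round(volume / area, 2))   # 0.17, 1.67, 16.67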
If you question how reversible computing could work in general, first read the primary literature in that field to at least understand what they are proposing.
I should point out that there is an alternative tech path which will probably be the mainstream route to further computational gains in the decades ahead.
Even if you can’t shrink circuits further or reduce their power consumption, you could still reduce their manufacturing cost and build increasingly larger stacked 3D circuits where only a tiny portion of the circuitry is active at any one time. This is in fact how the brain solves the problem. It has a mass of circuitry equivalent to a large supercomputer (roughly a petabit) but runs on only 20 watts. The smallest computational features in the brain are slightly larger than our current smallest transistors. So it does not achieve its much greater power efficiency by using much more miniaturization.
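A toy calculation shows how sparse activation does the work here; every number below is illustrative rather than a measured brain parameter:

    total_elements = 1e15      # ~petabit of circuitry, per the figure above
    active_frac = 1e-4         # assume only 0.01% switches per cycle
    joules_per_switch = 1e-14  # illustrative switching energy
    rate_hz = 1e3              # slow, brain-like update rate
    print(total_elements * active_frac * joules_per_switch * rate_hz)  # 1.0 W

Dial the active fraction up to 100% and the same hardware would need ten kilowatts; sparsity, not miniaturization, keeps the budget small.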
My point is that (a) if the AI can’t get the computing resources it needs out of the space it has, then
I see. In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.
A better strategy would probably entail benign benevolence and cooperation with humans.
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.
If the AI can scale and perform about as well as human organizations, then why should we fear it? No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down. You say that “the internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication”, but this would only make the AI organization faster, not necessarily more effective. And, of course, if the AI wants to deal with the human world in some way—for example, by selling it games—it will be bottlenecked by human speeds.
The AI group sends the billions earned in video games to enter the microchip business, build foundries and data centers, etc.
My mistake; I thought that by “dominate human businesses” you meant something like “hack its way to the top”, not “build an honest business that outperforms human businesses”. That said:
The AIs have tremendous competitive advantages even discounting superintelligence—namely, no employee costs.
How are they going to build all those foundries and data centers, then? At some point, they still need to move physical bricks around in meatspace. Either they have to pay someone to do it, or… what?
data centers already need cooling to dump all the waste heat generated by bit erasure
There’s a big difference between cooling to room temperature, and cooling to 63 K.
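To put numbers on that difference: an ideal Carnot refrigerator must spend W = Q·(T_hot − T_cold)/T_cold of work to remove heat Q at the cold side. A quick sketch, assuming 300 K ambient:

    T_hot = 300.0                       # ambient, K
    for T_cold in (290.0, 77.0, 63.0):  # near-room, liquid N2, 63 K
        print(T_cold, round((T_hot - T_cold) / T_cold, 2))
    # 290 K: ~0.03 J of work per joule of heat removed; 63 K: ~3.76 J,
    # over a hundred times more, before real-world inefficiencies.

I have other objections to your reversible computing silver bullet, but IMO they’re a bit off-topic (though we can discuss them if you wish). But here’s another potentially huge problem I see with your argument: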
In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.
Which time are we talking about? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI?
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Do people mind if this is done openly and only when they are playing the game itself? My guess would strongly be no. The fact that there are volunteer distributed computing systems would also suggest that it isn’t that difficult to get people to free up their extra clock cycles.
Yeah, the “voluntary” part is key to getting humans to like you and your project. On the flip side, illicit botnets are quite effective at harnessing “spare” (i.e., owned by someone else) computing capacity; so, it’s a bit of a tradeoff.
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work.
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI, as it’s an application area where interim experiments can pay rent, so to speak.
If the AI can scale and perform about as well as human organizations, then why should we fear it?
I never said or implied merely “about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down.
The discussion centered around lowering earth’s oxygen content, and the obvious implied solution is killing earthlife, not giant suction machines. I pointed out that nuclear weapons are a likely route to killing earthlife. There are at least two human organizations that have the potential to accomplish this already, so your trouble in imagining the scenario may indicate something other than what you intended.
How are they going to build all those foundries and data centers, then?
Only in movies are AI overlords constrained to only employing robots. If human labor is the cheapest option, then they can simply employ humans. On the other hand, once we have superintelligence then advanced robotics is almost a given.
Which time are we talking about? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI?
After coming up to speed somewhat on the AI/AGI literature in the last year or so, I reached the conclusion that we could run an AGI on a current cluster of perhaps 10-100 high-end GPUs of today, or, say, roughly one circa-2020 GPU.
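For scale, using published peak single-precision figures (a rough sanity check, nothing more):

    gtx_680 = 3.1e12    # GeForce GTX 680 (2012): ~3.1 TFLOPS peak FP32
    a100 = 19.5e12      # NVIDIA A100 (2020): ~19.5 TFLOPS peak FP32

    low, high = 10 * gtx_680, 100 * gtx_680
    print(low / a100, high / a100)   # ~1.6 to ~16 A100-equivalents in FP32

So “10-100 2012-era GPUs ≈ one 2020 GPU” is in the right ballpark: the low end roughly matches on raw FP32, and the high end matches once you credit the 2020 part’s tensor cores (~312 TFLOPS at FP16).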
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI...
I think this is one of many possible paths, though I wouldn’t call any of them “likely” to happen—at least, not in the next 20 years. That said, if the AI is an NPC in a game, then of course it makes sense that it would harness the game for its CPU cycles; that’s what it was built to do, after all.
I never said or implied merely “about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
Right, but my point is that communication is just one piece of the puzzle. I argue that, even if you somehow enabled us humans to communicate at 50 MB/s, our organizations would not become 400000 times more effective.
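(As an aside, the arithmetic checks out if we take speech at roughly one kilobit per second:)

    typed = 50e6 * 8       # 50 MB/s in bits per second
    speech = 1e3           # ~1 kbit/s verbal bandwidth, per the quote above
    print(typed / speech)  # 400000.0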
There are at least two human organizations that have the potential to accomplish this already
Which ones? I don’t think that even WW3, given our current weapon stockpiles, would result in a successful destruction of all plant life. Animal life, maybe, but there are quite a few plants and algae out there. In addition, I am not entirely convinced that an AI could start WW3; keep in mind that it can’t hack itself total access to all nuclear weapons, because they are not connected to the Internet in any way.
If human labor is the cheapest option, then they can simply employ humans.
But then they lose their advantage of having zero employee costs, which you brought up earlier. In addition, whatever plans the AIs plan on executing become bottlenecked by human speeds.
On the other hand, once we have superintelligence then advanced robotics is almost a given.
It depends on what you mean by “advanced”, though in general I think I agree.
we could run an AGI on a current cluster of perhaps 10-100 high end GPUs of today
I am willing to bet money that this will not happen, assuming that by “high end” you mean something like Nvidia’s GeForce GTX 680. What are you basing your estimate on?