Yeah, it could do all that, or it could just do what humans today are doing, which is to infect some Windows PCs and run a botnet :-)
It could/would, but this is an inferior mainline strategy. Too obvious, doesn’t scale as well. Botnets infect many computers, but they ultimately add up to computational chump change. Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.
Splitting up a computation among multiple computing nodes is not a trivial task.
True. Don’t try this at home.
… spend a lot of resources on constructing custom data centers.
Also part of the plan. The home PCs are a good starting resource, a low hanging fruit, but you’d also need custom data centers. These quickly become the main resources.
Even controlling a single business would be very difficult for the AI.
Nah.
Unless the AI’s explicit purpose is “Unleash Hell as quickly as possible”, it would strive to prevent this from happening.
The AI’s entire purpose is to remove earth’s oxygen. See the overpost for the original reference. The AI is not interested in its power base for sake of power. It only cares about oxygen. It loathes oxygen.
You say that “there is no necessarily inherent physical energy cost of computation, it truly can approach zero”, but I don’t see how this could be true.
We are discussing a superintelligence, a term which has a particular common meaning on this site. If we taboo the word and substitute in its definition, Bugmaster’s statement becomes:
“Even controlling a single business would be very difficult for the machine that can far surpass all the intellectual activities of any man however clever.”
Since “controlling a single business” is in fact one of these activities, this is false, no inference steps required.
Perhaps Bugmaster is assuming the AI would be covertly controlling businesses, but if so he should have specified that. I didn’t assume that, and in this scenario the AI could be out in the open, so to speak. Regardless, it wouldn’t change the conclusion. Humans can covertly control businesses.
Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.
It’s a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.
Splitting up a computation among multiple computing nodes is not a trivial task. True. Don’t try this at home.
Ok, let me make a stronger statement then: it is not possible to scale any arbitrary computation in a linear fashion simply by adding more nodes. At some point, the cost of coordinating distributed tasks to one more node becomes higher than the benefit of adding the node to begin with. In addition, as I mentioned earlier, network bandwidth and latency will become your limiting factor relatively quickly.
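The turnover point described here can be sketched with a toy model (an illustration with assumed numbers, not anything from the thread): a fraction p of the work parallelizes perfectly, and each added node charges a fixed coordination overhead c standing in for bandwidth and latency costs.

```python
# Toy model: add nodes to a distributed computation and the speedup
# eventually *falls*. p and c are assumed values for illustration.

def speedup(n, p=0.95, c=0.001):
    """Runtime on 1 node divided by runtime on n nodes."""
    serial = 1.0 - p          # Amdahl's non-parallelizable remainder
    parallel = p / n          # the part that divides across nodes
    coordination = c * n      # overhead that grows with cluster size
    return 1.0 / (serial + parallel + coordination)

# Speedup rises, peaks, then declines as coordination dominates:
best = max(range(1, 2000), key=speedup)
```

With these particular constants the optimum cluster size lands around thirty nodes; past that point, adding a node costs more in coordination than it contributes in parallel work, which is exactly the claim being made.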
The home PCs are a good starting resource, a low hanging fruit, but you’d also need custom data centers. These quickly become the main resources.
How will the AI acquire those data centers? Would it have enough power in its conventional botnet (or game-net, if you prefer) to “take over all human businesses” and cause them to be built? Current botnets are nowhere near powerful enough for that—otherwise human spammers would have done it already.
The AI’s entire purpose is to remove earth’s oxygen. See the overpost for the original reference.
My bad, I missed that reference. In this case, yes, the AI would have no problem with unleashing Global Thermonuclear War (unless there was some easier way to remove the oxygen).
Fortunately, the internets can be your eyes.
I still don’t understand how this reversible computing will work in the absence of a superconducting environment—which would require quite a bit of energy to run. Note that if you want to run this reversible computation on a global botnet, you will have to cool transoceanic cables… and I’m not sure what you’d do with satellite links.
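For concreteness, the energy floor being argued over is Landauer’s bound: erasing one bit costs at least kT·ln 2, and reversible logic avoids the erasures, which is the loophole behind the earlier “approach zero” claim. A quick back-of-envelope (my numbers, not from the thread):

```python
import math

k = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                        # room temperature, K

# Minimum energy to irreversibly erase one bit (Landauer's principle):
landauer_j_per_bit = k * T * math.log(2)   # ~2.9e-21 J

# A machine erasing 10^18 bits/s right at this floor would dissipate
# only milliwatts; real hardware sits many orders of magnitude above it:
watts_at_limit = landauer_j_per_bit * 1e18
```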
Yes, most likely, but not really relevant here. You seem to be connecting all of the point 2 and point 1 stuff together, but they really don’t relate.
My point is that, a). if the AI can’t get the computing resources it needs out of the space it has, then it will never accomplish its goals, and b). there’s an upper limit on how much computing you can extract out of a cubic meter of space, regardless of what technology you’re using. Thus, c). if the AI requires more resources than could conceivably be obtained, then it’s doomed. Some of the tasks you outline—such as “take over all human businesses”—will likely require more resources than can be obtained.
It’s a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.
The botnet makes the AI a criminal from the beginning, putting it into an antagonistic relationship. A better strategy would probably entail benign benevolence and cooperation with humans.
Splitting up a computation among multiple computing nodes is not a trivial task.
True. Don’t try this at home.
Ok, let me make a stronger statement ..
I agree with that subchain but we don’t need to get into that. I’ve actually argued that track here myself (parallelization constraints as a limiter on hard takeoffs).
But that’s all beside the point. This scenario I presented is a more modest takeoff. When I described the AI as becoming a civilization unto itself, I was attempting to imply that it was composed of many individual minds. Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.
The internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication, so the AI civilization can employ a much wider set of distribution strategies.
How will the AI acquire those data centers?
Buy them? Build them? Perhaps this would be more fun if we switched out of the adversarial stance or switched roles.
Would it have enough power in its conventional botnet (or game-net, if you prefer) to “take over all human businesses” and cause them to be built?
Quote me, but don’t misquote me. I actually said:
“Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them.”
The AI group sends the billions earned in video games to enter the microchip business, build foundries and data centers, etc. The AIs have tremendous competitive advantages even discounting superintelligence—namely no employee costs. Humans cannot hope to compete.
I still don’t understand how this reversible computing will work in ..
Yes, reversible computing requires a superconducting environment; no, this does not necessarily increase energy costs for a data center, for two reasons: 1. data centers already need cooling to dump all the waste heat generated by bit erasure. 2. The cooling cost of maintaining the temperature differential scales with surface area, but total computing power scales with volume.
If you question how reversible computing could work in general, first read the primary literature in that field to at least understand what they are proposing.
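The surface-versus-volume point (reason 2) can be made concrete. Assuming a roughly cubic facility (my simplification), heat leaks in through the boundary area, which grows as L², while computing capacity scales with the enclosed volume, L³, so cooling overhead per unit of compute falls as 1/L:

```python
# Cooling burden vs. compute for a cube of side L (units arbitrary;
# constants are placeholders for illustration only).

def cooling_per_compute(L):
    surface = 6 * L**2        # heat influx scales with boundary area
    volume = L**3             # computing capacity scales with volume
    return surface / volume   # = 6 / L

# Doubling the linear size halves cooling cost per unit of compute:
ratio = cooling_per_compute(10) / cooling_per_compute(20)
```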
I should point out that there is an alternative tech path which will probably be the mainstream route to further computational gains in the decades ahead.
Even if you can’t shrink circuits further or reduce their power consumption, you could still reduce their manufacturing cost and build increasingly larger stacked 3D circuits where only a tiny portion of the circuitry is active at any one time. This is in fact how the brain solves the problem. It has a mass of circuitry equivalent to a large supercomputer (roughly a petabit) but runs on only 20 watts. The smallest computational features in the brain are slightly larger than our current smallest transistors. So it does not achieve its much greater power efficiency by using much more miniaturization.
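A rough version of that arithmetic, using the thread’s own figures (a petabit of circuitry, 20 watts) plus an assumed active fraction of 1% that is my own choice, not a measured number:

```python
# Sparse-activation arithmetic for a mostly-dark 3D circuit.

brain_storage_bits = 1e15       # "roughly a petabit" (from the text)
brain_watts = 20.0              # (from the text)

active_fraction = 0.01          # assumed: ~1% of circuitry active at once
active_bits = brain_storage_bits * active_fraction

# Power budget per simultaneously active bit of circuitry:
watts_per_active_bit = brain_watts / active_bits   # ~2e-12 W
```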
My point is that, a). if the AI can’t get the computing resources it needs out of the space it has, then
I see. In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.
A better strategy would probably entail benign benevolence and cooperation with humans.
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.
If the AI can scale and perform about as well as human organizations, then why should we fear it? No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down. You say that “the internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication”, but this would only make the AI organization faster, not necessarily more effective. And, of course, if the AI wants to deal with the human world in some way—for example, by selling it games—it will be bottlenecked by human speeds.
The AI group sends the billions earned in video games to enter the microchip business, build foundries and data centers, etc.
My mistake; I thought that by “dominate human businesses” you meant something like “hack its way to the top”, not “build an honest business that outperforms human businesses”. That said:
The AIs have tremendous competitive advantages even discounting superintelligence—namely no employee costs.
How are they going to build all those foundries and data centers, then? At some point, they still need to move physical bricks around in meatspace. Either they have to pay someone to do it, or… what?
data centers already need cooling to dump all the waste heat generated by bit erasure
There’s a big difference between cooling to room temperature and cooling to 63 K. I have other objections to your reversible computing silver bullet, but IMO they’re a bit off-topic (though we can discuss them if you wish). But here’s another potentially huge problem I see with your argument:
In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.
Which time are we talking about? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI?
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Do people mind if this is done openly and only when they are playing the game itself? My guess would strongly be no. The fact that there are volunteer distributed computing systems would also suggest that it isn’t that difficult to get people to free up their extra clock cycles.
Yeah, the “voluntary” part is key to getting humans to like you and your project. On the flip side, illicit botnets are quite effective at harnessing “spare” (i.e., owned by someone else) computing capacity; so, it’s a bit of a tradeoff.
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work.
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI, as it’s an application area where interim experiments can pay rent, so to speak.
If the AI can scale and perform about as well as human organizations, then why should we fear it?
I never said or implied merely “about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down.
The discussion centered around lowering earth’s oxygen content, and the obvious implied solution is killing earthlife, not giant suction machines. I pointed out that nuclear weapons are a likely route to killing earthlife. There are at least two human organizations that have the potential to accomplish this already, so your trouble in imagining the scenario may indicate something other than what you intended.
How are they going to build all those foundries and data centers, then?
Only in movies are AI overlords constrained to employing only robots. If human labor is the cheapest option, then they can simply employ humans. On the other hand, once we have superintelligence then advanced robotics is almost a given.
Which time are we talking about? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI?
After coming up to speed somewhat on AI/AGI literature in the last year or so, I reached the conclusion that we could run an AGI on a current cluster of perhaps 10-100 high end GPUs of today, or say roughly one circa 2020 GPU.
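As a sanity check on that estimate (the growth rate is my assumption, not a claim from the thread): if GPU throughput doubles roughly every two years, then from the GTX 680 era (~2012) to 2020 one would expect about 16x, which sits inside the quoted 10-100x range:

```python
# Crude consistency check of "10-100 GPUs of today ~= one circa-2020
# GPU", assuming throughput doubles every two years (my assumption).

years = 2020 - 2012
doublings = years / 2.0          # 4 doublings over 8 years
growth = 2 ** doublings          # 16x per-GPU throughput growth

in_range = 10 <= growth <= 100   # inside the quoted cluster size range
```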
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI...
I think this is one of many possible paths, though I wouldn’t call any of them “likely” to happen—at least, not in the next 20 years. That said, if the AI is an NPC in a game, then of course it makes sense that it would harness the game for its CPU cycles; that’s what it was built to do, after all.
“about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
Right, but my point is that communication is just one piece of the puzzle. I argue that, even if you somehow enabled us humans to communicate at 50 MB/s, our organizations would not become 400,000 times more effective.
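The 400,000x figure checks out arithmetically if “a few measly kilobits per second” is rounded down to 1 kbit/s (my rounding, chosen to match the quoted ratio):

```python
# Bandwidth ratio behind the 400,000x figure.

speech_bps = 1e3                 # ~1 kbit/s of human speech (assumed round number)
link_bps = 50e6 * 8              # 50 MB/s = 400 megabits per second

ratio = link_bps / speech_bps    # 400,000x
```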
There are at least two human organizations that have the potential to accomplish this already
Which ones? I don’t think that even WW3, given our current weapon stockpiles, would result in a successful destruction of all plant life. Animal life, maybe, but there are quite a few plants and algae out there. In addition, I am not entirely convinced that an AI could start WW3; keep in mind that it can’t hack its way to total access to all nuclear weapons, because they are not connected to the Internet in any way.
If human labor is the cheapest option, then they can simply employ humans.
But then they lose their advantage of having zero employee costs, which you brought up earlier. In addition, whatever plans the AIs execute become bottlenecked by human speeds.
On the other hand, once we have superintelligence then advanced robotics is almost a given.
It depends on what you mean by “advanced”, though in general I think I agree.
we could run an AGI on a current cluster of perhaps 10-100 high end GPUs of today
I am willing to bet money that this will not happen, assuming that by “high end” you mean something like Nvidia’s GeForce GTX 680. What are you basing your estimate on?