It’s a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.
The botnet makes the AI a criminal from the beginning, putting it into an antagonistic relationship. A better strategy would probably entail benign benevolence and cooperation with humans.
Splitting up a computation among multiple computing nodes is not a trivial task.
True. Don’t try this at home.
Ok, let me make a stronger statement ..
I agree with that subchain, but we don’t need to get into that. I’ve actually argued that track here myself (parallelization constraints as a limiter on hard takeoffs).
But that’s all beside the point. This scenario I presented is a more modest takeoff. When I described the AI as becoming a civilization unto itself, I was attempting to imply that it was composed of many individual minds. Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.
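To make that scaling point slightly more concrete, here is a toy comparison (the node counts and the branching factor are my own illustrative assumptions, not anything from the thread): all-to-all coordination costs grow quadratically in the number of agents, whereas a hierarchy needs only a linear number of links and logarithmic depth, which is essentially the trick human organizations exploit.

```python
# Toy model: coordination cost for N agents under all-to-all chatter vs. a
# simple hierarchy with branching factor b. Purely illustrative numbers.
def links_all_to_all(n):
    return n * (n - 1) // 2        # every pair needs its own channel

def links_hierarchy(n):
    return n - 1                   # each agent reports to one superior

def depth_hierarchy(n, b=10):
    depth = 0                      # levels needed so that b**depth >= n
    while b ** depth < n:
        depth += 1
    return depth

for n in [100, 10_000, 1_000_000]:
    print(f"N={n:>9,}: all-to-all links={links_all_to_all(n):>15,}, "
          f"hierarchy links={links_hierarchy(n):>9,}, depth={depth_hierarchy(n)}")
```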
The internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication, so the AI civilization can employ a much wider set of distribution strategies.
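For a rough sense of what “many orders of magnitude” means here, a back-of-envelope sketch; the ~2 kbit/s speech figure and the two link speeds are my own assumed round numbers, not figures taken from this discussion:

```python
import math

# Assumed figures: speech carries on the order of a few kbit/s, while commodity
# internet links run from tens of Mbit/s up to ~1 Gbit/s between well-connected nodes.
speech_bps = 2e3                                   # ~2 kbit/s effective speech rate
links_bps = {"home broadband": 1e7, "data-center link": 1e9}

for name, bps in links_bps.items():
    ratio = bps / speech_bps
    print(f"{name}: ~{ratio:,.0f}x speech, roughly {math.log10(ratio):.0f} orders of magnitude")
```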
How will the AI acquire those data centers ?
Buy them? Build them? Perhaps this would be more fun if we switched out of the adversarial stance or switched roles.
Would it have enough power in its conventional botnet (or game-net, if you prefer) to “take over all human businesses” and cause them to be built ?
Quote me, but don’t misquote me. I actually said:
“Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them.”
The AI group uses the billions earned in video games to enter the microchip business, build foundries and data centers, etc. The AIs have tremendous competitive advantages even discounting superintelligence—namely, no employee costs. Humans cannot hope to compete.
I still don’t understand how this reversible computing will work in ..
Yes, reversible computing requires superconducting environments; no, this does not necessarily increase energy costs for a data center, for two reasons: 1. data centers already need cooling to dump all the waste heat generated by bit erasure, and 2. the cooling cost of maintaining the temperature differential scales with surface area, while total computing power scales with volume.
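A toy illustration of point 2, under the idealized assumption of a cube-shaped facility whose cooling load tracks surface area and whose computing capacity tracks volume:

```python
# Idealized cube of side L: cooling load ~ surface area (6*L^2),
# computing power ~ volume (L^3). Units are arbitrary.
for L in [1, 2, 4, 8]:
    surface = 6 * L ** 2           # proxy for cooling cost
    volume = L ** 3                # proxy for total computing power
    print(f"L={L}: cooling cost per unit of compute = {surface / volume:.2f}")
```

Under these assumptions, each doubling of the facility’s linear size halves the cooling cost per unit of compute.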
If you question how reversible computing could work in general, first read the primary literature in that field to at least understand what they are proposing.
I should point out that there is an alternative tech path which will probably be the mainstream route to further computational gains in the decades ahead.
Even if you can’t shrink circuits further or reduce their power consumption, you could still reduce their manufacturing cost and build increasingly larger stacked 3D circuits where only a tiny portion of the circuitry is active at any one time. This is in fact how the brain solves the problem. It has a mass of circuitry equivalent to a large supercomputer (roughly a petabit) but runs on only 20 watts. The smallest computational features in the brain are slightly larger than our current smallest transistors. So it does not achieve its much greater power efficiency by using much more miniaturization.
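A minimal sketch of that argument; every number below is an illustrative assumption (element count, switching energy, event rate), chosen only to show that average power tracks the active fraction rather than the total amount of circuitry:

```python
# Illustrative "mostly dark" circuit: power scales with how much of it is
# switching at any instant, not with how much of it exists.
total_units = 1e15        # assumed number of switching elements (petabit-scale, per the brain analogy)
energy_per_event = 1e-14  # assumed joules per switching event
event_rate = 100          # assumed events per second per *active* element

for active_fraction in [1.0, 1e-2, 1e-4, 1e-6]:
    power = total_units * active_fraction * event_rate * energy_per_event
    print(f"active fraction {active_fraction:g}: ~{power:g} W")
```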
My point is that, a) if the AI can’t get the computing resources it needs out of the space it has, then…
I see. In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.
A better strategy would probably entail benign benevolence and cooperation with humans.
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.
If the AI can scale and perform about as well as human organizations, then why should we fear it ? No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down. You say that “the internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication”, but this would only make the AI organization faster, not necessarily more effective. And, of course, if the AI wants to deal with the human world in some way—for example, by selling it games—it will be bottlenecked by human speeds.
The AI group uses the billions earned in video games to enter the microchip business, build foundries and data centers, etc.
My mistake; I thought that by “dominate human businesses” you meant something like “hack its way to the top”, not “build an honest business that outperforms human businesses”. That said:
The AIs have tremendous competitive advantages even discounting superintelligence—namely, no employee costs.
How are they going to build all those foundries and data centers, then ? At some point, they still need to move physical bricks around in meatspace. Either they have to pay someone to do it, or… what ?
data centers already need cooling to dump all the waste heat generated by bit erasure
There’s a big difference between cooling to room temperature and cooling to 63 K. I have other objections to your reversible computing silver bullet, but IMO they’re a bit off-topic (though we can discuss them if you wish). But here’s another potentially huge problem I see with your argument:
In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.
Which time are we talking about ? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI ?
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Do people mind if this is done openly and only when they are playing the game itself? My strong guess is no. The fact that there are volunteer distributed computing systems also suggests that it isn’t that difficult to get people to free up their extra clock cycles.
Yeah, the “voluntary” part is key to getting humans to like you and your project. On the flip side, illicit botnets are quite effective at harnessing “spare” (i.e., owned by someone else) computing capacity; so, it’s a bit of a tradeoff.
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work.
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI, as it’s an application area where interim experiments can pay rent, so to speak.
If the AI can scale and perform about as well as human organizations, then why should we fear it ?
I never said or implied merely “about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down.
The discussion centered around lowering earth’s oxygen content, and the obvious implied solution is killing earthlife, not giant suction machines. I pointed out that nuclear weapons are a likely route to killing earthlife. There are at least two human organizations that have the potential to accomplish this already, so your trouble in imagining the scenario may indicate something other than what you intended.
How are they going to build all those foundries and data centers, then ?
Only in movies are AI overlords constrained to employing nothing but robots. If human labor is the cheapest option, then they can simply employ humans. On the other hand, once we have superintelligence then advanced robotics is almost a given.
Which time are we talking about ? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI ?
After coming up to speed somewhat on the AI/AGI literature in the last year or so, I reached the conclusion that we could run an AGI on a cluster of perhaps 10-100 high-end GPUs of today, or roughly one circa-2020 GPU.
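For what it’s worth, a back-of-envelope check of that equivalence using approximate peak FP32 throughput figures; treat these as order-of-magnitude numbers rather than exact specs, and note that lower-precision throughput on 2020-era parts would push the multiplier higher:

```python
# Approximate peak FP32 throughput, in TFLOPS (rough figures, not exact specs).
gtx_680_tflops = 3.1       # ~2012 high-end GPU
gpu_2020_tflops = 36.0     # ~2020 high-end GPU

low, high = 10 * gtx_680_tflops, 100 * gtx_680_tflops
print(f"10-100 x 2012 GPU: ~{low:.0f}-{high:.0f} TFLOPS")
print(f"one 2020-class GPU: ~{gpu_2020_tflops:.0f} TFLOPS "
      f"(~{gpu_2020_tflops / gtx_680_tflops:.0f}x a single 2012 GPU)")
```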
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI...
I think this is one of many possible paths, though I wouldn’t call any of them “likely” to happen—at least, not in the next 20 years. That said, if the AI is an NPC in a game, then of course it makes sense that it would harness the game for its CPU cycles; that’s what it was built to do, after all.
“about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
Right, but my point is that communication is just one piece of the puzzle. I argue that, even if you somehow enabled us humans to communicate at 50 MB/s, our organizations would not become 400,000 times more effective.
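To be clear about where the 400,000 comes from, it is just the raw bandwidth ratio under the assumption of speech at roughly 1 kbit/s (my reading of “a few kilobits per second”):

```python
# The 400,000 figure is just the bandwidth ratio itself (assuming ~1 kbit/s speech).
speech_bps = 1e3             # assumed: ~1 kbit/s of spoken communication
upgraded_bytes_per_s = 50e6  # the hypothetical 50 MB/s channel

ratio = upgraded_bytes_per_s * 8 / speech_bps
print(f"bandwidth ratio: {ratio:,.0f}x")   # -> 400,000x
```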
There are at least two human organizations that have the potential to accomplish this already
Which ones ? I don’t think that even WW3, given our current weapon stockpiles, would result in a successful destruction of all plant life. Animal life, maybe, but there are quite a few plants and algae out there. In addition, I am not entirely convinced that an AI could start WW3; keep in mind that it can’t hack itself total access to all nuclear weapons, because they are not connected to the Internet in any way.
If human labor is the cheapest option, then they can simply employ humans.
But then they lose their advantage of having zero employee costs, which you brought up earlier. In addition, whatever plans the AIs intend to execute become bottlenecked by human speeds.
On the other hand, once we have superintelligence then advanced robotics is almost a given.
It depends on what you mean by “advanced”, though in general I think I agree.
we could run an AGI on a cluster of perhaps 10-100 high-end GPUs of today
I am willing to bet money that this will not happen, assuming that by “high-end” you mean something like Nvidia’s GeForce GTX 680. What are you basing your estimate on ?