A better strategy would probably entail benign benevolence and cooperation with humans.
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.
If the AI can scale and perform about as well as human organizations, then why should we fear it? No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down. You say that “the internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication”, but this would only make the AI organization faster, not necessarily more effective. And, of course, if the AI wants to deal with the human world in some way—for example, by selling it games—it will be bottlenecked by human speeds.
The AI group spends the billions earned in video games to enter the microchip business, build foundries and data centers, etc.
My mistake; I thought that by “dominate human businesses” you meant something like “hack its way to the top”, not “build an honest business that outperforms human businesses”. That said:
The AIs have tremendous competitive advantages even discounting superintelligence—namely, no employee costs.
How are they going to build all those foundries and data centers, then? At some point, they still need to move physical bricks around in meatspace. Either they have to pay someone to do it, or… what?
data centers already need cooling to dump all the waste heat generated by bit erasure
There’s a big difference between cooling to room temperature and cooling to 63 K (some rough Landauer-limit numbers are sketched below). I have other objections to your reversible computing silver bullet, but IMO they’re a bit off-topic (though we can discuss them if you wish). But here’s another potentially huge problem I see with your argument:
In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.
Which time are we talking about? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI?
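On the bit-erasure point above, here is the promised back-of-the-envelope sketch, using the Landauer limit at the two temperatures in question; real hardware dissipates vastly more than this theoretical floor, so treat it as an illustration only:

```python
import math

# Landauer limit: the minimum heat dissipated per irreversible bit erasure is
# k_B * T * ln(2). These are theoretical floors only; real hardware dissipates
# many orders of magnitude more per bit, so the numbers are purely illustrative.
k_B = 1.380649e-23  # Boltzmann constant, J/K

for temperature_K in (300.0, 63.0):  # room temperature vs. the 63 K figure above
    joules_per_bit = k_B * temperature_K * math.log(2)
    print(f"{temperature_K:5.0f} K: {joules_per_bit:.2e} J per erased bit")

# Roughly 2.87e-21 J/bit at 300 K versus 6.03e-22 J/bit at 63 K, a factor of ~4.8.
```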
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.
Do people mind if this is done openly and only when they are playing the game itself? My strong guess would be no. The fact that there are volunteer distributed computing systems also suggests that it isn’t that difficult to get people to free up their extra clock cycles.
Yeah, the “voluntary” part is key to getting humans to like you and your project. On the flip side, illicit botnets are quite effective at harnessing “spare” (i.e., owned by someone else) computing capacity; so, it’s a bit of a tradeoff.
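For concreteness, the opt-in pattern those volunteer projects rely on is easy to sketch; everything below is hypothetical placeholder code, not any real client’s API:

```python
import time

# Hypothetical sketch of the opt-in "spare cycles" pattern used by volunteer
# distributed-computing clients. All names are placeholders, not a real API.

def user_has_opted_in() -> bool:
    return True  # a real client would read an explicit user preference

def machine_is_idle() -> bool:
    return True  # e.g. the player is in a menu, or GPU utilization is low

def fetch_work_unit() -> dict:
    return {"id": 1}  # placeholder for downloading a work unit from a server

def run_work_unit(unit: dict) -> dict:
    return {"id": unit["id"], "status": "done"}  # placeholder computation

def donation_loop() -> None:
    """Do donated work only while the user has consented and the machine is idle."""
    while user_has_opted_in():
        if machine_is_idle():
            result = run_work_unit(fetch_work_unit())
            # submit_result(result)  # hypothetical upload step
        time.sleep(60)  # re-check periodically; never preempt the user's own work
```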
I don’t think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work.
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI, as it’s an application area where interim experiments can pay rent, so to speak.
If the AI can scale and perform about as well as human organizations, then why should we fear it?
I never said or implied merely “about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down.
The discussion centered on lowering Earth’s oxygen content, and the obvious implied solution is killing earthlife, not giant suction machines. I pointed out that nuclear weapons are a likely route to killing earthlife. There are at least two human organizations that have the potential to accomplish this already, so your trouble in imagining the scenario may indicate something other than what you intended.
How are they going to build all those foundries and data centers, then?
Only in movies are AI overlords limited to employing robots. If human labor is the cheapest option, then they can simply employ humans. On the other hand, once we have superintelligence, advanced robotics is almost a given.
Which time are we talking about? I have a pretty sweet gaming setup at home (though it’s already a year or two out of date), and there’s no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI?
After coming up to speed somewhat on the AI/AGI literature in the last year or so, I reached the conclusion that we could run an AGI on a cluster of perhaps 10-100 high-end GPUs of today, or roughly one circa-2020 GPU.
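To unpack the arithmetic implied by that estimate (a rough sketch; treating “today” as roughly 2012, the era of the GPUs discussed elsewhere in this thread, is my own assumption):

```python
import math

# What the "10-100 GPUs of today ~ one circa-2020 GPU" estimate implies for
# per-GPU throughput growth. Treating "today" as roughly 2012 is my own
# assumption, based on the GPUs discussed elsewhere in this thread.
years = 2020 - 2012
for growth_factor in (10, 100):
    doubling_time_years = years / math.log2(growth_factor)
    print(f"{growth_factor}x over {years} years ~ doubling every "
          f"{doubling_time_years:.1f} years")

# A 10x gap implies per-GPU throughput doubling roughly every 2.4 years;
# a 100x gap implies doubling roughly every 1.2 years.
```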
The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI...
I think this is one of many possible paths, though I wouldn’t call any of them “likely” to happen—at least, not in the next 20 years. That said, if the AI is an NPC in a game, then of course it makes sense that it would harness the game for its CPU cycles; that’s what it was built to do, after all.
“about as well”. Human verbal communication bandwidth is at most a few measly kilobits per second.
Right, but my point is that communication is just one piece of the puzzle. I argue that, even if you somehow enabled us humans to communicate at 50 MB/s, our organizations would not become 400,000 times more effective.
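For the record, here is the arithmetic behind that factor, taking human speech at roughly 1 kbit/s (consistent with the “at most a few kilobits per second” figure above):

```python
# Where the 400,000x figure comes from: a 50 MB/s digital link versus human
# speech taken at roughly 1 kbit/s (the low end of "a few kilobits per second").
speech_bits_per_second = 1_000           # ~1 kbit/s of spoken communication
link_bits_per_second = 50 * 10**6 * 8    # 50 MB/s = 400 Mbit/s
print(link_bits_per_second / speech_bits_per_second)  # -> 400000.0
```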
There are at least two human organizations that have the potential to accomplish this already
Which ones? I don’t think that even WW3, given our current weapon stockpiles, would result in the successful destruction of all plant life. Animal life, maybe, but there are quite a few plants and algae out there. In addition, I am not entirely convinced that an AI could start WW3; keep in mind that it can’t simply hack its way to total access to all nuclear weapons, because they are not connected to the Internet in any way.
If human labor is the cheapest option, then they can simply employ humans.
But then they lose their advantage of having zero employee costs, which you brought up earlier. In addition, whatever plans the AIs intend to execute become bottlenecked by human speeds.
On the other hand, once we have superintelligence then advanced robotics is almost a given.
It depends on what you mean by “advanced”, though in general I think I agree.
we could run an AGI on a cluster of perhaps 10-100 high-end GPUs of today
I am willing to bet money that this will not happen, assuming that by “high-end” you mean something like Nvidia’s GeForce GTX 680. What are you basing your estimate on?