Argument for a powerful AI being unlikely: has this been considered before?
One problem I see here is the implicit “lone hero inventor” assumption, namely that there are individuals optimizing things for their goals on their own, and that an AI could be extremely powerful at doing exactly that. I would like to propose a different model.
This model says that intelligence is primarily a social, communicative skill: the skill of disassembling (understanding, Latin intelligo), playing with, and reassembling ideas acquired from other people. It is literally what we are doing on this forum. It is conversational. It is the standing-on-the-shoulders-of-giants thing, not the lone-hero thing.
In this model, inventions are made by the whole of humankind acting as a network, where each brain is a node communicating slightly modified ideas to the others.
In such a network, one 10,000-IQ node does not become very powerful, and it does not even make the network as a whole very powerful; i.e. a friendly AI would not quickly solve mortality even with human help.
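One rough way to make that intuition concrete is an Amdahl’s-law style estimate. The additive model below (total idea output is just the sum of each node’s contribution) and all of the numbers in it are my illustrative assumptions, not anything claimed above; it is a sketch of the intuition, not a result.

```python
# Toy Amdahl's-law estimate: how much does the whole network speed up
# when one node among n becomes k times more productive?
# Assumption (illustrative only): idea output is additive across nodes.

def network_speedup(n_nodes: int, k: float) -> float:
    """Overall speedup of the network when a single node gets k times
    faster and the remaining n_nodes - 1 stay the same."""
    own_share = 1.0 / n_nodes          # that node's fraction of total output
    rest = 1.0 - own_share             # everyone else's share, unchanged
    return 1.0 / (rest + own_share / k)

print(network_speedup(n_nodes=1_000_000, k=100.0))  # ~1.000001: negligible
print(network_speedup(n_nodes=100, k=100.0))        # ~1.01: still tiny
```

Of course this assumes the fast node cannot restructure the network or copy itself, which is exactly what the replies below question.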
The primary reason I think such a model is correct is that intelligence means thinking, we think in concepts, and concepts are not really nailed down: they are constantly modified through a social communication process. Atoms used to mean indivisible units; then they became divisible into little ping-pong balls; and then the model was updated into something entirely different by quantum physics. But is the quantum-physics-based theory about the same atoms that were once thought to be indivisible, or is it about a different thing now? Is modern atomic theory still about atoms? What are we even mapping here, and where does the map end and the territory begin?
So the point is that human knowledge is increased by a social communication process in which we keep throwing bleggs at each other, keep redefining what bleggs and rubes mean now, keep juggling these concepts, keep asking what you really mean by bleggs, and so on. Intelligence is this communicative ability: the ability to disassemble Joe’s concept of bleggs, understand how it differs from Jane’s concept of bleggs, and maybe assemble a new concept that describes both.
Without this communication, what would intelligence even be? What would lone intelligence be? The term is almost a contradiction in itself. What would a brain alone in a universe intelligere, i.e. understand, if nothing talked to it? Just tinker with matter somehow, without any communication whatsoever? But even if we imagine such an “idiot inventor genius”, some kind of mega-plumber on steroids rather than an intellectual or academic, it needs goals for that kind of tinkering with material stuff; for that it needs concepts, and concepts come from, and evolve through, a constant social ping-pong.
An AI would be yet another node in our network; it would participate in this process of throwing blegg-concepts around, probably far better than any human can, but it would still be just a node.
I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He has some evidence from the study of firms, which are a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, they tend to benefit from playing somewhat nicely with the other entities.
The problem is that while this is a nice argument, would we want to bet the house on it? A lot of safety engineering is not about preventing the most likely malfunctions, but the worst malfunctions. Occasional paper jams in printers are acceptable; fires are not. So even if we think this kind of softer, distributed intelligence explosion is the likely one (I do), we could be wrong about the possibility of sharp intelligence explosions, and hence it is rational to investigate them and build safeguards.
An AI would be yet another node in our network; it would participate in this process of throwing blegg-concepts around, probably far better than any human can, but it would still be just a node.
Why would an AI be a single node? I can run two programs in parallel right now on my computer, and they can talk to each other just fine. So if communication is necessary for intelligence, why couldn’t an AI be split up into many communicating sub-AIs?
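As a minimal sketch of that point (the worker/queue setup below is just an illustration, not a claim about how sub-AIs would actually be built), here are two processes on one machine exchanging messages:

```python
# Two processes running in parallel on one machine, talking to each
# other through queues: a toy stand-in for "communicating sub-AIs".
from multiprocessing import Process, Queue

def worker(name: str, inbox: Queue, outbox: Queue) -> None:
    msg = inbox.get()                      # wait for a "concept" from the peer
    outbox.put(f"{name} got: {msg!r}")     # send back a modified version

if __name__ == "__main__":
    a_to_b, b_to_a = Queue(), Queue()
    b = Process(target=worker, args=("B", a_to_b, b_to_a))
    b.start()
    a_to_b.put("blegg")                    # process A throws a concept at B
    print(b_to_a.get())                    # -> B got: 'blegg'
    b.join()
```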
Ah… so not one individual personality, but a “city” of AIs? Well, if I see it not as a “robotic superhuman” but as a “robotic super-humankind”, then it certainly becomes possible: a whole species of more efficient beings could of course outcompete a lesser species. But I was under the impression that running many beings, each advanced enough to be sentient (OK, Yudkowsky claims intelligence is possible without sentience, but how would a non-sentient being conceptualize?), would be prohibitively expensive in hardware. I mean, imagine simulating all of us, or at least a human city...
We can already run neural nets with 1 billion synapses at 1,000 Hz on a single GPU, or 10 billion synapses at 100 Hz (real time). At current rates of growth (software + hardware), that will be up to 100 billion synapses at 100 Hz per GPU in just a few years.
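As a quick back-of-the-envelope check of those figures (the ops-per-synaptic-update factor below is my assumption, not something from the comment):

```python
# Synaptic updates per second implied by the figures above.
# OPS_PER_UPDATE is an assumed multiply-accumulate cost per synapse.

def updates_per_second(synapses: float, rate_hz: float) -> float:
    return synapses * rate_hz

configs = {
    "1e9 synapses @ 1,000 Hz": updates_per_second(1e9, 1000),             # 1e12 /s
    "1e10 synapses @ 100 Hz": updates_per_second(1e10, 100),              # 1e12 /s
    "1e11 synapses @ 100 Hz (projected)": updates_per_second(1e11, 100),  # 1e13 /s
}

OPS_PER_UPDATE = 2  # illustrative assumption

for name, ups in configs.items():
    print(f"{name}: {ups:.0e} updates/s, ~{ups * OPS_PER_UPDATE:.0e} FLOP/s")
```

Both current configurations come out to about 10^12 synaptic updates per second, i.e. a few TFLOP/s under that assumption, which is in the range of a single GPU; the projected figure is roughly an order of magnitude more.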
At that point, it mainly becomes a software issue, and once AGIs become useful, the hardware base is already there to create millions of them, and soon billions.
If we could build a working AGI that required a billion dollars of hardware for world-changing results, why would Google not throw a billion dollars of hardware at it?
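To put a purely illustrative number on that (both unit figures below are my assumptions, not anything from the comment):

```python
# Illustrative only: what a $1B hardware budget could buy under
# assumed prices. Neither constant below comes from the thread.

BUDGET_USD = 1_000_000_000
COST_PER_GPU_USD = 5_000    # assumed price of one high-end GPU
GPUS_PER_AGI = 100          # assumed hardware footprint of one AGI instance

gpus = BUDGET_USD // COST_PER_GPU_USD      # 200,000 GPUs
agi_instances = gpus // GPUS_PER_AGI       # 2,000 instances
print(f"{gpus:,} GPUs -> ~{agi_instances:,} AGI instances under these assumptions")
```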
But the AGI would not be alone; it would have access to humanity.