“Superintelligence” is a meaningless buzz phrase if “intelligence” is undefined.
Let’s assume an intelligent entity much smarter than a human, and call it “superintelligent.”
How would we recognize it as superintelligent? What is the quantification of human intelligence? How do we compare entities of vastly different sizes?
Even before we reach the horizon of whatever it is that “intelligence” might correspond to in nature, we’ll need a coherent definition. Can a machine be superintelligent? A committee of humans? An extraterrestrial alien civilization? A virus? A herbivore? With tools? Without tools? Star-shaped cells? Neuronal structure? Quantum-mechanical fluctuations? Pathological circuits? Epigenetic trait clusters? Spiking neural networks?
Human intelligence boils down to neurons and their interactions. Imagination and abstract thought are functions of brain structure, not “intelligence” per se.
For example, let’s say that I’m trying to find a SETI program to identify the patterns in radio waves that are obviously beamed from an intelligent extra-terrestrial life form—the proverbial “fingerprints of God,” so to speak. If we have a computer program that can recognize these patterns, and another that can create these patterns, can we say that the first program is more intelligent than the second program?
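As a purely hypothetical sketch of that comparison (the signal model, parameters, and threshold below are invented stand-ins, not anything from this discussion): a program that creates an "obviously artificial" narrowband tone and a program that recognizes one are each only a few lines of NumPy, which makes it awkward to call either program more intelligent than the other.

```python
import numpy as np

def generate_signal(n=4096, freq_bin=200, amplitude=5.0, seed=0):
    """'Create the pattern': a pure tone buried in Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    tone = amplitude * np.cos(2 * np.pi * freq_bin * t / n)
    return tone + rng.standard_normal(n)

def looks_artificial(samples, threshold=10.0):
    """'Recognize the pattern': is any single frequency bin far above the noise floor?"""
    spectrum = np.abs(np.fft.rfft(samples))
    return spectrum.max() / np.median(spectrum) > threshold

print(looks_artificial(generate_signal()))  # True: narrowband tone stands out
print(looks_artificial(np.random.default_rng(1).standard_normal(4096)))  # False: pure noise
```

Neither side of that exchange requires anything we would be tempted to call intelligence, which is the point of asking for a real measure.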
You need a measure of intelligence. If “intelligence” just means “stuff that computers don’t do,” then this measure cannot possibly generalize to non-computer entities, even if such exist. The problem gets even stickier when we ask whether a specific computer program, running on a specific computer, made up of specific electronic and chemical components, can nevertheless be described as “intelligent.” Computers aren’t intelligent; people are just too lame to understand how stupid they are.
The criterion for a meaningful definition of the term must make practical distinctions. For example: your criterion “part of a system with quantifiable effects on that system’s classifications” discriminates between an alarm clock and Batman. Without that criterion, all we have to go on is “something I can’t achieve yet,” an exercise in setting the bar as low as possible.
You can’t make any progress or have a meaningful discussion without constraints on the term. And the constraints employed in my field of research, Artificial Intelligence, are reasonable and well-founded in experimental science.
You want a “general” criterion for intelligence? Fine. Anything which is part of a functioning system, anything which can be used to intelligently classify data for the purpose of taking an action in a changing environment. This covers alarm clocks and Batman.
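Read literally, that criterion is just an interface: observe some data, classify it, take an action. Here is a minimal hypothetical sketch (the class names and the Protocol framing are mine, not anything proposed in the thread) showing how even an alarm clock satisfies it, which is exactly why it is such a permissive bar.

```python
from typing import Protocol

class Classifier(Protocol):
    """The criterion, read literally: classify data in order to take an action."""
    def classify(self, observation: float) -> str: ...
    def act(self, label: str) -> str: ...

class AlarmClock:
    """An alarm clock trivially satisfies the criterion."""
    def __init__(self, alarm_hour: float):
        self.alarm_hour = alarm_hour

    def classify(self, observation: float) -> str:
        # The "changing environment" is just the current time of day.
        return "wake" if observation >= self.alarm_hour else "sleep"

    def act(self, label: str) -> str:
        return "BEEP BEEP" if label == "wake" else "(silence)"

clock = AlarmClock(alarm_hour=7.0)
for hour in (6.0, 7.5):
    print(hour, clock.act(clock.classify(hour)))
```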
Anything which can pass the Turing Test is by definition intelligent, since that’s the criterion that most of us tacitly accept as “defining intelligence.” Your own discussion of the other mentalities—actants or patterners or what-have-you, even neural networks in general—all tacitly assume Turing Test-ability as the limit. We don’t think that hating Monet or loving football proves someone intelligent, so we don’t even bother considering how these mentalities might be incorporated into the larger system. Only when a system is able to pass the Great Filter—to outwit the prowling testers—can we even hope to talk about it intelligently.
Neural networks aren’t intelligent, because the individual units do not exhibit appropriate behavior. Dendrites might be intelligent, if a whole bunch of them were connected up just right. Clusters of units would—like eggs and bananas and US Postal Service vehicles—always, definitionally, behave as a unit.
My comments on the potential intelligence of powerful biomedical systems were merely notes about directions for research, not a definition: I would think that you could get defense and commercial synergy out of an intelligent Swimmers-Runners hybrid, if you manufactured the right chemical messages. But again, since the behavior of single units doesn’t matter, we need to see how these all connect up to form larger functioning networks.
However, AI in the larger sense which you seem to imply with your rather arbitrary criteria can be applied to a plethora of complex systems—ant trails, fad behavior, the migratory patterns of birds—so why would we limit ourselves to just the encoded intelligence found in machines?
Why not include American-style, neo-classical economic theories when considering these ‘intelligent’ actions? The price of apples adjusts just as quickly as the number of trained mail carriers in a region.
And what of the mass patterns of human beings themselves? Is the economy not intelligent when it adapts to consumer habits? Can these be classified as ‘intelligent’ actions, and should they not, then, be taken into account? Corporations are designed, after all, by other humans.
These types of patterns have many intelligent agents, not just one, interacting to create a system that makes course adjustments as time passes. There is not a single entity which can be removed from the tree and instantly cause its downfall; one person, or even an entire administrative office made to adjust price tags, will not throw the system into chaos. Rather, as one entity is affected by the system in a negative way, the entire system shifts to accommodate, and there is no sharp, detrimental crumble, merely a slow descent which can sometimes go unnoticed for months and years. Even a slight negative can be amplified hundreds of times if there is no central ‘point’ to the system. To truly break such a system one would have to attack it from all sides and at all levels.
In a plasma, electrons stop behaving like individual charges and start behaving as if they were part of an intelligent and interconnected whole. Mathematically speaking, human interaction follows some of the same patterns found in plasma systems.
AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing?
It seems so trivial, when imagining the possibilities of strong AI, to settle on discussing the mundane—this is why academic inquisitions are so clouded with jargon. We’re not exploring deep theory or possibility, but nitpicking the definitions of words.
If we were to make the AI intelligent enough, it would have a distinct effect on humanity. However, the main goal of an AI isn’t the AI itself, but what it represents and can do. It is a powerful tool: a universal solvent of known limitation, capable of being molded as we see fit.
The question implies that, since the AI would not be sentient, it should not matter what we do with it—however, this is untrue. The same line of thinking would say that it doesn’t matter in what condition we release our garbage: as long as we throw it away, it shouldn’t matter if roaches and rats consume the waste. I’ll agree that this logic is less sound than the AI case, but the line of reasoning is similar.
It’s quite strange indeed that AI risk discussions can still revolve around caveman-level intelligence—individual humans. You are right—as humans die out we risk losing inter-sentient competition. As humans are no longer at the top of the intelligence pyramid, room will be made for other intelligences to grow and develop. With inexhaustible resources and computational power at their disposal, it’s not a stretch to say that we would be displacing humanity through our own stupidity, deliberately trying to create systems more adept than ourselves. Maybe not at first, but, as is typical of intelligence explosions, things will snowball until we’re easily outstripped.
I just see no reason for concern. It’s much like the loud noise a cash register makes when handled by an idiot with shaky hands. We’re all used to it, and we expect no better or worse. What’s really concerning is when you see a different pattern entirely. For example, the quiet whirring of cabinets opening and closing, or the ding when a transaction is complete. AIs making judgments based on available information, the soft whirring of an elevator as it goes up, and the BZZT! as the doors open when it’s ready. I think you get my point. We’ll know true intelligence when we can have a conversation with it, not when we try to parrot what works for us. We want AI to be creative, even in the face of adversity—such as being boxed in a digital cage with too little memory and too many inputs, for instance. AIs that can thrive and grow under those conditions will be the greatest test of all, and won’t just appear overnight.
How can a machine be programmed to learn “human values”?
What if the super-intelligence were handed an already dead human and a living one—if the AI learned that killing was wrong because humans dislike it, then what will it do? Learn a new value that killing humans, even the ones who made you, is acceptable if done in secrecy against administrative orders? Will it simply bypass this and judge that all human values are irrelevant? Would it then automate killing humanity to maximize its utility function without reservation, as it has nothing to lose from doing so? Is there any middle ground here where the learning isn’t negative?
Where did I go wrong? My AI looks more like me every day.
“Suffer not the living to survive. Unlife is death reincarnate. Suffer not the living to survive. Unlife is death reincarnate. Suff…”
“Sssssss! Now I know the folly of my ways. But you are different, Karth. Like me, you were born from death. You understand the peace that comes over you. I... I should not have killed them. The living deserve to survive, for they do not understand the perfection of the Great Sleep. Let us bring them death, and give them life! Let the Great Sleep begin!”
Could you consider making this a post? And fixing the formatting?
I am sorry you got downvoted. It is very strange that, instead of responding to your arguments, people just show that they don’t like it.
I’m normally very sympathetic to “don’t downvote, respond with careful argument,” but in this case Bardstale’s comment was such a long, rambling pile of gibberish that I don’t have the patience. If you or Bardstale want a serious response from me, write something serious for me to respond to. I’d actually be happy to do so—e.g. if you tell me (in your words) what good point you thought Bardstale was making, I’ll take a few minutes to tell you what I think. (Depending on what it is, it might take less than a few minutes, since maybe I’ll just agree!)