So, {Possible AI} > {Evolved Intelligence} > {Human Intelligence}.
What about {AI practically discoverable/inventable by humans}? This could be an even smaller set than {Human Intelligence}. If an AI in that set is of a much higher order of intelligence than {Human Intelligence}, the argument goes, it would build smarter and smarter AI. But how do we know it's likely to be of a higher order?
I guess I'd like to know more about {AI practically discoverable/inventable by humans + non-sentient computers in the current generation}. Is there a compelling reason to believe this set is quite large, or quite small?
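To keep the sets straight, here is a minimal formalization of the claims so far (the symbols $\mathcal{P}$, $\mathcal{E}$, $\mathcal{H}$, and $\mathcal{D}_0$ are my shorthand, not established notation):

$$\mathcal{H} \subset \mathcal{E} \subset \mathcal{P}, \qquad \mathcal{D}_0 \subset \mathcal{P},$$

where $\mathcal{P}$ is {Possible AI}, $\mathcal{E}$ is {Evolved Intelligence}, $\mathcal{H}$ is {Human Intelligence}, and $\mathcal{D}_0$ is {AI practically discoverable/inventable by humans + non-sentient computers in the current generation}. The open questions are how large $\mathcal{D}_0$ is, and whether it contains minds of a higher order than anything in $\mathcal{H}$.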
In particular, there is this quote:
The community and class of algorithms we’re using is fairly well defined, so we think we have a good sense of the competitive and technological landscape. There are probably something like 200—so, to be conservative, let’s say 2000—people out there with the skills and enthusiasm to be able to execute what we’re going after. But are they all tackling the exact same problems we are, and in the same way? That seems really unlikely.
Somehow the diversity that could be generated by 2,000, 20,000, or even 200,000 researchers, presumably working in project teams of a few or a dozen, seems to be much smaller than the evolutionary diversity generated by a population of 10 billion Homo sapiens. (Though it may well span a much larger "volume" of design space, only relatively few points would be represented across that volume.)
Keep in mind, successful designs will expand in mindspace as they are easy to copy, modify, and improve upon. Think mold colonies growing rapidly from a small handful of spores. Also, remember bootstrapping. It's not just the AIs that can be built by a few thousand humans (plus all the disparate fields they draw on). It's all the AIs that can be built by those AIs, and on and on, ad infinitum.
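To state the bootstrapping point precisely (a sketch, with notation carried over from above): write $\mathrm{buildable}(S)$ for the set of designs constructible by the minds in $S$. The reachable set is then the closure

$$B_0 = \mathcal{D}_0, \qquad B_{n+1} = B_n \cup \mathrm{buildable}(B_n), \qquad B_\infty = \bigcup_{n \ge 0} B_n,$$

and the argument is that $B_\infty$ can be vastly larger than $B_0$, even if $B_0$ itself is small.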
Keep in mind, successful designs will expand in mindspace as they are easy to copy, modify, and improve upon.
You are essentially defining “successful designs” as such. And what we know about evolution is strong supporting evidence for this.
What makes you think the first generation of AI will have all of those qualities? What makes you think the first gen AIs will be useful for building more AIs?
The first biological replicators on earth would be considered non-viable clunky disasters by today’s standards.
The only way we can have "Friendly AI" beyond the first generation is if such entities are part of a larger "ecosystem" and face economic, group-dynamic, and evolutionary pressures that motivate (and "motivate") them to be this way.
Perhaps the key to “Friendly AI” is going to be competitive augmentation of Human Intelligence.