Evolutionary Argument For Human-Level AI

An evolutionary argument for human-level AI is an argument that uses the fact that evolution produced human-level intelligence to argue for the feasibility of human-level AI [1]. Evolution is an extremely slow, random, and erratic process, sometimes compared to a drunkard's walk [2] or to a "fickle, and tightly shackled tinkerer" [3]. It nevertheless produced human intelligence in this manner, so intelligent humans working from a plan should be able to produce it much faster. Shulman and Bostrom [4] formalize the steps of the argument that human-level AI is "non-hard" roughly as follows:

  1. Evolution produced human intelligence.
  2. If evolution produced human intelligence, then engineering human-level AI is non-hard.
  3. If engineering human-level AI is non-hard, then we will probably achieve it before long.
  4. Therefore, we will probably achieve human-level AI before long.

There are variations of this argument; some argue for the feasibility of human-level AI specifically through the use of evolutionary algorithms (a minimal sketch of such an algorithm appears below). These arguments estimate the total amount of computational power needed to simulate the entire evolution of human-level intelligence and argue that this level of computational power is well within reach. Shulman and Bostrom [4] counter that it would in fact take more than a century of Moore's Law progress for available hardware to match the computational power evolution expended in producing intelligence. They also carefully analyze how observation selection effects bear on the evolutionary argument for human-level AI.
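
For readers unfamiliar with the method these variations invoke, here is a minimal sketch of an evolutionary algorithm, maximizing a toy bit-counting fitness function. The genome encoding, population size, and mutation rate are illustrative placeholders with no connection to the estimates in the cited papers.

```python
import random

def fitness(genome):
    """Toy objective: count of 1-bits in the genome."""
    return sum(genome)

def evolve(pop_size=100, genome_len=32, mutation_rate=0.01, generations=200):
    # Start from a random population of bit-string genomes.
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Variation: single-point crossover plus random bit-flip mutation.
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best fitness: {fitness(best)} / 32")
```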
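
To make the Moore's Law extrapolation concrete, the following back-of-the-envelope sketch computes how many years of hardware doublings would pass before a single year of computation could cover a given total operation count. The 10^31-10^44 FLOPs range is of the magnitude discussed in the literature for re-running the evolution of nervous systems; the present-day hardware baseline and the doubling period are hypothetical assumptions chosen purely for illustration.

```python
import math

# All constants below are illustrative assumptions, not figures from the paper.
EVOLUTION_FLOPS = (1e31, 1e44)   # rough range of total operations to re-run
                                 # the evolution of nervous systems
HARDWARE_TODAY = 1e18            # hypothetical FLOP/s available to a project now
DOUBLING_TIME_YEARS = 1.5        # classic Moore's Law doubling period
SECONDS_PER_YEAR = 3.15e7

def years_of_moores_law(total_flops):
    """Years of doublings until one year of compute covers total_flops."""
    required_rate = total_flops / SECONDS_PER_YEAR          # FLOP/s needed
    doublings = math.log2(required_rate / HARDWARE_TODAY)   # halvings of the gap
    return max(0.0, doublings * DOUBLING_TIME_YEARS)

for flops in EVOLUTION_FLOPS:
    print(f"{flops:.0e} total FLOPs -> ~{years_of_moores_law(flops):.0f} years")
```

With these placeholder constants the upper end of the range already requires close to a century of doublings, and a slower doubling time or a smaller hardware baseline pushes it well past one; this is the shape of Shulman and Bostrom's point.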

References

  1. CHALMERS, David. (2010) "The Singularity: A Philosophical Analysis". Journal of Consciousness Studies, 17 (9-10), pp. 7-65.

  2. MLODINOW, Leonard. (2008) "The Drunkard's Walk: How Randomness Rules Our Lives". New York: Pantheon Books.

  3. POWELL, Russell & BUCHANAN, Allen. (2011) "Breaking Evolution's Chains: The Promise of Enhancement by Design". In: SAVULESCU, Julian & MEULEN, Ruud ter (eds.) "Enhancing Human Capacities". Wiley-Blackwell.

  4. SHULMAN, Carl & BOSTROM, Nick. (2012) "How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects". Journal of Consciousness Studies, 19 (7-8), pp. 103-130. Available at: http://www.nickbostrom.com/aievolution.pdf
