The term «Artificial Intelligence» refers to a vastly greater space of possibilities than does the term «Homo sapiens». When we talk about «AIs» we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes.
Note: The point here is to make clear that AI is very scary, not to shame or “counter” policymakers who anthropomorphize AGI. People look at today’s AI and see “tool”, not “alien mind”, and that is probably the biggest part of the problem, since ML researchers do it too. ML researchers STILL do it, in spite of everything that’s been happening lately.
“Any two AI designs might be less similar to one another than you are to a petunia.”
—Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk”, c. 2006
for policymakers