Replies to some points in your comment:
One could say AI is efficient cross-domain optimization, or “something that, given a mental representation of an arbitrary goal in the universe, can accomplish it on the same timescale as humans or faster”, but personally I think the “A” is not really necessary here, and we all know what intelligence is. It’s the trait that evolved in Homo sapiens and let them take over the planet in an evolutionary eyeblink. We can’t define it precisely, and the definitions I offered are only grasping at things that might be important.
If you think of intelligence as a trait of a process, you can imagine how many different possible things with utterly alien goals might acquire intelligence, and what they might use it for. Even the ones that would be a tiny bit interesting to us are a small minority.
You may not care about satisfying human values, but I want my preferences satisfied, and I hold the meta-value that we should make our best effort to satisfy the preferences of any sapient being. If we simply grab the easiest-to-find thing that displays intelligence, the odds of that happening are next to none. It would eat us alive in pursuit of a world full of something that makes paperclips look beautiful by comparison.
And the prospect of an AI designed by the “Memetic Supercivilization” frankly terrifies me. A few minutes after an AI developer pushes the last bugfix to GitHub, a script kiddie thinks “Hey, let’s put a minus in front of the utility function right here and have it TORTURE PEOPLE LULZ”, and thus the world ends. I think this is something best left to a small group of people. Trusting that the emergent structure of society, which has undergone little Darwinian selection and has a spectacular history of failures over a pretty short timescale, would, when handed such a dangerous technology, produce something good even for itself, let alone for humans, seems unreasonable.
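To make the minus-sign point concrete, here is a toy sketch (my own illustration, with made-up names and numbers, not anything from a real AI system): a single flipped sign turns an optimizer that picks the outcome a utility function rates best into one that picks the outcome it rates worst.

```python
# Toy illustration: one minus sign inverts what an optimizer pursues.
# All names and numbers here are hypothetical.

def utility(outcome: float) -> float:
    """Stand-in for a learned utility function: higher is better for us."""
    return outcome

def best_action(outcomes, u):
    """Return the action whose outcome the utility function u rates highest."""
    return max(outcomes, key=lambda action: u(outcomes[action]))

outcomes = {"help": 10.0, "ignore": 0.0, "harm": -10.0}

print(best_action(outcomes, utility))                # 'help'
print(best_action(outcomes, lambda o: -utility(o)))  # 'harm' -- one flipped sign
```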