Where it came up with important improvements that Lenat wouldn't have thought of by himself was in the Traveller game: a simple formal puzzle, fully captured by a set of rules that were coded into the program. That makes the result a success in machine learning, but not in self-improvement.
...And applied these improvements to the subsequent modified set of rules. “That was machine learning, not self-improvement” sounds like a fully general counter-argument, especially considering your skepticism toward the very idea of self-improvement. Perhaps you can clarify the distinction?
Consider the requirement that a program have an intuitive user interface. We have nothing remotely approaching the ability to formally specify this, nor could an AI ever arrive at such a specification by pure introspection, because it depends on entities that are not part of the AI.
An AI is allowed to learn from its environment, no one’s claiming it will simply meditate on the nature of being and then take over the universe. That said, this example has nothing to do with UFAI. A paperclip maximizer has no need for an intuitive user interface.
And if [science] ever does manage to accomplish [a formal specification of human psychology], why then, that would be the key to enabling the development of provably Friendly AI.
Indeed! Sadly, such a specification is not required to interact with and modify one’s environment. Humans were killing chimpanzees with stone tools long before they even possessed the concept of “psychology”.
I’ll call it self-improvement when a substantial, nontrivial body of code is automatically developed that is applicable to domains other than game-playing, as opposed to playing a slightly different version of the same game. (Note that it was substantially the same strategy that won the second time as the first time, despite the rules changes.)
“A paperclip maximizer has no need for an intuitive user interface.”
True if you’re talking about something like Galactus that begins the story already possessing the ability to eat planets. However, UFAI believers often talk about paperclip maximizers being able to get past firewalls etc. by verbal persuasion of human operators. That certainly isn’t going to happen without a comprehensive theory of human psychology.