Eliezer: As I said, there are plenty of circular definitions of intelligence, such as defining it as a “powerful optimization process” that homes in on outcomes you’ve predefined as being the product of intelligence (which is what your KnowabilityOfAI appears to do). Perhaps for your needs such a (circular) operational definition would suffice: take the set of artifacts and work backwards. That hardly seems helpful in designing any sort of workable software system, though.
Re: modeling the human brain. Modeling the human brain would indeed involve higher levels of organization. The point is that those higher levels of organization would be ones that actually exist in real brains, not the biologically implausible fantasies “AI researchers” have plucked out of thin air from a mixture of folk psychology, introspection, and wishful thinking.