The neurology of human brains and the architecture of modern control systems are remarkably similar: layers of feedback, adaptive modelling of the problem space, and, underneath it all, the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add even minor complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent within their own small fields of expertise. I don't personally think we'll find anything different, ineffable, or more, when we finally understand intelligence, than just layers of control systems.
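To make the "layers of control systems" picture concrete, here is a minimal toy sketch. It is purely my own illustration, not any particular system described above: the class names, gains, target value, and one-line plant model are all invented. An outer loop adaptively steers the setpoint of a dumb inner proportional loop, and even that much layering is enough to produce purposeful-looking convergence on a goal.

```python
class InnerLoop:
    """Dogged iron-filing goal seeking: step toward the current setpoint."""
    def __init__(self, gain=0.5):
        self.gain = gain
        self.setpoint = 0.0

    def step(self, measurement):
        error = self.setpoint - measurement
        return self.gain * error  # proportional control output


class OuterLoop:
    """Adaptive layer: 'models' the task by steering the inner setpoint."""
    def __init__(self, inner, target=10.0):
        self.inner = inner
        self.target = target

    def step(self, measurement):
        # Crude adaptation: hold the inner setpoint partway toward the
        # real target, so the inner loop is always chasing fresh ground.
        self.inner.setpoint = measurement + 0.5 * (self.target - measurement)
        return self.inner.step(measurement)


# Plant: a state that simply integrates the control signal.
state = 0.0
controller = OuterLoop(InnerLoop())
for _ in range(50):
    state += controller.step(state)
print(round(state, 2))  # ~10.0: dogged convergence on the outer goal
```

Nothing in either layer "knows" about the goal in any interesting sense, yet the stacked loops home in on it; stack a few more adaptive layers on top and the behavior starts to look eerily deliberate.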
Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it's "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence will be within our programming grasp any time now.
I LOVE the Romeo reference, but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.
I had conceived of something like the Turing test, but for intelligence, period, not just general intelligence.
I wonder if general intelligence is a matter of the range of domains in which a control system can perform.
I also wonder whether "minds" is too limiting a criterion for the goals of FAI.
Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don't know how to build that either, so we build a control system whose reference point is a control system whose reference point is . . . until we get to one that we can build. Then we press start.
Maybe this is a more general formulation?
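To make the regress concrete, here is a toy sketch of that descent. This is purely my own rendering, not a real proposal: the Spec and Controller classes, the complexity number, and the buildability threshold are all invented stand-ins, with IUCS kept only as a label for the unbuildable ideal. The idea is just: if a spec is too hard to build directly, recurse on "a controller whose reference point is building that spec" until something buildable falls out.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    """A controller specification. `complexity` is a made-up proxy for
    how far beyond our current ability the thing is."""
    name: str
    complexity: int

@dataclass
class Controller:
    spec: Spec
    def run(self):
        # A real controller would pursue its reference point; this stub
        # just reports what it is nominally steering toward.
        return f"controller seeking: {self.spec.name}"

BUILDABLE = 1  # assumption: only the simplest specs are buildable today

def build(spec: Spec) -> Controller:
    """Descend the regress until a directly buildable spec appears."""
    if spec.complexity <= BUILDABLE:
        return Controller(spec)  # this one we know how to build
    # Can't build it directly, so target a controller whose reference
    # point is the building of `spec` -- assumed one notch simpler.
    meta = Spec(name=f"build({spec.name})", complexity=spec.complexity - 1)
    return build(meta)

seed = build(Spec(name="IUCS", complexity=4))
print(seed.run())  # then we press start
# -> controller seeking: build(build(build(IUCS)))
```

The open question, of course, is whether each meta-level really is simpler than the one above it; the sketch just assumes it is, one notch at a time.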