You are talking about control systems.
A control system has two inputs (called its “perception” and “reference”) and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism.
What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception.
The answers to your questions are:
A “goal” is the reference input of a control system.
An “obstacle” is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference.
“Complicated” means “I don’t (yet) understand this.”
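To make those definitions concrete, here is a minimal sketch of a single control loop in Python (the gains, numbers, and names are mine, purely for illustration): the reference plays the goal, a constant disturbance plays the obstacle, and the feedback loop holds the perception near the reference regardless.

```python
# Minimal single control loop: the output acts on the environment so
# that the perception stays close to the reference.

def simulate(reference=10.0, gain=0.5, disturbance=-3.0, steps=30):
    output = 0.0        # signal sent out to the environment
    perception = 0.0    # signal coming back from the environment
    for _ in range(steps):
        error = reference - perception   # the "goal" is the reference input
        output += gain * error           # act so as to shrink the error
        # The environment combines the output with a disturbance; the
        # disturbance alone (the "obstacle") would drag the perception
        # away from the reference.
        perception = output + disturbance
    return perception

print(simulate())  # converges on 10.0 despite the -3.0 disturbance
```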
Suggestions for reading.
And a thought: “Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet’s lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely.”
-- William James, “The Principles of Psychology”
Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to an IUCS, but perhaps not.
Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.
The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they gain (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent within their own small fields of expertise. I don’t personally think we’ll find anything different, or ineffable, or more, when we finally understand intelligence, than just layers of control systems.
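For what it’s worth, the layering can be sketched in a few lines (a hedged toy of mine, with invented gains and crude assumed plant dynamics, not anything from real hardware): a higher-level loop never touches the world directly; its output simply becomes the reference of the loop below it.

```python
# Two-layer (cascaded) control: the outer loop's output is the inner
# loop's reference; only the inner loop's output acts on the world.

class Loop:
    """One proportional control loop."""
    def __init__(self, gain):
        self.gain = gain

    def step(self, reference, perception):
        return self.gain * (reference - perception)

position_loop = Loop(gain=0.2)   # decides what velocity to ask for
velocity_loop = Loop(gain=0.5)   # decides what thrust to apply

position, velocity = 0.0, 0.0
goal_position = 100.0
for _ in range(500):
    target_velocity = position_loop.step(goal_position, position)
    thrust = velocity_loop.step(target_velocity, velocity)
    velocity += 0.1 * thrust     # crude, assumed plant dynamics
    position += 0.1 * velocity

print(round(position, 1))  # settles at 100.0; neither loop "knows" the whole task
```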
Consciousness, I hope, is something more and different in kind, and maybe that’s what you were really after in the original post, but it’s a subjective beast. OTOH, if it is “mere” complex behavior we’re after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now.
I LOVE the Romeo reference, but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo too.
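To illustrate (a toy of my own, not anything from the thread): a dozen lines of breadth-first search find the detour around the wall immediately, which is James’s “the end is fixed, the path may be modified” in its crudest software form.

```python
# Romeo (R) reaches Juliet (J) around a wall (#) by breadth-first search.
from collections import deque

grid = [
    "R...#....",
    "....#....",
    "....#....",
    "....#..J.",
    ".........",   # the gap below the wall
]

def find(ch):
    for r, row in enumerate(grid):
        if ch in row:
            return (r, row.index(ch))

start, goal = find("R"), find("J")
frontier = deque([(start, 0)])
seen = {start}
while frontier:
    (r, c), steps = frontier.popleft()
    if (r, c) == goal:
        print(steps, "steps, detouring around the wall")   # 12 steps
        break
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                and grid[nr][nc] != "#" and (nr, nc) not in seen):
            seen.add((nr, nc))
            frontier.append(((nr, nc), steps + 1))
```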
I had conceived of something like the Turing test, but for intelligence, period, not just general intelligence.
I wonder whether general intelligence is a matter of the range of domains in which a control system can perform.
I also wonder whether “minds” is too limiting a criterion for the goals of FAI.
Perhaps the goal could be stated as an IUCS. However, we don’t know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don’t know how to build that either, so we build a control system whose reference point is a control system whose reference point is . . . until we get to something that we can build. Then we press start.
Maybe this is a more general formulation?
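One loose way to render that regress in code (a structural sketch only, with every name and the buildability test invented for illustration): keep wrapping the unbuildable spec in “a controller whose reference is that spec” until a stand-in check says the result is something we can build.

```python
# Toy rendering of the regress: wrap the spec in another layer of
# "controller whose reference is ..." until a stand-in buildability
# check passes, then press start.

def nesting(spec):
    return 1 + nesting(spec["reference"]) if isinstance(spec, dict) else 0

def buildable(spec):
    # Invented predicate: pretend three layers of indirection bring
    # the target within engineering reach.
    return nesting(spec) >= 3

def plan(spec, depth=0, max_depth=10):
    if buildable(spec):
        return spec                      # "then we press start"
    if depth == max_depth:
        raise RuntimeError("the regress never bottomed out")
    return plan({"controller": True, "reference": spec}, depth + 1, max_depth)

print(plan("IUCS"))  # three nested controller specs wrapping "IUCS"
```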
I don’t want to tout control systems as The Insight that will create AGI in twenty years, but if I were working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I’d start, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words “now we just have to scale it up”, if I were working on AGI I wouldn’t bother mentioning it until I had a demo of a level that would scare Eliezer.
Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds.
1. LessWrong, passim.
2. Marcus Hutter’s compression prize (the Hutter Prize).
3. AIXItl and the Gödel machine.