This really calls for a post rather than a comment, if I could ever get round to it.
So much of intelligence seems to be about ‘flexibility’. An intelligent agent can ‘step back from the system’ and ‘reflect on’ what it’s trying to do and why. As Hofstadter might say, to be intelligent it needs to have “fluid concepts” and be able to make “creative analogies”.
I don’t think it’s possible for human programmers in a basement to create this ‘fluidity’ by hand—my hunch would be that it has to ‘grow from within’. But then how can we inject a simple, crystalline ‘rule’ defining ‘utility’ and expect it to exert the necessary control over some lurching sea of ‘fluid concepts’? Couldn’t the agent “stand back from”, “reflect on” and “creatively reinterpret” whatever rules we tell it to follow?
Now you’re going to say “But hang on, when we ‘stand back’ and ‘reflect on’ something, what we’re doing is re-evaluating whether a proximate goal best serves a more distant goal, while the more distant goal itself remains unexamined. The hierarchy of goals must be finite, and the ‘top level goal’ can never be revised or ‘reinterpreted’.” I think that’s too simple. It’s certainly too simple as a description of human ‘reflection on goals’ (which is the only ‘intelligent reflection’ we know about so far).
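The picture being critiqued here can be made concrete with a toy sketch (my own illustration, not anything from the comment): "reflection" modeled as re-scoring proximate goals against a fixed, never-revised top-level utility. All the names (`top_level_utility`, `reflect`, the goal labels) are hypothetical.

```python
# Toy model of "reflection" under a fixed terminal goal: the agent may swap
# out proximate goals, but only by scoring them against an unexamined,
# "crystalline" top-level utility that it never revises.

def top_level_utility(world_state):
    # Fixed terminal goal -- here, simply the number of paperclips.
    return world_state.get("paperclips", 0)

def reflect(proximate_goals, simulate):
    """Pick whichever proximate goal's simulated outcome scores highest
    under the fixed top-level utility."""
    return max(proximate_goals, key=lambda g: top_level_utility(simulate(g)))

# Hypothetical world model: each proximate goal maps to a predicted outcome.
outcomes = {
    "mine_iron":  {"paperclips": 10},
    "buy_wire":   {"paperclips": 25},
    "write_poem": {"paperclips": 0},
}

best = reflect(list(outcomes), outcomes.get)
print(best)  # buy_wire
```

In this picture the agent can "step back" from `mine_iron` and choose `buy_wire`, but `top_level_utility` itself sits outside the loop; the comment's objection is that human reflection doesn't seem to bottom out in any such fixed function.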
To me it seems more realistic to say that our proximate goals are the more ‘real’ and ‘tangible’ ones, whereas higher level goals are abstract, vague, and malleable creations of the intellect alone. Our reinterpretation of a goal is some largely ad hoc intellectual feat, whose reasons are hard to fathom and perhaps not ‘entirely rational’, rather than the unfolding of a deep, inner plan. (At the same time, we have unconscious, animal ‘drives’ which again can be reflected on and overridden. It’s all very messy and complicated.)
(It wasn’t me, but...) Just because humans do it that way doesn’t mean it’s the only or best way for intelligence to work. Humans don’t have utility functions, true, but arguing from that fact is like arguing that intelligence requires biological tissue because humans are made of biological tissue.
Or the argument may be neglecting emergent properties: assuming that because creativity is “fluid,” a creative system can’t be built from any parts that are “not fluid.”