Once again I strongly suggest that we taboo the word “goals” in this type of discussion. Or at least specify and stick to a particular technical definition for “goals,” so that we can separate “goalish behaviors not explicitly specified by a utility function” from “explicit goals-as-defined.” The salient issues in a discussion of hypothetical AI behaviors can and should be discussed without imbuing the AI with these very, very human abstract qualities.
As a further example of what I mean, this is the algorithm we are fond of executing. The following symbol indicates an approximate, lossy mapping: ⇒
( “Physical human brain made of neurons” ) ⇒ ( “Conceptual messy web of biological drives and instrumental goals” ) ⇒ ( “Utility function containing discrete objects called ‘goals’ and ‘values’” )
My point is that goals are not ontologically fundamental. This algorithm/mapping is not something that happens physically. It is just how we like to simplify things in order to model ourselves and each other. We automatically extend this mapping to all intelligent agents, even when it is not appropriate.
So try this:
( “Physical computer running a program” ) ⇒ ( “Software algorithm interacting in complex ways with its environment” ) ⇒ ( “Goal directed agent” )
When the system we are talking about is Microsoft Word, most people would say that applying this algorithm/mapping is inappropriate and confusing. And yet Microsoft Word can still do things that its user and even its programmer wouldn’t necessarily expect it to do in the process of carrying out simple instructions.
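To make that concrete, here is a minimal sketch in Python (obviously not Word’s actual code; the rules and the input are invented for illustration) of a program whose behavior follows entirely from a few explicitly defined instructions, yet still does something its author wouldn’t necessarily expect:

```python
# Two innocuous, explicitly defined typo-fixing rules.
RULES = [
    ("hte", "the"),   # fix a common transposition typo
    ("adn", "and"),   # another common typo
]

def autocorrect(text, max_passes=10):
    """Apply every rule repeatedly until the text stops changing."""
    for _ in range(max_passes):
        new_text = text
        for old, new in RULES:
            new_text = new_text.replace(old, new)
        if new_text == text:   # fixed point reached: no rule changed anything
            return new_text
        text = new_text
    return text                # give up after max_passes

if __name__ == "__main__":
    # Each rule is a simple, explicit instruction, yet naive substitution
    # inside words produces output the rule author wouldn't necessarily
    # expect: "lighter" contains "hte", so it comes back as "ligther".
    print(autocorrect("hte lighter adn the stove"))
```

Nothing in that loop “wants” anything; describing it as a goal-directed agent would be the same lossy mapping we are tempted to apply to Word, and to much more complicated programs.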
And an artificial agent doesn’t desire anything that it isn’t made to desire.
In a nutshell, this is the statement I most strongly disagree with. “Desire” does not exist. To imply that intelligent agents can only do things that are explicitly defined, while creativity is implied by intelligence, is to run the algorithm/mapping defined above in reverse. It is a misapplication of a lossy abstraction.
To imply that intelligent agents can only do things that are explicitly defined, while creativity is implied by intelligence...
In the case of an artificial agent, intelligence is explicitly defined. The agent does whatever is either explicitly defined or implied by what is explicitly defined. In the case of a creative, goal-oriented and generally intelligent agent, this can be anything from doing nothing at all up to unbounded recursive self-improvement. But does general intelligence, in and of itself, evoke certain kinds of behavior? I don’t think so. You can be the smartest being around, but if there are no internal causes that prompt you to use the full capacity of your potential, then you won’t use it.
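To put that last point in code: here is a toy sketch (the Planner class, the one-dimensional world, and the objective parameter are all my own invention, purely for illustration) of an agent with plenty of search capacity that nevertheless does nothing unless something explicitly invokes that capacity:

```python
from itertools import product

class Planner:
    """Exhaustively searches short action sequences in a 1-D world."""

    ACTIONS = (-1, 0, +1)  # step left, stay, step right

    def __init__(self, position=0):
        self.position = position

    def act(self, objective=None, horizon=5):
        # No objective supplied: nothing prompts the search, however large
        # the search capacity is, so the agent does nothing at all.
        if objective is None:
            return []

        # Objective supplied: brute-force the best short plan under it.
        best_plan, best_score = [], float("-inf")
        for plan in product(self.ACTIONS, repeat=horizon):
            score = objective(self.position + sum(plan))
            if score > best_score:
                best_plan, best_score = list(plan), score
        return best_plan

if __name__ == "__main__":
    agent = Planner(position=0)
    print(agent.act())                                 # [] -- idles, no objective
    print(agent.act(objective=lambda x: -abs(x - 3)))  # a plan whose net movement reaches x = 3
```

The search capacity is there the whole time; without something explicitly supplied to invoke it, the agent just idles.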
The salient issues in a discussion of hypothetical AI behaviors can and should be discussed without imbuing the AI with these very, very human abstract qualities.
I would love to see more technical discussions of the likely behavior of hypothetical AIs.