It’s interesting that Eliezer ties intelligence so closely to action (“steering the future”). I generally think of intelligence as being inside the mind, with behaviors & outcomes serving as excellent cues to an individual’s intelligence (or unintelligence), but not as part of the definition of intelligence. Would Deep Blue no longer be intelligent at chess if it didn’t have a human there to move the pieces on the board, or if it didn’t signal the next move in a way that was readily intelligible to humans? Is the AI-in-a-box not intelligent until it escapes the box?
Does an intelligent system have to have its own preferences? Or is it enough if it can find the means to the goals (with high optimization power, across domains), wherever the goals come from? Suppose that a machine was set up so that a “user” could spend a bit of time with it, and the machine would figure out enough about the user’s goals, and about the rest of the world, to inform the user about a course of action that would be near-optimal according to the user’s goals. I’d say it’s an intelligent machine, but it’s not steering the future toward any particular target in outcome space. You could call it intelligence as problem-solving.
There is only action, or interaction to be precise. It doesn’t matter whether we experience the intelligence or not, of course, just that it can be experienced.
Regarding the second paragraph:
Sure, it could still be intelligent. It’s just more intelligent if it’s less dependent. The definition includes this since more cross-domain ⇒ less dependence.