Legg (2008) argues that many definitions of intelligence converge on this idea. We mean to endorse this informal definition, not the formalization of intelligence that Legg attempts later in that same manuscript.
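(For reference, a sketch of the formalization under discussion, as I recall it from Legg & Hutter (2007) and Legg's 2008 thesis; the notation here is a reconstruction rather than a quotation, and the original imposes further conditions on the environment class that are omitted:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},
\]

where E is a class of computable environments with bounded total reward, K(\mu) is the Kolmogorov complexity of the environment \mu, and V^{\pi}_{\mu} is the expected total reward the policy \pi accumulates in \mu. The point relevant to the exchange below is that the measure scores an agent only by the reward it collects, so it presupposes the reinforcement-learning framing.)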
Curious: is your lack of endorsement for Legg’s formalization because you don’t think that most readers would accept it, or because you find it flawed? (I always thought that his formalization was a pretty good one, and would like to hear about serious flaws if you think that such exist.)
I don’t endorse Legg’s formalization because it is limited to reinforcement learning agents.
That’s a good reason, and you should make that explicit.
Good point.
You can substitute “utility” for “reward”, if you prefer. Reinforcement learning is a fairly general framework, except for its insistence on a scalar reward signal. If you talk to RL folk about the need for multiple reward signals, they say that sticking that information in the sensory channels is mathematically equivalent—which is kinda true.
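(One way to cash out the equivalence the RL folk appeal to; this is a reconstruction, not a quotation of anyone's argument. Given an environment that emits a reward vector rather than a scalar, fold the vector into the observation and let a fixed scalarization f supply the single reward the framework insists on:

\[
o'_t = \bigl(o_t,\; r^1_t, \dots, r^n_t\bigr), \qquad r'_t = f\bigl(r^1_t, \dots, r^n_t\bigr),
\]

with, say, f a weighted sum of the components. The agent still observes every component of the original reward vector through o'_t, so no information is lost, which is the sense in which the move is mathematically equivalent; what does have to be fixed up front is the trade-off f among the components, which is presumably where the "kinda" comes in.)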