It actually comes from Peter Norvig’s definition that AI is simply good software, a comment that Robin Hanson made, and the general theme of Shane Legg’s definitions, which frame intelligence in terms of achieving particular goals.
I would also emphasize that the foundations of statistics can (and probably should) be framed in terms of decision theory (see DeGroot, “Optimal Statistical Decisions,” which I think is the best book on the topic; as a further note, the decision-theoretic perspective is neither frequentist nor Bayesian: both of those approaches can be understood through decision theory). The notion of an AI as an automated statistician captures at least the spirit of how I think about what I’m working on, and this requires fundamentally economic thinking (in terms of tradeoffs) as well as notions of utility.
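To make the decision-theoretic framing concrete, here is a minimal sketch (my own illustrative example, not from the comment): the core move is to pick the action that minimizes posterior expected loss, which is exactly where the economic tradeoffs and utilities enter. The loss values and the flag/ignore scenario are assumptions for illustration.

```python
# Sketch of a Bayes decision: choose the action minimizing expected loss.
# Scenario (assumed): decide whether to "flag" an event given a posterior
# probability p that it is real, under an asymmetric loss -- the economic
# tradeoff the comment alludes to.

def bayes_action(p, loss):
    """Return (best_action, expected_losses).

    p    : posterior probability that the event is real
    loss : dict mapping action -> (loss if event, loss if no event)
    """
    expected = {a: p * l[0] + (1 - p) * l[1] for a, l in loss.items()}
    return min(expected, key=expected.get), expected

# Missing a real event costs 10x more than a false alarm (assumed numbers).
loss = {"flag": (0.0, 1.0), "ignore": (10.0, 0.0)}

action, exp = bayes_action(0.2, loss)
# With p = 0.2: E[loss | flag] = 0.8, E[loss | ignore] = 2.0, so we flag
# even though the event is probably not real -- the tradeoff drives the choice.
```

Note that nothing here is specifically Bayesian or frequentist; the same loss-minimization structure underlies both, which is the point of the DeGroot reference.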
Surely Peter Norvig never said that!
Go to the 1:00 mark here.
“Building the best possible programs” is what he says.
Ah, what he means is having an agent which will sort through the available programs—and quickly find one that efficiently does the specified task.