I think there is a science of intelligence which (in my opinion) is closely related to computation, biology, and production functions (in the economic sense). The difficulty is that there is much debate as to what constitutes intelligence: there aren’t any easily definable results in the field of intelligence nor are there clear definitions.
There is also the engineering side: the aim there is to create an intelligence. The engineering is driven by a vague sense of what an AI should be, and one builds theories to construct concrete subproblems and give a framework for developing solutions.
Either way, this is very different from astrophysics, where one is attempting to, say, explain the motions of the heavenly spheres, which have a regularity, simplicity, and clarity to them that is lacking in any formulation of the AI problem.
I would say that AI researchers do formulate theories about how to solve particular engineering problems for AI systems, and then they test them out by programming them (hopefully). I suppose I count, and that’s certainly what I and my colleagues do. Most papers in my fields of interest (machine learning and speech recognition) usually include an “experiments” section. I think that when you know a bit more about the actual problems AI people are solving, you’ll find that quite a bit of progress has been achieved since the 1960s.
Re: there aren’t any easily definable results in the field of intelligence nor are there clear definitions.
There are pretty clear definitions: http://www.vetta.org/definitions-of-intelligence/
Yes, but I guess Marks’ problem was that there are too many clear definitions. Thus, it’s not clear which to use.
Interestingly, many unclear definitions don’t have this particular problem. Clear definitions tend not to allow as much wiggle room to make them mutually compatible :-)
The fact that there are so many definitions and no consensus is precisely the unclarity. Shane Legg has done us all a great favor by collecting those definitions together. With that said, his definition is certainly not the standard in the field, and many people still hold to their own separate definitions.
I think his definitions often lack an understanding of the statistical aspects of intelligence, and as such they don’t give much insight into the part of AI that I and others work on.
Re: I think there is a science of intelligence which (in my opinion) is closely related to computation, biology, and production functions (in the economic sense).
Interesting that you’re taking into account the economic angle. Is it related to Eric Baum’s ideas (e.g. “Manifesto for an evolutionary economics of intelligence”)?
Re: The difficulty is that there is much debate as to what constitutes intelligence: there aren’t any easily definable results in the field of intelligence nor are there clear definitions.
Right, so in Kuhnian terms, AI is in a pre-paradigm phase where there is no consensus on definitions or frameworks, and so normal science cannot occur. That implies to me that people should spend much more time thinking about candidate paradigms and conceptual frameworks, and less time doing technical research that is unattached to any paradigm (or attached to a candidate paradigm that is obviously flawed).
It actually comes from Peter Norvig’s definition that AI is simply good software, a comment that Robin Hanson made, and the general theme of Shane Legg’s collected definitions, which treat intelligence as a way of achieving particular goals.
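As an aside, the formal version of that theme, as I recall it from Legg and Hutter’s universal intelligence measure, scores an agent \pi by something like

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^\pi_\mu ,

i.e. the expected reward V^\pi_\mu that the agent obtains in each computable environment \mu, weighted toward simpler environments by the Kolmogorov complexity K(\mu). Take the notation as a from-memory sketch; the point is only that “achieving goals across a wide range of environments” is the thing being formalized.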
I would also emphasize that the foundations of statistics can (and probably should) be framed in terms of decision theory. (See DeGroot, “Optimal Statistical Decisions”, for what I think is the best book on the topic. As a further note, the decision-theoretic perspective is neither frequentist nor Bayesian: both of those approaches can be understood through decision theory.) The notion of an AI as being like an automated statistician captures at least the spirit of how I think about what I’m working on, and this requires fundamentally economic thinking (in terms of the tradeoffs) as well as notions of utility.
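To make the decision-theoretic framing concrete, here is the standard textbook skeleton (a sketch from memory, not DeGroot’s exact notation): an unknown parameter \theta, data X drawn from P_\theta, a decision rule \delta mapping data to actions, and a loss L(\theta, a). The basic object is the risk function

    R(\theta, \delta) = E_\theta[ L(\theta, \delta(X)) ].

A frequentist comparison of procedures works with the whole risk function, for instance preferring the minimax rule that minimizes \sup_\theta R(\theta, \delta), while a Bayesian with prior \pi minimizes the Bayes risk

    r(\pi, \delta) = \int_\Theta R(\theta, \delta) \, \pi(d\theta).

Both are answers to the same question of how to act well under a given loss, which is the sense in which the two schools sit inside decision theory, and why the “automated statistician” picture ends up being about utilities and tradeoffs.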
Surely Peter Norvig never said that!
Go to the 1:00 minute mark here
“Building the best possible programs” is what he says.
Ah, what he means is having an agent which will sort through the available programs—and quickly find one that efficiently does the specified task.
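If it helps to make “sort through the available programs” concrete in the most naive possible sense, here is a toy brute-force sketch in Python. Everything in it (the token set, the spec format, the search budget, the helper names) is invented for illustration; it is not anything Norvig describes, and real program synthesis is far cleverer about the search.

    # Enumerate candidate expressions over a tiny token set, shortest first,
    # and return the first one that matches an input/output specification.
    from itertools import count, product

    TOKENS = ["x", "1", "2", "+", "*", "(", ")"]

    def candidate_programs():
        """Yield candidate expressions as strings, in order of length."""
        for length in count(1):
            for toks in product(TOKENS, repeat=length):
                yield " ".join(toks)

    def satisfies(expr, spec):
        """True if expr maps every input in spec to the required output."""
        try:
            return all(eval(expr, {"x": x}) == y for x, y in spec)
        except Exception:
            return False  # ill-formed candidates simply fail

    def search(spec, budget=100_000):
        """Return the first enumerated expression meeting the spec, if any."""
        for i, expr in enumerate(candidate_programs()):
            if i >= budget:
                return None
            if satisfies(expr, spec):
                return expr

    if __name__ == "__main__":
        # Specification for f(x) = 2x + 1, given only as examples.
        print(search([(0, 1), (3, 7), (10, 21)]))  # prints "x + x + 1"

The “quickly” and “efficiently” parts are of course exactly what this sketch lacks; the interesting work is in searching the space of programs intelligently rather than exhaustively.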