That’s what I know most about. I could go into much more depth on any of them.
I think Go, the board game, will likely fall to the machines. The driving engine of advances will shift somewhat from academia to industry.
Basic statistical techniques are advancing, but not nearly as fast as these more downstream applications, partly because they’re harder to put to work in industry. But in general we’ll have substantially faster algorithms to solve many probabilistic inference problems, much the same way that convex programming solvers will be faster. But really, model specification has already become the bottleneck for many problems.
I think at the tail end of 10 years we might start to see the integration of NLP-derived techniques into computer program analysis. Simple prototypes of this are on the bleeding edge in academia, so it’ll take a while. I don’t know exactly what it would look like, beyond better bug identification.
What more specific things would you like thoughts on?
I think Go, the board game, will likely fall to the machines. The driving engine of advances will shift somewhat from academia to industry.
This is a sucker bet. I don’t know if you’ve kept up to date, but AI techniques for Go-playing have advanced dramatically over the last couple of years, and they’re rapidly catching up to the best human players. They’ve already passed the 1-dan mark.
Interestingly, from my reading this is by way of general techniques rather than writing programs that are terribly specialized to Go.
They advanced quickly for a while due to a complete change in algorithm, but then we seem to have hit a plateau again. It’s still an enormous climb to world champion level. It’s not obvious that this will be achieved.
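For context, the “complete change in algorithm” here is, as far as I know, the shift to Monte Carlo methods in the mid-2000s: evaluate moves by random playouts plus an exploration bonus. The full tree-search version is more involved, but the core bandit idea can be sketched against a toy game (the moves and win rates below are made up):

```python
import math
import random

def ucb1(wins, visits, total, c=1.4):
    """Average win rate plus an exploration bonus (the UCB1 rule)."""
    if visits == 0:
        return float("inf")  # try every move at least once
    return wins / visits + c * math.sqrt(math.log(total) / visits)

def choose_move(moves, simulate, n_playouts=1000):
    """Flat Monte Carlo move selection: a UCB1 bandit over candidate moves.
    `simulate(move)` plays one random game after `move`, returning 1 (win) or 0."""
    wins = {m: 0 for m in moves}
    visits = {m: 0 for m in moves}
    for t in range(1, n_playouts + 1):
        m = max(moves, key=lambda m: ucb1(wins[m], visits[m], t))
        wins[m] += simulate(m)
        visits[m] += 1
    return max(moves, key=lambda m: visits[m])  # most-visited move wins

# Toy stand-in for a game: move "b" wins 70% of random playouts, "a" only 30%.
random.seed(0)
best = choose_move(["a", "b"],
                   lambda m: int(random.random() < (0.7 if m == "b" else 0.3)))
```

Real Go programs apply this selection rule recursively down a game tree rather than flatly over one move set, but the playout-plus-bandit core is the same.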
Right—I agree that Go computers will beat human champions.
In a sense you’re right that the techniques are general, but are they the general techniques that work specifically for Go, if you get what I’m saying? That is, would they produce similar improvements when applied to Chess or other games? I don’t know, but it’s always something to ask.
Advances in planning engines, knowledge representation and concept forming, and agent behavior would be interesting predictions to have, I think. Also any opinion you have on AGI if you care to share.
I think NLP, text mining and information extraction have essentially engulfed knowledge representation.
You can take large text corpora and extract facts (like “Obama is President of the US”) using fairly simple parsing techniques (and soon, more complex ones), then put these in your database either in semi-raw form (e.g. subject-verb-object, instead of trying to transform the verb into a particular relation) or using a small variety of simple relations. In general it seems that simple representations (which could include non-interpretable ones, like real-valued vectors) that accommodate complex data and high-powered inference are more powerful than trying to load more complexity into the data’s structure.
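The semi-raw triple idea can be sketched in a few lines, with a toy regex standing in for a real parser and a made-up two-sentence corpus:

```python
import re

# Toy fact extractor: store triples in semi-raw (subject, verb, object) form
# rather than mapping each verb onto a fixed relation vocabulary.
PATTERN = re.compile(r"(?P<subj>[A-Z]\w+) (?P<verb>is|was|wrote|founded) (?P<obj>[^.]+)\.")

def extract_triples(text):
    return [(m.group("subj"), m.group("verb"), m.group("obj"))
            for m in PATTERN.finditer(text)]

corpus = "Obama is President of the US. Tolstoy wrote War and Peace."
triples = extract_triples(corpus)
# [('Obama', 'is', 'President of the US'), ('Tolstoy', 'wrote', 'War and Peace')]
```

Note that the verb stays as-is (“is”, “wrote”) rather than being normalized to a canonical relation like `president_of` — that transformation is exactly the complexity the semi-raw form defers.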
Problems with logic-based approaches don’t have a clear solution, other than to replace logic with probabilistic inference. In the real world, logical quantifiers and set-subset relations are really, really messy. For instance, a taxonomy of dogs is true and useful from a genetic perspective, but from a functional perspective a chihuahua may be more similar to a cat than to a St. Bernard. I think instead of solving that with a profusion of logical facts in a knowledge base, it might be solved by non-human-interpretable vector-based representations produced from, say, a million YouTube videos of chihuahuas and a billion words of text on chihuahuas.
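The chihuahua point is essentially a claim about similarity in different feature spaces. With invented “functional” features, cosine similarity makes it concrete:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical functional features: [size, lap-friendliness, guarding, yappiness].
chihuahua  = [0.1, 0.9, 0.1, 0.9]
cat        = [0.2, 0.8, 0.0, 0.3]
st_bernard = [1.0, 0.1, 0.9, 0.2]

# In this (made-up) functional space, the chihuahua sits closer to the cat.
assert cosine(chihuahua, cat) > cosine(chihuahua, st_bernard)
```

A genetic feature space would flip the comparison — which is the point: the “right” taxonomy depends on which representation the task calls for, and that’s awkward to encode as logical facts.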
Google’s Knowledge Graph is a good example of this in action.
I know very little about planning and agents. Do you have any thoughts on them?
You’re still thinking in a NLP mindset :P

By knowledge representation and concept formation I meant something more general than linguistic fact storage. For example, seeing lots of instances of chairs and not just recognizing other instances of chairs – machine learning handles that – but also deriving that the function of a chair is to provide a shape that enables bipedal animals to support their bodies in a resting position. It would then be able to derive that an adequately sized flat rock could also serve as a chair, even though it matches nothing in the training set.
Or to give another example: given nothing but a large almanac of accurate planet sightings from a fixed location on the Earth, derive first the heliocentric model and then a set of differential equations governing the planets’ motion (Kepler’s laws). As the simplest (Ockham) causal model, predict a 1/r^2 attractive force to explain these laws. Then notice that an object can travel between these planets by adjusting its speed relative to the central object, the Sun. It might also notice that for the Earth, the only object it has rotational information about, it is possible for an object to fall around the Earth at such a distance that it remains at a fixed location in the sky.
The latter example isn’t science fiction, btw. It was accomplished by Pat Langley’s BACON program in the ’70s and ’80s (but sadly this area hasn’t seen much work since). I think it would be interesting to see what happens if machine learning and modern big-data and knowledge representation systems were combined with this sort of model formation and concept-mixing code.
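BACON-style discovery can be caricatured in a few lines: search over small integer exponents and keep the pair that makes period^a / radius^b nearly constant across planets. The orbital data below are approximate real values; everything else is a toy reconstruction, not BACON’s actual mechanism:

```python
# Approximate orbital data: (name, radius in AU, period in years).
planets = [("Mercury", 0.387, 0.241), ("Venus", 0.723, 0.615),
           ("Earth", 1.000, 1.000), ("Mars", 1.524, 1.881),
           ("Jupiter", 5.203, 11.862)]

def invariance(values):
    """How far a list of positive ratios is from constant (1.0 = perfectly constant)."""
    return max(values) / min(values)

# BACON-style search: find small integer exponents (a, b) such that
# period**a / radius**b is (nearly) the same number for every planet.
best = min(((a, b) for a in range(1, 4) for b in range(1, 4)),
           key=lambda ab: invariance([t ** ab[0] / r ** ab[1]
                                      for _, r, t in planets]))
# best == (2, 3): Kepler's third law, T^2 proportional to r^3
```

The real BACON worked by iteratively proposing new derived terms (products, ratios) and checking them for constancy or linearity, but the spirit — search over simple symbolic forms, keep the invariant one — is the same.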
Probabilistic inference is interesting and relevant, I think, because where it doesn’t suffer from combinatorial explosion it can make inferences that would require an inordinate number of example cases for statistical methods. Combined with concept nets, it’s possible to teach such a system with just one example per learned concept, which is very efficient. The trick, of course, is identifying those single examples.
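A cartoon of one-example-per-concept learning is nearest-prototype classification: one labeled anchor per concept, with new cases assigned to the closest anchor. The features and numbers here are invented, and real one-shot systems do far more than distance comparison:

```python
import math

# One labeled example per concept, over hypothetical
# [number of legs, is sat on, rolls] features.
prototypes = {
    "chair": [4, 1, 0],
    "table": [4, 0, 0],
    "ball":  [0, 0, 1],
}

def classify(x):
    """Assign x to the concept whose single example is nearest (Euclidean)."""
    return min(prototypes, key=lambda c: math.dist(prototypes[c], x))

# A stool (3 legs, sat on, doesn't roll) lands on "chair"
# despite never having been seen.
label = classify([3, 1, 0])
```

The hard part the text alludes to — picking a single example that actually anchors the concept, rather than an outlier — is precisely what this sketch assumes away.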
Regarding planning and agents… they already run our lives. Obviously self-driving cars will be a big thing, but I hesitate to make predictions, because it is typically what we don’t foresee that has the largest impact.
I am in the NLP mindset. I don’t personally predict much progress on the front you described. Specifically, I think this is because industrial uses mesh well with the machine learning approach. You won’t ask an app “where could I sit?” because you can figure that out yourself. You might ask it “what brand of chair is that?” though, at which point your app has to have some object recognition abilities.
So you mean agent in the sense that an autonomous taxi would be an agent, or an eBay bidding robot? I think there’s more work in economics, algorithmic game theory and operations research on those sorts of problems than in any of the fields I’ve studied closely. These fields are developing, but I don’t see them as being part of AI (since the agents are still quite dumb).
For the same reason, a program that figures out the heliocentric model mainly interests academics.
There is work on solvers that try to fit simple equations to data, but I’m not that familiar with it.
I’m not asking for sexy predictions; I’m explicitly looking for more grounded ones, stuff that wouldn’t win you much in a prediction market if you were right but which other people might not be informed about.