It would be nice for at least one futurist who shows a graph of GDP to describe some of the many, many difficulties in comparing GDP across years, and to talk about the distribution of wealth. The power-law distribution of wealth means that population growth without a shift in the wealth distribution can look like an exponential increase in wealth, while actually the wealth of all but the very wealthy must decrease to preserve the same distribution. Arguably, this has happened repeatedly in American history.
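To make the first half of that claim concrete, here is a minimal simulation under the purely illustrative assumptions that individual wealth follows a fixed Pareto distribution and that population grows a few percent a year; it is a sketch of the arithmetic, not a model of any real economy. Total wealth tracks population, so exponential population growth alone produces an exponential-looking aggregate curve even though the distribution, and the typical person's wealth, never changes.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 1.5        # assumed Pareto tail exponent; purely illustrative
    population = 1_000
    growth = 1.03      # assumed 3% annual population growth

    for year in range(25):
        # Every individual's wealth is drawn from the same fixed distribution,
        # so the shape of the distribution never shifts from year to year.
        wealth = rng.pareto(alpha, population) + 1.0
        print(f"year {year:2d}  pop {population:7d}  "
              f"total {wealth.sum():12.0f}  median {np.median(wealth):.2f}")
        population = int(population * growth)

The total column grows with the head count while the median stays flat, which is the point: an aggregate GDP curve by itself says nothing about what is happening below the top of the distribution.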
I was very glad Nick mentioned that genetic algorithms are just another kind of hill-climbing, and have no mystical power. I suspect GA is inferior to hill-climbing with multiple random starts in most domains, though I’m ashamed to admit I haven’t tested this in any way. GA is interesting not so much as an algorithm, but for how it can be used to classify and give insight into search problems. Problems where GA works better than hill-climbing are (my intuition) probably rare, yet they constitute a large proportion of the difficult search problems we find solved by biology.
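For what it’s worth, here is a minimal sketch of how one could actually run that comparison. The Rastrigin test function, step sizes, population size, and evaluation budget are all arbitrary choices of mine, not anything from the post, and a serious test would need many domains and many repetitions.

    import numpy as np

    rng = np.random.default_rng(0)

    def rastrigin(x):
        # Standard multimodal test function (many local optima); lower is better.
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    DIM, LO, HI = 5, -5.12, 5.12

    def hill_climb(budget, step=0.1):
        # Random-restart hill-climbing: greedy local search from many random starts.
        best, evals = np.inf, 0
        while evals < budget:
            x = rng.uniform(LO, HI, DIM)
            fx = rastrigin(x); evals += 1
            stuck = 0
            while stuck < 50 and evals < budget:
                cand = np.clip(x + rng.normal(0, step, DIM), LO, HI)
                fc = rastrigin(cand); evals += 1
                if fc < fx:
                    x, fx, stuck = cand, fc, 0
                else:
                    stuck += 1
            best = min(best, fx)
        return best

    def genetic_algorithm(budget, pop_size=40, mut=0.1):
        # Deliberately simple GA: tournament selection, uniform crossover,
        # Gaussian mutation.
        pop = rng.uniform(LO, HI, (pop_size, DIM))
        fit = np.array([rastrigin(p) for p in pop]); evals = pop_size
        while evals < budget:
            children = []
            for _ in range(pop_size):
                i, j = rng.integers(pop_size, size=2)
                a = pop[i] if fit[i] < fit[j] else pop[j]
                i, j = rng.integers(pop_size, size=2)
                b = pop[i] if fit[i] < fit[j] else pop[j]
                mask = rng.random(DIM) < 0.5
                child = np.where(mask, a, b) + rng.normal(0, mut, DIM)
                children.append(np.clip(child, LO, HI))
            pop = np.array(children)
            fit = np.array([rastrigin(p) for p in pop]); evals += pop_size
        return fit.min()

    print("hill-climbing:     ", hill_climb(20_000))
    print("genetic algorithm: ", genetic_algorithm(20_000))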
His description of conditionalization, as “setting the new probability of those worlds that are inconsistent with the information received to zero” followed by renormalization, is incorrect in two ways. Conditionalization recomputes the probability of every state, and never sets any probabilities to zero. This latter point is a common enough error that it’s distressing to see it here.
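A tiny worked example of the point, with made-up numbers: conditionalization re-weights every world by the likelihood of the observed evidence and then renormalizes, so every probability is recomputed, and no world is driven to zero here because none of the assumed likelihoods is exactly zero.

    # Toy three-world example; the priors and likelihoods are invented for illustration.
    prior      = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
    likelihood = {"w1": 0.9, "w2": 0.4, "w3": 0.1}   # P(evidence | world), assumed

    unnormalized = {w: prior[w] * likelihood[w] for w in prior}
    total = sum(unnormalized.values())
    posterior = {w: p / total for w, p in unnormalized.items()}
    print(posterior)   # every probability has changed; none is zero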
Showing that a Bayesian agent is impossible to make would be very involved, and not worthwhile. It’s more important to argue that a Bayesian agent would usually lose to dumber, faster agents, because the trade-off between speed and correctness is essential when thinking about super-intelligences. Whether the most-successful “super-intelligences” could in fact be intelligent by our definitions is still an important open question. If fast and stupid wins the race in the long run, preserving human values will be difficult.
What happened in the late 80s was not that neural nets and GAs performed better than GOFAI; what happened was an argument about which activities represented “intelligence”, which the reactive behavior / physical robot / statistical learning people won. Statistics and machine learning are still poor at the problems that GOFAI does well on.
“AI” is not a viable field anymore; anyone getting a degree in “artificial intelligence” would find themselves unemployable today. Its territory has been taken over by statistics and “machine learning”. I think we do people a disservice by continuing to talk about machine intelligence using only the term “artificial intelligence”, because it misdirects them into the backwaters of research and development.
I remember there was a paper co-authored by one of the inventors of genetic algorithms. They tried to come up with a toy problem that would show where genetic algorithms definitely beat hill-climbing. The problem they came up with was extremely contrived, and even then, a slight modification that made hill-climbing a little less greedy let it do just as well as or better than the GA.
“Statistics and machine learning are still poor at the problems that GOFAI does well on.”
We are just starting to see ML successfully applied to search problems. There was a paper on deep neural networks that predicted the moves of Go experts 45% of the time, and another found that deep learning could significantly narrow the search space for automatically finding mathematical identities. Reinforcement learning, which is just heuristic search but very general, is becoming increasingly popular.
“I suspect GA is inferior to hill-climbing with multiple random starts in most domains”
Simulated annealing is another optimizer in the same family, with the interesting property that it sometimes accepts moves that make things worse, which lets it climb out of shallow local optima.
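A minimal sketch of that mechanism, with a toy objective and cooling schedule chosen arbitrarily by me: worse moves are accepted with probability exp(-delta / T), and T shrinks over time, so the search can escape shallow local minima early on and settles down later.

    import math, random

    random.seed(0)

    def f(x):
        # Toy 1-D objective with many local minima; lower is better.
        return x * x + 10 * math.sin(5 * x)

    x = random.uniform(-10, 10)
    fx = f(x)
    T = 5.0                       # starting temperature (arbitrary)
    for step in range(20_000):
        cand = x + random.gauss(0, 0.5)
        fc = f(cand)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools.
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
        T = max(1e-3, T * 0.9995)

    print(x, fx)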
As for standard hill-climbing with multiple starts, it fails in the presence of a large number of local optima. If your error landscape is lots of small hills, each restart will get you to the top of the nearest small hill, but you may never reach that large range in the corner of your search space.
In any case, most domains have their own characteristics or peculiarities that make certain search algorithms perform well and others badly. Often enough, domain-specific tweaks can improve things greatly compared to the general case...