Besides what fezziwig said, which is correct, the other issue is the fundamental capabilities of the domain you are looking at. I figured something like this was the source of the error, which is why I asked for context.
Neural networks, deep or otherwise, are basically just classifiers. The reason we’ve seen large advancements in machine learning recently is chiefly the immense volumes of data available to these classifier-learning programs. Machine learning is particularly good at taking heaps of structured or unstructured data and finding clusters, then coming up with ways to classify new data into one of those identified clusters. The more data you have, the more detail can be identified, and the better your classifiers become. Certainly you need a lot of hardware to process the mind-boggling amounts of data being pushed through these machine learning tools, but hardware is not the limiter; available data is. Giant companies like Google and Facebook are building better and better classifiers not because they have more hardware available, but because they have more data available (chiefly because we are choosing to escrow our personal lives to these companies’ servers, but that’s an aside).
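To make the "find clusters, then classify new data into them" loop concrete, here is a minimal sketch in pure Python. It is a toy illustration, not any real library's API: naive k-means discovers cluster centroids in the data, and new points are then classified by nearest centroid. All names and the toy dataset are hypothetical.

```python
# Toy sketch of cluster-then-classify. Illustrative only; no real library
# is being mimicked here.
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mean(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def kmeans(points, k, iters=20, seed=0):
    """Naive k-means: find k cluster centroids in 2-D data."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        buckets = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist(p, centroids[j]))
            buckets[i].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [mean(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return centroids

def classify(point, centroids):
    """Classify a new point into the nearest identified cluster."""
    return min(range(len(centroids)), key=lambda i: dist(point, centroids[i]))

# Two obvious blobs of "data"; more data would sharpen the centroids.
data = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.0),
        (5.0, 5.1), (4.9, 5.2), (5.1, 4.8)]
centroids = kmeans(data, k=2)
a = classify((0.05, 0.05), centroids)  # lands in the blob near the origin
b = classify((5.0, 5.0), centroids)    # lands in the other blob
```

The point of the sketch is that the classifier's quality is entirely a function of the data it was fit on, which is the sense in which data, not hardware, is the limiter.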
Inasmuch as machine learning tends to dominate current approaches to narrow AI, you could be excused for saying “the biggest limitation on AI development is availability of data.” But you mentioned safety, and AI safety around here is a codeword for general AI, and general AI is truly a software problem that has very little to do with neural networks, data availability, or hardware speeds. “But human brains are networks of neurons!” you reply. True. But the field of computer algorithms called neural networks is a total misnomer. A “neural network” is an algorithm inspired by an oversimplification of a misconception of how brains work, dating back to the 1950s/1960s.
Developing algorithms that are actually capable of performing general intelligence tasks, whether bio-inspired or de novo, is the field of artificial general intelligence. And that field is currently software-limited. We suspect we have the computational capability to run a human-level AGI today, if only we had the know-how to write one.
I already know all this (from a combination of an intro-to-ML course and reading along the same lines by Yann LeCun and Andrew Ng), and I’m still leaning towards hardware being the limiting factor (i.e. I currently don’t think your last sentence is true).