The existing approaches I hear of for creating AGI sound like “let’s redo evolution”; if a lifetime of data didn’t already sound like a stupendously large amount, the amount of data used by evolution puts that to shame.
It’s also not just pre-existing data; it’s partially manufactured. (The environment changes, and some of that change is a result of evolution, within and between species.) The distinction mattered in Go—AGZ learned purely from playing against itself, while its predecessor was trained on a dataset of human games.
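The fixed-dataset vs. manufactured-data distinction can be made concrete with a toy sketch. This is hypothetical illustration only—not AlphaGo Zero’s actual training loop—and every name in it (`train_supervised`, `train_self_play`, `policy_strength`) is mine:

```python
import random

def train_supervised(human_games):
    """Learn from a fixed, pre-existing dataset: its size never grows."""
    return {"data_seen": len(human_games)}

def train_self_play(policy_strength, rounds):
    """Each round, the current policy plays against itself, and the
    resulting games become new training data (data is created)."""
    data_seen = 0
    for _ in range(rounds):
        # Toy stand-in for self-play games: outcomes near current strength.
        games = [policy_strength + random.random() for _ in range(10)]
        data_seen += len(games)       # the dataset grows as training runs
        policy_strength = max(games)  # the policy improves on its own output
    return {"data_seen": data_seen, "strength": policy_strength}

fixed = train_supervised(human_games=[0.5] * 100)
grown = train_self_play(policy_strength=0.5, rounds=20)
print(fixed["data_seen"], grown["data_seen"])  # → 100 200
```

The point of the contrast: the supervised learner’s data is capped by what already exists, while the self-play learner’s data supply—and the strength of the player generating it—keeps increasing.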
Intuitively, if a domain is complicated enough that data must be manufactured to perform well, then it is complicated enough that AI systems in that domain will be underparameterized.
And “Data is created” sounds like a control problem.
But was AGZ’s predecessor overparameterized? I don’t know. The line between selection and control isn’t clear in either the problem or the solution—Go and AGZ, respectively.
Overparameterized = you can predict your training data perfectly. Our performance on memory tests shows that humans are nowhere close to that.
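Under that definition, the over/under split is easy to exhibit with plain least squares—a minimal NumPy sketch, nothing to do with Go or AGZ specifically: a model with at least as many parameters as training points can predict the training data perfectly, while one with fewer generally cannot.

```python
import numpy as np

# Eight training points from a nonlinear target.
x = np.linspace(-1, 1, 8)
y = np.sin(3 * x)

# Overparameterized: a degree-7 polynomial has 8 coefficients for 8 points,
# so least squares interpolates the training data exactly.
over = np.polynomial.Polynomial.fit(x, y, deg=7)

# Underparameterized: a line (2 coefficients) cannot fit sin(3x) at these points.
under = np.polynomial.Polynomial.fit(x, y, deg=1)

over_err = np.max(np.abs(over(x) - y))
under_err = np.max(np.abs(under(x) - y))
print(over_err < 1e-6, under_err > 1e-3)  # → True True
```

Zero training error is exactly the “predict your training data perfectly” criterion; the line’s large residual is what underparameterization looks like.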
If AGZ is better than anyone else at Go, is it overparameterized? It isn’t labeling; it’s competing and making choices. The moves it makes might not be optimal (relative to no resource constraints), and that optimum might never be achieved—so yes, underparameterized, forever. But it’s better than the trainers—so no, it’s overparameterized: it makes better choices than the dataset it was never provided with.
A word’s been suggested before for when technology reaches the point that humans don’t add anything—I don’t remember it. (This is true for 1v1 in chess, but with teams humans become useful again. I’d guess Go would be the same, but I haven’t heard about it.)
I’d say AGZ demonstrates Super performance—when the performer surpasses prior experts (and their datasets), and becomes the new expert (and source of data).
> It’s also not just pre-existing data; it’s partially manufactured. (The environment changes, and some of that change is a result of evolution, within and between species.) The distinction mattered in Go—AGZ learned purely from playing against itself, while its predecessor was trained on a dataset of human games.
I agree with that, but how does it matter for whether AI systems will be underparameterized?