On the chess thing, the reason why I went from ‘AI will kill our children’ to ‘AI will kill our parents’ shortly after I understood how AlphaZero worked was precisely because it seemed to play chess like I do.
I’m an OK chess player (1400ish), and when I’m playing I totally do the ‘if I do this and then he moves this and then ….’ thing, but not very deep. And not any deeper than I did as a beginner, and I’m told grandmasters don’t really go any deeper.
Most of the witchy ability to see good chess moves is coming from an entirely opaque intuition about what moves would be good, and what positions are good.
You can’t explain this intuition in any way that allows it to move from mind to mind, although you can sometimes in retrospect justify it, or capture bits of it in words.
You train it through doing loads of tactics puzzles and playing loads of games.
AlphaZero was the first time I’d seen an AI algorithm where the magic didn’t go away after I’d understood it.
The first time I’d looked at something and thought: “Yes, that’s it, that’s intelligence. The same thing I’m doing. We’ve solved general game playing and that’s probably most of the way there.”
Human intelligence really does, to me, look like a load of opaque neural nets combined with a rudimentary search function.
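To make that picture concrete, here is a minimal toy sketch of the shape of the idea: an opaque learned evaluation standing in for the intuition, wrapped in a very shallow search. Everything in it is illustrative — `fake_net`, the stub game, and the depth are my own stand-ins, not AlphaZero’s actual training or MCTS.

```python
import math
import random

# Sketch of "opaque intuition + rudimentary search".
# `fake_net` is a stand-in for a trained policy/value network;
# the "game" here is an abstract stub, not real chess.

def legal_moves(position):
    """Stub: a real implementation would come from a chess library/engine."""
    return ["a", "b", "c"]

def apply_move(position, move):
    """Stub: returns the successor position."""
    return position + (move,)

def fake_net(position):
    """Pretend network: (move -> prior probability, value in [-1, 1]).
    A real net's outputs are learned, not uniform or random."""
    moves = legal_moves(position)
    priors = {m: 1.0 / len(moves) for m in moves}
    value = random.uniform(-1, 1)
    return priors, value

def shallow_search(position, depth=2):
    """Tiny negamax over the net's value estimates -- the 'rudimentary
    search function' layered on top of the intuition."""
    priors, value = fake_net(position)
    if depth == 0:
        return value, None
    best_score, best_move = -math.inf, None
    for move in legal_moves(position):
        child = apply_move(position, move)
        score, _ = shallow_search(child, depth - 1)
        score = -score  # a good position for the opponent is bad for us
        # Bias the choice toward moves the "intuition" already likes.
        score += 0.1 * priors[move]
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

if __name__ == "__main__":
    _, move = shallow_search(position=(), depth=2)
    print("chosen move:", move)
```

The point of the sketch is just the division of labour: almost all the ‘chess knowledge’ lives in the inscrutable evaluation, and the explicit lookahead on top is shallow, which is the bit that felt so humanlike to me.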