I’m guessing Punoxysm’s pointing to the fact that the algorithms used for contemporary machine learning are pretty simple; few of them involve anything more complicated than repeated matrix multiplication at their core, although a lot of code can go into generating, filtering, and permuting their inputs.
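To make that concrete, here's a minimal sketch (my own illustration, not anything from the thread): the forward pass of a small neural network really is just repeated matrix multiplication with an elementwise nonlinearity in between.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector
W1 = rng.standard_normal((8, 4))  # first-layer weights
W2 = rng.standard_normal((2, 8))  # second-layer weights

h = np.maximum(0, W1 @ x)  # hidden layer: one matmul plus a ReLU
y = W2 @ h                 # output layer: another matmul
print(y.shape)             # a 2-dimensional output vector
```

The point isn't that this toy model does anything useful, just that the core computation is a couple of lines; the bulk of a real system's code goes into the data pipeline around it.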
I’m not sure that necessarily implies a lack of sophistication or potential, though. There’s a tendency to look at the human mind’s outputs and conclude that its architecture must involve comparable specialization and variety, but I suspect that’s a confusion of levels; the world’s awash in locally simple math with complex consequences. Not that I think an artificial neural network, say, is a particularly close representation of natural neurology; it pretty clearly isn’t.
I agree with you on both counts, and in particular that most human cognition is simpler than it appears. But some of it isn’t, and that’s probably the really critical part when we talk about strong AI.
For instance, I think that a computer could write a “Turing Novel” that would be indistinguishable from some human-made fiction with just a little bit of human editing, and that would still leave us quite far from FOOMable AI (I don’t mean this could happen today, but say in 10 years).
What do you mean by the term “mechanical”?