“Programming is the art of figuring out what you want so precisely that you can tell even a machine how to do it.”
-- Bill Venables
That seems like the extreme case of “you don’t really understand something until you can explain it to somebody else”, which I’m sure somebody other than me must have said a long time ago.
Yep.
“Epigrams on Programming”, by Alan J. Perlis; ACM SIGPLAN Notices, September 1982.
Of course, machine learning algorithms render this obsolete. You don’t have to understand something to program it, just have a vague understanding of how that understanding might come about.
Arguably, that’s still understanding. ‘Now I know that natural language parsing is in this family of parametric functions which my ML algorithm can handle, with the coefficients given by minimizing the divergence from a bazillion-word corpus... etc.’
If that could work, that would be equivalent to having a Level 3 understanding of how to regenerate the required knowledge—hardly a shortcut!
No, you have to have a certain understanding of how that understanding might come about.
Yes! I’m happy that this clicks for at least one person.
The software industry is currently held back by a conception of programming-as-manual-labor, consisting of semi-mechanically turning a specification document into executable code. In fact it’s much closer to “the art of improving your understanding of some business domain by expressing the details of that domain in a formal notation”. The resulting program isn’t quite a by-product of that activity—it’s important, though not nearly as important as distilling the domain understanding.
> Programming is the art of figuring out what you want so precisely that you can tell even a machine how to do it.
Yes, I agree. The real test of AI is not the automation of “formal specification → working code”—if the client could formalize it to that level, they could write the code themselves. Rather, the real test is whether an AI could talk to an extroverted MBA, figure out what they want, and then produce the working code. But so far, only human programmers can do that.
And by the same token, we’ll know we’ve nailed AI not when we have written a program that can have that conversation… but when we have written down an account of how we are able to have that conversation, to such a level of detail that there’s nothing left to explain.
Writing a program which solves the Towers of Hanoi is not too hard. Proving, given a formalization of the ToH, various properties of a program that solves it, isn’t too hard. But looking at a bunch of wooden disks slotted on pegs and coming up with an interpretation of that situation which corresponds to the abstract scheme we know as “Towers of Hanoi”… That’s where the fun is.
One can’t proceed from the informal to the formal by formal means. Yet.
(Apologies to Alan Perlis, etc.)
While that’s basically true, a significant part of any large program consists of dealing with “accidental complexity” that isn’t really part of the “business logic”. Of course in many cases that only makes the programming even less mechanical.
Yes, and explaining it to a computer (i.e. writing working code) is the hardest version of this test, because it’s the closest thing to a blank slate—you can’t rely on anything being “understood” the way you can with a person, where you can just start from the NePOCU (nearest point of common understanding; learn to live with the acronym).