The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’ - that, somehow, is so much harder!” (p. 14) There are some activities we think of as involving substantial thinking that we haven’t tried to automate much, presumably because they require some of the ‘not thinking’ skills as precursors: for instance, theorizing about the world, making up grand schemes, winning political struggles, and starting successful companies. If we had successfully automated the ‘without thinking’ tasks like vision and common sense, do you think these remaining kinds of thinking tasks would come easily to AI—like chess in a new domain—or be hard like the ‘without thinking’ tasks?
I think this is a very good question that should be asked more. I find it particularly important because of the example of automating research, which is probably the task I care most about.
My own best guess is that the computational work that humans are doing while they do the “thinking” tasks is probably very minimal (compared to the computation involved in perception, or to the computation currently available). However, the task of understanding which computation to do in these contexts seems quite similar to the task of understanding which computation to do in order to play a good game of chess, and automating this still seems out of reach for now. So I guess I disagree somewhat with Knuth’s characterization.
I would be really curious to get the perspectives of AI researchers involved with work in the “thinking” domains.
Math research, programming, and debugging are not being taken over by AI so far, and none of them requires any of the complicated unconscious circuitry for sensory or motor interfacing. The programming application, at least, would also have immediate and major commercial relevance. I think these activities are fairly similar to research in general, which suggests that what one would classically call the “thinking” parts remain hard to implement in AI.
They’re not yet close to being taken over by AI, but there has been research on automating all of the above. Some possibly relevant keywords: automated theorem proving, and program synthesis.
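To make “program synthesis” concrete, here is a minimal sketch of its enumerative flavor (the function name, expression grammar, and scope are invented for illustration, not taken from any particular system): given a few input/output examples, brute-force search over tiny arithmetic expressions until one fits all of them.

```python
import itertools

def synthesize(examples, max_depth=2):
    """Enumerate tiny arithmetic programs until one matches all (input, output) examples."""
    # Candidate building blocks: the input x and small constants,
    # combined with + and * up to max_depth levels of nesting.
    def exprs(depth):
        if depth == 0:
            yield ("x", lambda x: x)
            for c in (1, 2, 3):
                yield (str(c), lambda x, c=c: c)
        else:
            for (sa, fa), (sb, fb) in itertools.product(exprs(depth - 1), repeat=2):
                yield (f"({sa} + {sb})", lambda x, fa=fa, fb=fb: fa(x) + fb(x))
                yield (f"({sa} * {sb})", lambda x, fa=fa, fb=fb: fa(x) * fb(x))

    # Try shallow expressions first, deepening only as needed.
    for depth in range(max_depth + 1):
        for source, fn in exprs(depth):
            if all(fn(i) == o for i, o in examples):
                return source
    return None

# Find an expression consistent with f(x) = 2x + 1 on three examples.
print(synthesize([(0, 1), (1, 3), (2, 5)]))  # prints an expression equivalent to 2*x + 1
```

Real synthesis systems prune this search with types, specifications, or learned guidance, but the core loop — generate candidate programs, test them against the spec — is the same.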
Programming and debugging, although far from trivial, are the easy part of the problem; the hard part is determining what the program needs to do. I think the coding and debugging parts will not require AGI-level intelligence, but deciding what to do definitely needs at least human-like capacity for most non-trivial problems.
I’m not sure what you mean when you say ‘determining what the program needs to do’ - this sounds very general. Could you give an example?
Most programming is not about writing the code; it is about translating a human description of the problem into a computer description of the problem. This is also why all attempts so far to make a system so simple that “non-programmers” can program it have failed. The difficult aptitude in programming is the ability to think abstractly and systematically, and to recognize which parts of a human description of the problem need to be translated into code, and which unspoken parts also need to be translated into code.
Do you mean that each time you do a research task, deciding how to do it is like writing a program to play chess, rather than that designing a general system for research tasks is like designing a system for chess?
I think by “things that require thinking” he means logical problems in well-defined domains. Computers can solve logical puzzles much faster than humans, often through sheer brute force - from board games to scheduling to finding the shortest path.
Of course there are counterexamples like theorem proving and computer programming, though AI systems are improving and starting to match humans at some of those tasks.
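The shortest-path case mentioned above illustrates why well-defined problems yield to mechanical search. A minimal sketch (the graph and names are invented for illustration): breadth-first search simply expands states level by level until it reaches the goal, with no insight required.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search on an unweighted graph: mechanically explores
    paths in order of length, so the first path reaching the goal is shortest."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

# A small adjacency-list graph.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_path(graph, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Everything the computer needs is in the problem statement itself; contrast this with deciding *which* graph problem models your actual situation, which is the “determining what the program needs to do” part discussed above.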