I think this is a very good question that should be asked more. I find it particularly important because of the example of automating research, which is probably the task I care most about.
My own best guess is that the computational work humans perform during the “thinking” tasks is probably very small (compared to the computation involved in perception, or to the computation currently available to machines). However, figuring out which computation to do in these contexts seems quite similar to figuring out which computation to do in order to play a good game of chess, and automating that still seems out of reach for now. So I guess I disagree somewhat with Knuth’s characterization.
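To make the chess analogy concrete, here is a toy sketch (tic-tac-toe standing in for chess, and Python purely for illustration): once someone has already decided which computation to do, namely exhaustive game-tree search scored by win/draw/loss, actually running it takes only a few lines.

```python
from functools import lru_cache

# Board: a tuple of 9 cells, each 'X', 'O', or None. Rows, columns, diagonals:
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def negamax(board, player):
    """Value of `board` for `player` to move: +1 win, 0 draw, -1 loss."""
    opponent = 'O' if player == 'X' else 'X'
    if winner(board) == opponent:          # the previous move already won
        return -1
    if all(cell is not None for cell in board):
        return 0                           # board full, no winner: draw
    return max(-negamax(board[:i] + (player,) + board[i+1:], opponent)
               for i, cell in enumerate(board) if cell is None)

print(negamax((None,) * 9, 'X'))  # perfect play from the empty board: 0 (draw)
```

The hard part was arriving at that formulation in the first place, not executing it; the same asymmetry seems to hold for research tasks.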
I would be really curious to get the perspectives of AI researchers involved with work in the “thinking” domains.
Neither math research nor programming nor debugging is being taken over by AI so far, and none of these requires any of the complicated unconscious circuitry for sensory or motor interfacing. The programming application, at least, would also have immediate and major commercial relevance. I think these activities are fairly similar to research in general, which suggests that what one would classically call the “thinking” parts remain hard to implement in AI.
They’re not yet close to being taken over by AI, but there has been research on automating all of the above. Some possibly relevant keywords: automated theorem proving, and program synthesis.
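For a flavor of the program-synthesis keyword, here is a minimal, purely illustrative sketch (not any particular system’s algorithm): enumerate expressions from a tiny grammar and return the first one consistent with some input-output examples.

```python
from itertools import product

# Tiny grammar: expressions over one variable x, constants 1-3, + and *.
BINARY = {
    'add': lambda f, g: lambda x: f(x) + g(x),
    'mul': lambda f, g: lambda x: f(x) * g(x),
}

def programs(depth):
    """Yield (source, function) pairs for all expressions up to `depth`."""
    if depth == 0:
        yield 'x', lambda x: x
        for c in (1, 2, 3):
            yield str(c), (lambda c: lambda x: c)(c)
        return
    subs = list(programs(depth - 1))
    yield from subs                      # smaller programs still count
    for name, op in BINARY.items():
        for (s1, f1), (s2, f2) in product(subs, subs):
            yield f'{name}({s1}, {s2})', op(f1, f2)

def synthesize(examples, max_depth=2):
    """Return the first enumerated program consistent with every example."""
    for source, f in programs(max_depth):
        if all(f(x) == y for x, y in examples):
            return source
    return None

# "Square it and add one", specified only by input-output examples:
print(synthesize([(0, 1), (2, 5), (3, 10)]))  # e.g. 'add(1, mul(x, x))'
```

Real synthesis systems prune this search far more cleverly, but the brute-force version already shows the shape of the problem.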
Programming and debugging, although far from trivial, are the easy part of the problem. The hard part is determining what the program needs to do. I think the coding and debugging parts will not require AGI-level intelligence; however, deciding what to do definitely needs at least human-like capacity for most non-trivial problems.
I’m not sure what you mean when you say ‘determining what the program needs to do’ - this sounds very general. Could you give an example?
Most programming is not about writing the code; it is about translating a human description of the problem into a computer description of the problem. This is also why all attempts so far to make a system so simple that “non-programmers” can program with it have failed. The difficult aptitude for programming is the ability to think abstractly and systematically, and to recognize which parts of a human description of the problem need to be translated into code, and which unspoken parts also need to be translated into code.
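A small illustration of those unspoken parts (a hypothetical request, Python for concreteness): the spoken requirement below is one line, and everything else answers questions the human description never posed.

```python
# The spoken request: "compute the average rating".
def average_rating(ratings):
    # Unspoken: what is the "average" of no data? Here we define it as None.
    if not ratings:
        return None
    # Unspoken: inputs arrive as strings from a form; some are blank or junk.
    valid = []
    for r in ratings:
        try:
            valid.append(float(r))
        except (TypeError, ValueError):
            continue  # Unspoken: skip bad entries silently, or raise? We skip.
    if not valid:
        return None
    # The only line the human description actually specified:
    return sum(valid) / len(valid)

print(average_rating(["4", "5", "", "3.5", "oops"]))  # -> 4.166666666666667
```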
Do you mean that each time you do a research task, deciding how to do it is like making a program to play chess, rather than that designing a general system for research tasks is like designing a system for chess?