If my job consists of 20 different tasks, and for each of them there is a separate narrow AI able to outperform me in them, combining them to automate me should not be that difficult.
I am afraid I cannot agree. For one, this would require a 21st AI, the “managing AI”, that does the combining. Moreover, the data exchange between these narrow AIs may be slower and/or lossier (especially considering that many of the strong domain-specific AIs don’t really have extractable internal data of any use).
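The “managing AI” idea can be sketched as a simple pipeline. Everything below (the module names, the three-task job, the composition scheme) is illustrative, not something from this thread; the point it shows is that only each module’s end result crosses the interface:

```python
# Illustrative sketch of a "managing AI" that chains narrow task modules.
# Each module exposes only its end result, so the orchestrator can pass
# results along but cannot inspect how they were produced.
from typing import Any, Callable

def make_pipeline(tasks: list[Callable[[Any], Any]]) -> Callable[[Any], Any]:
    """Compose narrow task modules into one workflow."""
    def run(data: Any) -> Any:
        for task in tasks:
            data = task(data)  # only the output crosses the interface
        return data
    return run

# Hypothetical narrow modules for a 3-task job:
collect   = lambda topic: f"raw data on {topic}"
analyze   = lambda raw: f"analysis of [{raw}]"
summarize = lambda result: f"report: {result}"

job = make_pipeline([collect, analyze, summarize])
print(job("vowel harmony"))
# → report: analysis of [raw data on vowel harmony]
```

The sketch works only because each module’s output happens to be a valid input for the next one; the debate above is precisely about whether that handoff is as easy as this toy example makes it look.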
Extractable internal data is only needed during troubleshooting. During normal operation, only the task result is needed.
As for the time/process-flow management, I already consider it a separate task—and probably the one that would benefit the most drastically by being automated, at least in my case.
Well, that’s not quite true. Let’s return to the initial example: you need to write a linguistics paper. For this, you need at least two things: to perform the linguistic analysis of some data and to actually put it in words. Yet the latter needs the internal structure of the former, not just the end result (which is all that most currently practical applications of a linguistic-analysis machine would need). The logic behind trees, for instance, not just a tree-parsed syntactic corpus. A neural network (an RNN or something similar) making better and quicker tree-parsed syntactic corpora than me would just shrug (metaphorically) if asked for its tree-making procedure. I am near-certain other sciences would show the same pattern for their papers.
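The results-versus-procedure gap can be made concrete with a toy stand-in for such a parser (the class, the fake bracketing, and the method names are all hypothetical, invented for illustration):

```python
# Illustrative sketch: a black-box parser exposes its *results* but not its
# *procedure*. The fake bracketing below stands in for a learned model's output.
class BlackBoxParser:
    """Stands in for a trained neural parser: you get trees, not the theory."""

    def parse(self, sentence: str) -> tuple:
        # Pretend this is a learned model; it emits a bracketed tree.
        words = sentence.split()
        return ("S", ("NP", words[0]), ("VP", *words[1:]))

    def explain_procedure(self) -> str:
        # There is no articulable tree-making logic to extract from the weights.
        raise NotImplementedError("no extractable tree-making procedure")

parser = BlackBoxParser()
print(parser.parse("dogs bark"))  # → ('S', ('NP', 'dogs'), ('VP', 'bark'))
```

A paper-writing module that needs the *logic* behind the trees, not the trees themselves, has nothing to call here.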
The managing AI would also have to handle information flow between the other AIs manually, and more generally, something that is kinda “automatic” for human minds (though with some important exceptions, which lead to the whole idea of mental modules à la Fodor).
Creating an AI that does linguistic analysis of a given dataset better than me is easier than creating an AI that is a better linguist than me, because the latter additionally requires tasks such as writing academic papers.
If the AI is not better than you at the task “write an academic paper”, it is not at the level specified in the question.
If a task requires outputting both the end result and the analysis used to reach it, then both should be output. At least that is how I understand “better at every task”.
Moreover, even if my understanding is ultimately not what the survey-makers had in mind, the responding researchers having the same understanding as me would be enough to get the results in the OP.
I would say that, in an ideal world, the relevant skill/task is “given the analysis already at hand, write a paper that conveys it well” (and it is alarming that this skill has become much more valuable than the analysis itself, so that people get credit for others’ analyses even when they clearly state that they are merely retelling them). And I fully believe that both the task of scientific analysis (outputting the results of the analysis, not its procedure, because that is what’s needed for non-meta purposes!) and the task outlined above will be achieved earlier than an AI that can actually combine them to write a paper from scratch. AND that each new simple task added to the chain toward the full occupation pushes their combination even further away, even after the simple task itself is achieved.
Going any further would require tabooing “task”.
I agree your reading explains the differences in responses given in the survey.
Unfortunately, it is quite difficult to taboo a term when discussing how (mis)interpretation of said term influenced a survey.