If you read any narrow/weak/specific/whatever AI papers, then I’d say you do read engineering papers—that’s how I mostly think of my field, computational linguistics, anyway.
The “experiments” I’m doing at the moment are attempts to engineer a better statistical parser of English. We have some human-annotated data, and we divide it into a training section, a development section, and an evaluation section. I write my system, use the training section for learning, and evaluate my ideas on the development section. When I’m ready to publish, I produce a final score on the evaluation section.
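To make that workflow concrete, here’s a minimal Python sketch of the split-and-score setup. The corpus file, the loader, and the MyStatisticalParser class are hypothetical stand-ins, and a real parser evaluation would use labelled attachment or bracketing scores rather than exact-match accuracy, so treat this as an illustration of the protocol, not the actual system.

```python
# Minimal sketch of the train/dev/eval protocol described above.
# File names, the loader format, and MyStatisticalParser are hypothetical.
import random

def load_annotated_sentences(path):
    """Toy loader: one (sentence, gold_parse) pair per line, tab-separated."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t", 1)) for line in f if "\t" in line]

def split_corpus(examples, train_frac=0.8, dev_frac=0.1, seed=0):
    """Shuffle once, then carve off training / development / evaluation sections."""
    rng = random.Random(seed)
    examples = examples[:]
    rng.shuffle(examples)
    n_train = int(len(examples) * train_frac)
    n_dev = int(len(examples) * dev_frac)
    train = examples[:n_train]
    dev = examples[n_train:n_train + n_dev]
    evaluation = examples[n_train + n_dev:]
    return train, dev, evaluation

def accuracy(parser, section):
    """Fraction of sentences whose predicted parse matches the gold annotation exactly."""
    correct = sum(1 for sentence, gold in section if parser.parse(sentence) == gold)
    return correct / len(section) if section else 0.0

# Typical loop: train on the training section, iterate against the dev score,
# and report the evaluation score only once, when the system is final.
# train, dev, evaluation = split_corpus(load_annotated_sentences("treebank.tsv"))
# parser = MyStatisticalParser().fit(train)               # hypothetical parser class
# print("dev accuracy:", accuracy(parser, dev))           # used while developing
# print("eval accuracy:", accuracy(parser, evaluation))   # final published number
```

The point of holding out the evaluation section is that the development score gets inflated by repeated tuning, so the number you publish should come from data you touched exactly once.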
In this case, my experimental error is the extent to which the accuracy figures I produce do not correlate with the accuracy that someone really using my system will see.
Both systematic and random error abound in these “experiments”. I’d say a really common source of systematic error comes from the linguistic annotation we’re trying to replicate. We evaluate on data annotated by the same people, according to the same standards, as the data we trained on, and the scientific standards of the linguistics behind those annotations are poor. If some aspects of the annotation are suboptimal for applications of the system, that won’t be reflected in my results.