I’ve heard that in proof-reading, optimal performance is achieved when there are about 2 errors per page.
I’ve heard that when you play mouse-chasing-themed games with your cat, the maximal cat fun is achieved when there are between 1 and 2 successes for every 6 pounces.
The proof-reader’s performance may be maximized, but the quality of the output isn’t.
I would be surprised if there were fewer errors overall in the final product when it starts at 2 errors per page rather than, say, 1/4 per page.
This also cuts against the suggestion in the OP: although humans will catch more errors when there are more to begin with, that doesn’t mean there will be fewer failures overall.
As I mentioned in my other comment, if some errors are deliberately injected to keep attention at the optimal level, and then removed post-QA, the remaining organic errors are caught more efficiently. As an added benefit, you get an automated and reliable metric of how attentive the proof-reader is.
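The seeding idea sketched above is essentially defect seeding (“bebugging”): the fraction of injected errors the proof-reader catches estimates their catch rate, which in turn estimates how many organic errors slipped through. A minimal sketch, with all names and numbers hypothetical, and assuming seeded and organic errors are equally hard to spot:

```python
# Sketch of error seeding ("bebugging"): inject a known number of errors
# before QA, then use the fraction of them caught to estimate the catch
# rate and the number of organic errors still hidden. All figures here
# are hypothetical.

def estimate_remaining(seeded_total, seeded_found, organic_found):
    """Estimate how many organic errors remain after QA.

    Assumes seeded errors are as hard to spot as organic ones, so the
    catch rate on seeded errors approximates the overall catch rate.
    """
    if seeded_found == 0:
        raise ValueError("no seeded errors found; catch rate unknown")
    catch_rate = seeded_found / seeded_total        # the attentiveness metric
    organic_total_est = organic_found / catch_rate  # estimated organic errors in total
    return organic_total_est - organic_found        # estimated errors still hidden

# Example: 10 errors seeded; the proof-reader finds 8 of them plus 12
# organic errors. Catch rate 0.8 implies ~15 organic errors in total,
# so roughly 3 are estimated to remain.
print(estimate_remaining(10, 8, 12))
```

This is also why the injected errors must be removed post-QA: they exist only to hold attention and to calibrate the estimate, not to survive into the final product.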