Could you clarify what you mean by “arithmetic coding operating over their output”?
The point of having teams work independently on the same project is that they’re unlikely to make exactly the same mistakes. Publishers do this for proofreading: have two proofreaders return error-sets A and B, and estimate the number of uncaught errors as a function of |A\B|, |B\A| and |A∩B|. If A=B, that would be strong evidence that there are no errors left. (But depending on priors, it might not bring P(no errors left) close to 1.) If two proofreaders worked on different parts of the book, you couldn’t use the same technique.
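A rough sketch of that estimate, reading "a function of |A\B|, |B\A| and |A∩B|" as the standard capture-recapture (Lincoln-Petersen) formula; that reading, the function name, and the numbers below are illustrative assumptions, not part of the original comment:

```python
# Capture-recapture (Lincoln-Petersen) estimate of uncaught errors,
# assuming the two proofreaders find errors independently of each other.

def estimate_uncaught(found_a: set, found_b: set) -> float:
    """Estimate the number of errors neither proofreader caught."""
    both = len(found_a & found_b)          # |A ∩ B|
    only_a = len(found_a - found_b)        # |A \ B|
    only_b = len(found_b - found_a)        # |B \ A|
    if both == 0:
        raise ValueError("No overlap: estimate undefined (likely many errors remain).")
    # Lincoln-Petersen: total errors ≈ |A| * |B| / |A ∩ B|
    total_est = (only_a + both) * (only_b + both) / both
    caught = only_a + only_b + both        # |A ∪ B|
    return total_est - caught              # estimated errors still uncaught

# Illustrative numbers: A catches 50 errors, B catches 40, 35 in common.
a = set(range(50))                 # pretend error IDs
b = set(range(15, 55))             # overlaps a on 15..49 (35 shared errors)
print(estimate_uncaught(a, b))     # ≈ 50*40/35 - 55 ≈ 2.1 uncaught
```

Note that if A = B the estimate comes out to zero uncaught errors, matching the "strong evidence that there are no errors left" case above.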
Unlikely, but not independent. “Are N average software versions better than 1 good version?”, Hatton 1997:
The concept of using N parallel versions accompanied by some kind of voting system is a long-established one in high-integrity engineering. The independence of the channels produces a system which is far more reliable than one channel could be. In recent years, the concept has also been applied to systems containing software using diversity of design. In such systems, it is attractive to assume that software systems, like their hardware counterparts, also fail independently.
However, in a widely-quoted experiment, [1], [2] showed that this assumption is incorrect, and that programmers tended to commit certain classes of mistake dependently. It can then be argued that the benefit of having N independently developed software channels loses at least some of its appeal as the dependence of certain classes of error means that N channels are less immune to failure than N equivalent independent channels, as occurs typically in hardware implementations.
The above result then brings into question whether it is more cost-effective to develop one exceptionally good software channel or N less good channels. This is particularly relevant to aerospace systems, with the Boeing 777 adopting the former policy and the Airbus adopting, at least partially, the latter.
This paper attempts to resolve this issue by studying existing systems data and concludes that the evidence so far suggests that the N-version approach is significantly superior and its relative superiority is likely to increase in future.
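To put a number on the abstract's claim, here is a quick simulation sketch (not from Hatton's paper; the failure rates and the common-mode model are invented for illustration) of how a shared blind spot erodes a 2-out-of-3 majority vote:

```python
import random

def majority_fails(p_indep: float, p_common: float, n_trials: int = 200_000) -> float:
    """Estimate P(a 2-of-3 majority vote is wrong) when each version fails
    independently with probability p_indep, plus a common-mode failure that
    hits all three with probability p_common (a crude model of the
    'dependent mistakes' described above)."""
    failures = 0
    for _ in range(n_trials):
        if random.random() < p_common:           # shared blind spot: all versions wrong
            failures += 1
            continue
        wrong = sum(random.random() < p_indep for _ in range(3))
        if wrong >= 2:                            # wrong versions outvote the correct one
            failures += 1
    return failures / n_trials

p = 0.01
print(majority_fails(p, 0.0))      # independent: ≈ 3*p^2 ≈ 3e-4
print(majority_fails(p, 0.005))    # correlated: dominated by the 0.005 common mode
```

With fully independent channels the voted system is roughly 30× more reliable than a single channel; even a small common-mode term wipes most of that advantage out.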
Could the arithmetic coding make checks like this unnecessary?
No, it would just be a more efficient and error-resistant way to do the checks, with overlapping sections of the work rather than a straight duplication of effort. Arithmetic coding has a Wikipedia article; error-correcting output coding doesn’t seem to, but it’s closer to the actual implementation a team of teams could use.
(edited because the link to ECOC disappeared)
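For concreteness, a toy sketch of what "overlapping sections of the work" could mean in practice, assuming a simple pair-per-section assignment rather than whatever ECOC construction the vanished link described; all names and numbers are illustrative:

```python
from itertools import combinations

# Toy sketch of overlapping coverage: every section is assigned to exactly
# two teams, so any single team's mistake on a section shows up as a
# disagreement, without full N-fold duplication of the whole work.
# This only illustrates the overlap idea, not the ECOC scheme alluded to.

def assign_sections(n_sections: int, teams: list[str]) -> dict[int, tuple[str, str]]:
    pairs = list(combinations(teams, 2))          # all 2-team subsets
    return {s: pairs[s % len(pairs)] for s in range(n_sections)}

def flag_disagreements(outputs: dict[tuple[str, int], str],
                       assignment: dict[int, tuple[str, str]]) -> list[int]:
    """Sections where the two assigned teams produced different answers."""
    return [s for s, (t1, t2) in assignment.items()
            if outputs.get((t1, s)) != outputs.get((t2, s))]

teams = ["red", "green", "blue"]
assignment = assign_sections(6, teams)            # each team covers 4 of 6 sections
outputs = {(t, s): "ok" for s, pair in assignment.items() for t in pair}
outputs[("green", 2)] = "oops"                    # one team slips on section 2
print(flag_disagreements(outputs, assignment))    # -> [2]
```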