Question about error-correcting codes that’s probably in the literature, but I can’t seem to find the right search terms:
How can we apply error-correcting codes to logical *algorithms*, as well as bit streams?
If we want to check that a bit stream is accurate, we know how to do this for manageable overhead. But what happens if there’s an error in the hardware that does the checking? I can’t see how to construct a system with no single point of failure: you can run the correction algorithm multiple times, but how do you compare the results without the comparator itself becoming a single point of failure?
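To make the failure mode concrete, here’s a minimal sketch of the naive approach (Python; all names are illustrative, not from any real system): three redundant runs of a check, followed by a single vote. The vote is the unprotected step.

```python
from collections import Counter

def checksum(bits):
    # Stand-in for whatever error-detection computation you actually run.
    return sum(bits) % 2

def vote(results):
    # This comparison step is itself a single point of failure:
    # a fault in the hardware running vote() defeats the redundancy above.
    return Counter(results).most_common(1)[0][0]

bits = [1, 0, 1, 1]
results = [checksum(bits) for _ in range(3)]  # three redundant checking runs
print(vote(results))  # 1
```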
Anyone know any relevant papers or got a cool solution?
Asking in the interest of stable computronium-based futures!
At the risk of pointing out the obvious, the “typical” method used in military and space applications is hardware redundancy (often 3x, i.e. triple modular redundancy with majority voting).
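And the standard answer to “the voter is a single point of failure” in TMR designs is to triplicate the voters as well: each of three lanes gets its own voter feeding its own module copy, so a single faulty voter corrupts only one lane, and the next voting stage masks it. A rough sketch (Python, purely illustrative; the two-stage pipeline here is a toy example):

```python
def majority(a, b, c):
    # Boolean majority-of-three.
    return (a and b) or (a and c) or (b and c)

def stage(lanes, module):
    # Each of the three lanes has its own voter and its own module replica.
    voted = [majority(*lanes) for _ in range(3)]  # three independent voters
    return [module(v) for v in voted]             # three module replicas

lanes = [True, True, True]
lanes = stage(lanes, lambda x: not x)  # stage 1: three NOT gates
lanes[1] = not lanes[1]                # single-lane fault (voter or wire)
lanes = stage(lanes, lambda x: not x)  # stage 2 masks the fault
print(lanes)  # [True, True, True]
```

Any single fault per stage gets masked; the scheme only breaks if two of the three lanes fault within the same stage.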