Oh, cool! I’m not totally clear on what this means—did things like the toric code provide error correction in a linear number of extra steps, while this new result paves the way for error correction in a logarithmic number of extra steps?
Basically, the following properties hold for this code (I’m trusting Quanta Magazine to report the study correctly):
It is efficient like classical codes.
It can correct many more errors than previous codes.
Its ability to suppress errors stays constant no matter how long the string of bits you start with.
Each of its checks involves only a very small number of bits/qubits, which the Quanta article calls the LDPC (low-density parity-check) property.
It has local testability: errors can’t hide themselves, and any single check can reveal a large proportion of the errors, evading Goodhart’s Law.
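To make the LDPC property concrete, here is a toy classical sketch (my own illustration, not the code from the paper): a small parity-check matrix where every check involves only a few bits, and flipping a bit trips exactly the checks that touch it.

```python
import numpy as np

# Toy classical LDPC-style parity-check matrix (an illustration only,
# not the actual quantum code from the study): each row is one check,
# and each check involves just 3 of the 7 bits -- "low density".
H = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1, 1, 0],
    [1, 0, 0, 0, 0, 1, 1],
])

codeword = np.zeros(7, dtype=int)      # the all-zeros word satisfies every check
assert not np.any(H @ codeword % 2)

corrupted = codeword.copy()
corrupted[3] ^= 1                      # flip a single bit
syndrome = H @ corrupted % 2           # only the checks touching bit 3 fail
print(syndrome)                        # -> [1 0 1 0]
```

The point of the sparsity is practical: because each check reads only a handful of bits, checks stay cheap to measure even as the code grows, which is what the article means by the code being efficient "like classical codes".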