Similar issues plague hardware correctness and security, and it will be a much larger project to replace today’s flawed and insecure hardware. Max and Steve propose formal methods grounded in mathematical physics to produce provably safe physical designs. The same AI techniques which are revolutionizing theorem proving and provable software synthesis are also applicable to provable hardware design.
I’d like to add that hardware formal verification is not a virgin area.
Of course, physics is a huge complication, but in terms of the algorithms to be verified, hardware verification is much more tractable than software verification. For example, LCF-LSM was published in the early 1980s (M. J. C. Gordon, "LCF-LSM: A system for specifying and verifying hardware," Technical Report 41, University of Cambridge Computer Laboratory, 1983), and defense-related prototypes followed soon thereafter: https://apps.dtic.mil/sti/pdfs/ADA168248.pdf (United Kingdom, 1985).
After the infamous Pentium FDIV bug, Intel gave multi-million-dollar grants to academic researchers in formal methods during the 1990s. What I don’t know is to what extent formal verification methods have actually been adopted by the hardware industry (we do know that hardware engineers care much more about getting their systems correct than their software counterparts do, since bug fixes in shipped hardware are next to impossible).
Thanks for the links! Yes, the cost of errors is high for chips, so chip companies have been very supportive of formal methods. But they’ve primarily focused on the correctness of digital circuits. There are many other physical attack surfaces.
For example, most DRAM chips are vulnerable to the “Rowhammer” attack (https://en.wikipedia.org/wiki/Row_hammer), and as memories get denser, they become more vulnerable. The attack uses memory access patterns that flip bits in neighboring rows, which can be exploited to recover cryptographic keys and break the security assumptions of processors. The mitigations so far are fairly ad hoc, and attackers have found workarounds. A full solution will likely involve a formal model that includes the electromagnetic and material physics of the memory chip. Such a model might be used to provably characterize all problematic access patterns and to provably translate software into equivalent versions that don’t produce the dangerous accesses.
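The disturbance mechanism behind Rowhammer can be sketched with a toy model. Everything here is hypothetical and purely illustrative: the row geometry, the single per-row disturbance counter, and the `HAMMER_THRESHOLD` constant are stand-ins, not real DRAM parameters (real flip thresholds depend on the chip and the refresh interval).

```python
# Toy model of Rowhammer-style disturbance (illustrative only; all
# parameters are hypothetical, not real DRAM characteristics).

HAMMER_THRESHOLD = 50_000  # hypothetical activations needed to flip a bit


class ToyDRAMBank:
    def __init__(self, rows=8, cols=16):
        self.cells = [[1] * cols for _ in range(rows)]  # all bits start at 1
        self.disturbance = [0] * rows  # accumulated disturbance per row

    def activate(self, row):
        """Reading a row repeatedly disturbs its physical neighbors."""
        for neighbor in (row - 1, row + 1):
            if 0 <= neighbor < len(self.cells):
                self.disturbance[neighbor] += 1
                if self.disturbance[neighbor] >= HAMMER_THRESHOLD:
                    # Charge leakage flips a bit in the victim row.
                    self.cells[neighbor][0] ^= 1
                    self.disturbance[neighbor] = 0


bank = ToyDRAMBank()
# "Hammer" row 2 far more often than the refresh logic (absent here) expects.
for _ in range(HAMMER_THRESHOLD):
    bank.activate(2)

# Neighboring victim rows 1 and 3 have had a bit flipped; row 5 is untouched.
print(bank.cells[1][0], bank.cells[3][0], bank.cells[5][0])
```

A real attack works at this level of abstraction too: the attacker only issues legal reads to rows it owns, yet bits change in rows it never touched, which is exactly why a purely digital correctness proof of the memory controller misses the problem.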
Another example, from just yesterday, is the AMD “Sinkclose” flaw, which affects all AMD chips going back to 2006 and makes them vulnerable to attacks after which “You basically have to throw your computer away.” In a world of powerful AI-enabled attackers, that kind of flaw is unlikely to be acceptable.
With recent developments in drones, humanoid robots, and autonomous vehicles, we are likely to need provable constraints on the behavior of a wide variety of physical systems. In the paper with Max, we argue that this is possible and that the laws of physics provide an absolute constraint on the behavior of AIs. But it’s not easy! And the time is likely short.
“‘Sinkclose’ Flaw in Hundreds of Millions of AMD Chips Allows Deep, Virtually Unfixable Infections” https://archive.ph/MsA0m