Thanks for the links! Yes, the cost of errors is high for chips, so chip companies have been very supportive of formal methods. But they’ve primarily focused on the correctness of digital circuits. There are many other physical attack surfaces.
For example, most DRAM chips are vulnerable to the “Rowhammer” attack https://en.wikipedia.org/wiki/Row_hammer and as memories get denser, they become more vulnerable. Certain memory access patterns flip bits in physically neighboring rows, which can be exploited to extract cryptographic keys and break the security assumptions of processors. The mitigations so far have been fairly hacky, and attackers have found workarounds. A full solution will likely require a formal model that includes the electromagnetic and material physics of the memory chip. Such a model could be used to provably characterize all problematic access patterns and to translate software into equivalent versions that provably avoid the dangerous accesses.
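To make the mechanism concrete, here is a minimal sketch (not a working exploit) of the classic Rowhammer access pattern: two “aggressor” rows are read repeatedly so that the “victim” row physically between them accumulates charge disturbance. The row addresses below are hypothetical stand-ins; a real attack must also find virtual addresses mapping to adjacent physical DRAM rows and flush or evict the cache between reads (e.g. x86 `clflush`) so every access actually reaches DRAM.

```c
#include <stdint.h>

/* Sketch of a double-sided Rowhammer access pattern. The volatile
 * reads repeatedly activate the two aggressor rows; on vulnerable
 * DRAM (with cache flushes in place) this can flip bits in the row
 * between them. On an ordinary buffer nothing flips. */
static void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*row_a;   /* activate aggressor row A */
        (void)*row_b;   /* activate aggressor row B */
        /* a real attack would flush the cache here, e.g.
         * _mm_clflush(row_a); _mm_clflush(row_b); */
    }
}
```

The formal-verification goal sketched above would be to prove, from a physical model of the chip, which access patterns of this shape are dangerous, and to compile programs so they provably never emit them.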
Another example, from just yesterday, is the AMD “Sinkclose” flaw, which affects AMD chips going back to 2006 and leaves them vulnerable to attacks after which “You basically have to throw your computer away.” In a world of powerful AI-enabled attackers, that kind of flaw is unlikely to be acceptable.
With the recent developments in drones, humanoid robots, and autonomous vehicles, we are likely to need provable constraints on the behavior of a wide variety of physical systems. In the paper with Max we argue that this is possible and that the laws of physics provide an absolute constraint on the behavior of AIs. But it’s not easy! And the time is likely short.
“‘Sinkclose’ Flaw in Hundreds of Millions of AMD Chips Allows Deep, Virtually Unfixable Infections” https://archive.ph/MsA0m