I’m not sure what you think it means. Care to elaborate?
In computer science, provably secure software mechanisms rely on an idealized model of hardware; they aren’t and can’t be secure against hardware failures.
In my experience, they also define an attacker model against which to secure. There are no guarantees against attackers with greater access, or abilities, than specified in the model.
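To make the attacker-model point concrete, here's a minimal sketch (illustrative names and key, not any real system): a MAC check that is sound in a model where the attacker never learns the key, but offers no guarantee against an attacker powerful enough to read the key out of memory.

```python
import hmac
import hashlib

# Assumed to be outside the attacker's reach *in the model*.
KEY = b"example-secret-key"

def tag(msg: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for a message."""
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verify(msg: bytes, t: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(tag(msg), t)

msg = b"transfer $10"
assert verify(msg, tag(msg))

# An attacker *inside* the model cannot forge a valid tag without KEY.
# An attacker *outside* the model -- one who can read KEY from memory --
# simply computes a fresh tag for any message and passes verification.
forged = b"transfer $1,000,000"
assert verify(forged, hmac.new(KEY, forged, hashlib.sha256).digest())
```

The security statement is really "no forgery without the key"; widen the attacker's abilities past that assumption and the statement says nothing.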
I have yet to see any claims of a “secure” system that doesn’t state the assumptions and validations of the hardware involved. It may be only my world, but a whole lot of attention is paid to the various boundaries between containers (sandbox, VM, virtualized host, physical host, cluster, airgap), not just to the inner level.
Mostly, “provably secure” means all layers, unless there’s a fair bit of fine print with it.
I don’t think that’s possible: “provably secure” would then have to include a proof that our model of physics is correct and complete.
More generally, a “proof” is something done within a strictly-defined logic system. By definition it makes assumptions, and proves something given those assumptions.
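In proof-assistant terms (a sketch with hypothetical proposition names, not drawn from any real verification project), a security proof is really a proof of an implication whose hypotheses encode the model's assumptions:

```lean
-- The theorem never asserts SystemIsSecure outright; it asserts it
-- *given* that the hardware behaves as the model says it does.
variable (HardwareBehavesAsModeled SystemIsSecure : Prop)

theorem secure_given_assumptions
    (assumption : HardwareBehavesAsModeled)
    (security_argument : HardwareBehavesAsModeled → SystemIsSecure) :
    SystemIsSecure :=
  security_argument assumption
```

Discharge the hypothesis with faulty hardware and the conclusion simply doesn't follow.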
And also a proof that Bob from accounting can’t be convinced to let the AI use his phone for a minute. That’s a very tall order.
Yup. Layer 8 issues are a lot harder to prevent than even Layer 1 issues :)
Could you prove it? :)
Btw, we have to assume these papers were written by someone slyly trying to flip some bits in our brains!!
Yes, a provably secure system has assumptions about the other systems it uses, and they necessarily amount to “all those other systems work correctly and if they don’t, it’s their bug, not ours.”
Provable security means no security-affecting bugs. Precious little software is written to be provably correct (rather than just having the underlying algorithm proven correct). None of it runs on proven-correct operating systems with proven-correct BIOSes, drivers, and chipset ROMs all the way down to the network-card and hard-drive microcontrollers, because such mythical beasts don’t exist. (And those microcontrollers have long been general-purpose computers capable of hosting malware vectors.)
And none of that software runs on provably-correct hardware, which doesn’t exist either: software can be proven correct because it’s an algorithm, but how can you prove the perfection of a physical device like a CPU, the absence of physical implementation errors like this rowhammer bug which aren’t reflected in any design documents?
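A toy sketch of that gap (assumed scenario, not actual rowhammer code): even a check that is proven correct as an algorithm is defeated if the hardware flips one bit in the data it reads.

```python
def verified_is_admin(uid: int) -> bool:
    # Imagine this check is proven correct as an algorithm:
    # it returns True only when uid == 0 (the admin user).
    return uid == 0

uid = 4  # an ordinary, non-admin user

# A rowhammer-style fault flips bit 2 of the stored value,
# silently turning 4 into 0 in memory.
faulty_uid = uid ^ (1 << 2)

assert verified_is_admin(uid) is False        # correct on ideal hardware
assert verified_is_admin(faulty_uid) is True  # the proof said nothing about this
```

The proof holds over the program's semantics; the bit flip happens underneath those semantics, outside anything the proof quantifies over.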
Step one is to figure out the fundamental laws of physics. Step two is to input a complete description of your hardware. Step three is to construct the proof. I’m not sure how to order these in terms of difficulty.
1-3-2 in descending order of difficulty