I think TLW’s criticism is important, and I don’t think your responses are sufficient. I also think the original example is confusing; I’ve met several people who, after reading OP, seemed to me confused about how engineers could use the concept of mutual information.
Here is my attempt to expand your argument.
We’re trying to design some secure electronic equipment. We want the internal state and some of the outputs to be secret. Maybe we want all of the outputs to be secret, but we’ve given up on that (for example, radio shielding might not be practical or reliable enough). When we’re trying to design things so that the internal state and outputs are secret, there are a couple of sources of failure.
One source of failure is failing to model the interactions between the components of our systems. Maybe there is an output we don’t know about (like the vibrations the electronics make while operating), or maybe there is an interaction we’re not aware of (like magnetic coupling between two components we’re treating as independent).
Another source of failure is that we failed to consider all the ways that an adversary could exploit the interactions we do know about. In your example, we fail to consider how an adversary could exploit higher-order correlations between emitted radio waves and the state of the electronic internals.
A true name, in principle, allows us to avoid the second kind of failure. In high-dimensional state spaces, we might need to get kind of clever to prove the lack of mutual information. But it’s a fairly delimited analytic problem, and we at least know what a good answer would look like.
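To make this concrete, here is a toy sketch (my own construction, not from OP) of why mutual information is the right quantity to check: a secret bit that has essentially zero *linear* correlation with an emitted signal, yet clearly leaks through a higher-order relationship. A correlation-based audit passes; a crude plug-in mutual-information estimate does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: a secret internal bit, and an emission that leaks
# it only through its *magnitude* (a higher-order effect), not its sign.
secret = rng.integers(0, 2, size=100_000)        # internal state: 0 or 1
noise = rng.normal(size=secret.size)             # independent, zero-mean
emission = np.where(secret == 1, 2.0, 0.5) * noise

# Linear correlation is ~0, so a correlation-only audit sees nothing.
corr = np.corrcoef(secret, emission)[0, 1]
print(f"correlation: {corr:.4f}")                # close to 0

# Crude plug-in estimate of I(secret; emission) in bits, discretising the
# emission into 16 quantile bins.
edges = np.quantile(emission, np.linspace(0, 1, 17))
e_binned = np.digitize(emission, edges[1:-1])    # values in 0..15

def entropy_bits(labels: np.ndarray) -> float:
    p = np.bincount(labels) / labels.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

joint = e_binned * 2 + secret                    # unique code per (bin, bit)
mi = entropy_bits(e_binned) + entropy_bits(secret) - entropy_bits(joint)
print(f"mutual information: {mi:.3f} bits")      # clearly > 0: the bit leaks
```

This is only an illustration of the concept, of course; the hard part in a real system is proving the mutual information is (near) zero over the whole high-dimensional state space, not estimating it in a toy.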
The true name could also guide our investigations into our system, to help us avoid the first kind of failure. “Huh, we just made the adder have a more complicated behaviour as an optimisation. Could the unevenness of that optimisation over the input distribution leak information about the adder’s inputs to another part of the system?”
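As a hypothetical illustration of the adder worry (my example, not OP's): even a textbook carry-propagation adder does an input-dependent amount of work, so its operation count (observable as timing or power draw) carries information about its operands.

```python
# Hypothetical sketch: an adder whose work depends on its inputs,
# creating a timing / operation-count side channel.
def ripple_add(a: int, b: int) -> tuple[int, int]:
    """Carry-propagation add; returns (sum, number of carry steps taken)."""
    steps = 0
    while b:
        carry = (a & b) << 1   # bits where a carry is generated
        a, b = a ^ b, carry    # sum-without-carry, then propagate
        steps += 1
    return a, steps

# The step count varies with the operands, so an observer who can measure
# it learns something about the (supposedly secret) inputs.
print(ripple_add(0b1111, 1))   # (16, 5): long carry chain, many steps
print(ripple_add(0b1000, 1))   # (9, 1): no carry chain, one step
```

Asking “is the mutual information between inputs and step count zero?” is exactly the question that flags this, where “does the adder compute the right sum?” does not.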
Now, reader, you might worry that the chosen example of a True Name leaves an implementation gap wide enough for a human adversary to drive an exploit through. And I think that’s a pretty good complaint. The best defence I can muster is that it guides and organises the defender’s thinking. You get to do proofs-given-assumptions, and you get more clarity about how to think if your assumptions are wrong.
To the extent that the idea is that True Names are part of a strategy to come up with approaches that are unbounded-optimisation-proof, I think that defence doesn’t work and the strategy is kind of sunk.
On the other hand, here is an argument that I find plausible. In the end, we’ve got to make some argument that when we flick some switch or continue down some road, things will be OK. And there’s a big messy space of considerations to navigate to that end. True Names are necessary to have any hope of compressing the domain enough that you can make arguments that stand up.
I think that’s basically right, and good job explaining it clearly and compactly.
I would also highlight that it’s not just about adversaries. One of the main powers of proof-given-assumptions is that it allows us to rule out large classes of unknown unknowns in one go. And, insofar as the things-proven-given-assumptions turn out to be false, it allows us to detect previously-unknown unknowns.