Concerns about what I do on my own computer are separate from proofs about the software I send to you: those proofs might still be valid, and if so, trusted and influential.
Yes, I could achieve malicious ends by outsmarting you: I could give you software that does exactly what its specification says (along with a formal proof of that), yet whose seemingly innocent interactions with other servers (all perfectly valid under that proof) combine into a malicious effect you haven’t foreseen. But that concern doesn’t invalidate the usefulness of the proof.
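To make that concrete, here is a minimal sketch in Lean (all names hypothetical: `Server`, `allowed`, and `requests` are invented for illustration). The proof really is machine-checkable, but it certifies only what the specification states, and the specification here constrains only *which* servers are contacted, not the combined effect of the requests:

```lean
-- Hypothetical toy setup: two servers the program might talk to.
inductive Server where
  | alpha
  | beta
deriving DecidableEq

-- The specification: the program may only contact allowlisted servers.
def allowed : List Server := [Server.alpha, Server.beta]

-- The program's (seemingly innocent) sequence of outgoing requests.
def requests : List Server := [Server.alpha, Server.beta, Server.alpha]

-- A valid formal proof that every request goes to an allowed server.
-- Nothing in it rules out that this exact sequence of "innocent"
-- requests has a harmful combined effect the verifier hasn't foreseen.
theorem requests_meet_spec : ∀ s ∈ requests, s ∈ allowed := by decide
```

Any sequence of allowlisted requests passes the same check, including one engineered so its downstream effects are malicious; the proof is sound, the spec is just too weak to notice.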
I was thinking of a scenario where the verifier controls the hardware and the compiler.
E.g., the agent being verified sends you its source code to run on your own hardware, because it wants to work for you.
So hide the malicious behavior in some other computer you interact with.
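Here is a minimal sketch of that verifier-controlled pipeline, with hypothetical names throughout (`trusted_compile`, `check_proof`, and `Submission` are stand-ins, not a real toolchain): the verifier compiles the source itself, checks the proof itself, and runs the result on its own hardware. Note the gap the reply above points at: nothing in this pipeline certifies the *other* machines the running program talks to.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    """What the untrusted party sends over (hypothetical structure)."""
    source: str  # source code, so the verifier can compile it itself
    spec: str    # the property the proof claims to establish
    proof: str   # machine-checkable proof object (stubbed here)


def trusted_compile(source: str) -> str:
    """Stub for the verifier's own compiler, so the binary matches the source."""
    return f"<binary compiled from {len(source)} bytes of source>"


def check_proof(source: str, spec: str, proof: str) -> bool:
    """Stub for the verifier's own proof checker (placeholder acceptance test)."""
    return proof.startswith("QED:")


def verify_and_run(sub: Submission) -> str:
    """Compile, check, and run entirely on the verifier's side.

    The pipeline covers only the code the verifier compiles and executes;
    requests the running program then makes to other servers fall outside
    anything the proof certifies.
    """
    if not check_proof(sub.source, sub.spec, sub.proof):
        raise ValueError("proof does not establish the claimed spec")
    binary = trusted_compile(sub.source)
    return f"running {binary} under spec: {sub.spec}"


print(verify_and_run(Submission(
    source="print('hello')",
    spec="only contacts allowlisted servers",
    proof="QED: ...",
)))
```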