Such an obvious and easy-to-exploit vulnerability has existed for roughly 20 years, undiscovered and unexposed, until one person on LW pointed it out?
It’s not a vulnerability. I trust gnupg not to leak my private key, not the OpenPGP standard. I also trust gnupg not to delete all the files on my hard disk, etc. There’s a difference between trusting software to securely implement a standard and trusting the standard itself.
For an even simpler “vulnerability” in OpenPGP, look up section 13.1.1 of RFC 4880: encoding a message before encryption. Just replace the pseudo-random padding PS with bits from the private key; decoding (section 13.1.2) places no requirements on the content of PS.
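To make that covert channel concrete, here is a minimal sketch in Python (the function names and the OR-mask trick are my own illustration; only the EM = 0x00 || 0x02 || PS || 0x00 || M layout comes from the RFC). Both encoders produce padding that section 13.1.2 accepts identically, but one of them is quietly exfiltrating key material:

```python
import secrets

K = 128  # modulus length in octets (1024-bit key), purely for illustration

def eme_encode(message: bytes, k: int = K) -> bytes:
    """Honest EME-PKCS1-v1_5 encoding per RFC 4880 section 13.1.1:
    EM = 0x00 || 0x02 || PS || 0x00 || M, PS = nonzero pseudo-random octets."""
    ps_len = k - len(message) - 3
    ps = bytes(secrets.choice(range(1, 256)) for _ in range(ps_len))
    return b"\x00\x02" + ps + b"\x00" + message

def evil_eme_encode(message: bytes, key_material: bytes, k: int = K) -> bytes:
    """Malicious variant: PS carries private-key bytes instead of randomness.
    OR-ing each octet with 0x01 keeps it nonzero, so the result is still
    well-formed padding and the decoder never notices (~7 leaked bits per
    octet)."""
    ps_len = k - len(message) - 3
    assert len(key_material) >= ps_len, "need enough key bytes to fill PS"
    ps = bytes(b | 0x01 for b in key_material[:ps_len])
    return b"\x00\x02" + ps + b"\x00" + message
```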
Thank you, by the way, for actually including an example of such an attack. The discussion between ChristianKI and me covered about ten different subjects, so I wasn’t exactly sure what type of attack you were describing.
You are correct: in such an attack it would not be a question of trusting OpenPGP; it’s a general question of trusting software. These vulnerabilities are common to any software that someone might choose to download.
In this case, I would argue that a transparent, sandboxed programming language like JavaScript is probably one of the safer pieces of “software” someone can download, especially because browsers basically treat all JavaScript as if it could be malicious.
Why would I paste a secret key into software that my browser explicitly treats as potentially malicious? I still argue that trusting a verifiable author/distributor is safer than trusting an arbitrary website: trusting gpg is safer than trusting xxx.yyy.com/zzz.js, regardless of who you think wrote zzz.js. It’s simply easier to get that wrong in some way than to accidentally install an evil version of gpg, especially if you use an open-source package manager that makes use of PKI, run it from TAILS, etc. I am also more likely to trust JavaScript crypto served from https://www.gnupg.org/ than from any other URL, for instance.
In general I agree wholeheartedly with your comment about sandboxing being important. The problem is that sandboxing does not imply trust. I think smartphone apps are probably better sandboxed, but I don’t necessarily trust the distribution infrastructure (app stores) not to push down evil updates, etc. Sideloading a trusted app by a trusted author is probably a more realistic goal for OpenPGP for the masses.
I agree with what you said; I just want to clarify something:
My original statements were made in a very specific context: here are some ways you can attempt to verify this specific piece of software. At no point did I suggest that any of those methods could be used universally, or that they were foolproof. I grew weary of ChristianKI continually implying that I had, so I stopped responding to him.
So with that said: yes, using this program does require trusting me, the author. If you don’t trust me, I have suggested some ways you could verify for yourself. If you aren’t able to or it’s too much trouble, that’s fine; don’t use it. As mentioned before, I never meant this to be “PGP for the masses”.
The core question isn’t “how safe is X” but “what safety guarantees does X make” and “does X actually keep its promises”.
A piece of software with a decent user base, downloaded from SourceForge, is more trustworthy than unknown code transferred unencrypted over the internet.
Projects like Tor go even beyond that standard and provide deterministic builds, allowing independent verification of checksums to make sure that you really are running the code you think you are running.
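The checksum half of that is mechanically simple; what deterministic builds add is that independent builders can reproduce the binary bit-for-bit, so the published digest can be cross-checked rather than taken on faith. A minimal sketch (the file name and digest below are placeholders, not real Tor release artifacts):

```python
import hashlib

def sha256_matches(path: str, published_hex: str) -> bool:
    """Stream a downloaded file through SHA-256 and compare against a
    digest published out-of-band (ideally signed by the release key)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == published_hex.lower()

# Placeholder usage -- neither the file name nor the digest is real:
# sha256_matches("tor-browser.tar.xz", "<published sha256 hex>")
```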
“It’s a general question of trusting software.”
In this case, that means trusting software that travels unencrypted through the internet. It’s quite an easy principle: don’t trust code that travels unencrypted to do anything. It’s really security 101: don’t trust unencrypted communication channels.
Yes, there might be times when you violate that heuristic and don’t get harmed, but good security practice is still “don’t trust unencrypted communication channels”.
The idea of saying “well, I don’t have to trust the unencrypted communication channel because I can do my fancy sandboxing” shouldn’t come up. That’s not how you think in crypto. And in this case, the sandboxing doesn’t work anyway.
You could have said: “This is just a fun project; don’t put any important private keys into it.” You didn’t; instead you argued that your system can do more than it can.
The fact that you made those promises so laxly also makes it doubtful that the iPhone app provides what it claims.
Key issues:
1) Do you make sure that the real image never gets written to SSD storage? (There’s no way to trustworthily delete files from SSD storage.)
2) Did you get the entropy production right? (See the sketch after this list.)
3) Do you really leave no traces in the final image?
4) Are there no other bugs that make the crypto fail?
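On point 2, the bar is roughly this (a generic sketch in Python, not the app’s actual code): every random value the crypto touches, whether padding, keys, or placement of hidden bits, must come from a cryptographically secure generator, never from a seeded general-purpose one.

```python
import random
import secrets

# Wrong: the Mersenne Twister behind `random` is deterministic, and its
# internal state can be reconstructed from its output. Fine for
# simulations, fatal for key material or padding.
weak_pad = bytes(random.randrange(256) for _ in range(16))

# Right: an OS-backed CSPRNG (getrandom(2), /dev/urandom, and so on),
# which is what `secrets` wraps.
strong_pad = secrets.token_bytes(16)
```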
Given the security-101 issues with the other project and the way you present it, why should someone trust that you handled those questions well?