This could easily turn into a book on the different security models. But I don’t think I have time for that, and you probably don’t either, so I’ll try to just respond to what you said...
> Faking an ID depends only on your skills and resources.
I don’t think that’s a productive way to look at it at all. The skills and resources I will need depend on the countermeasures taken by the people who create the IDs… who, although they may be under cost constraints, are professionals, unlike most of the users. They are the ones who invest resources, and they do have all kinds of choices about how much to invest.
There’s also the question of verifier resource investment. A store clerk deciding whether to sell you beer may just glance at the card. A cop who stops you in traffic will nearly always check everything on the ID against the issuer’s database, at least if it’s a “local” ID… with the definition of “local” being expanded constantly. It’s a three-way verification between the card, your face, and the database. I suspect notaries in many places do the same now, and I would expect the number of such places to increase. An ID card is no longer just a piece of plastic.
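To make that "three-way" check concrete, here's a minimal sketch of the logic a serious verifier is effectively running. The issuer-lookup API and record fields are invented purely for illustration; real systems obviously differ.

```python
# Hypothetical sketch of a card / face / issuer-database check.
# The lookup API and record fields are made up for illustration.

def verify_id(card, face_match_score, issuer_db, threshold=0.9):
    record = issuer_db.lookup(card.id_number)      # issuer's copy of the record
    if record is None or record.revoked:
        return False                               # never issued, or reported lost/stolen
    fields_match = (record.name == card.name and
                    record.date_of_birth == card.date_of_birth and
                    record.photo_hash == card.photo_hash)
    bearer_matches_photo = face_match_score >= threshold
    return fields_match and bearer_matches_photo   # card vs. database vs. face
```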
So, for transactions big enough for your counterparty to bother investing in serious verification, faking the ID really becomes a matter of either faking the biometrics it uses (not so easy in person even if the biometric is just a facial photograph), or subverting the issuing system.
It’s true that subverting the issuing system is a class break against all of the IDs it issues, but it’s also true that finding a zero-day in code that protects keys is a class break against all of the keys protected by that code.
> Also, IDs are verified by people, who can make different mistakes.
… but keys are also held by people, who can make different mistakes. And they use different ways of storing the keys.
In any case, for any particular transaction, I as an attacker don’t usually get my pick of verifiers. If I want to divert the payment for your house, I have to fool the particular person who handles that payment (and then I have to move very fast to get the money out of reach before they claw back the transaction). I can’t get $500,000 from an escrow agent by fooling the clerk down at 7-11.
> Whereas a key is either leaked or it isn’t.
Well, no, actually. I said “steal your key”, but the real issue is “use your key”.
Suppose you’re using some kind of “hardware” key storage device (they’re really hardware plus quite a bit of software). The problem for me isn’t necessarily to get a copy of your key out of that device. It’s enough for me to induce that device to sign the wrong transaction… which can be done by tricking either it or you. I may be in a position to do that in some circumstances, but not in others.
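Here's a minimal sketch of that point, with everything invented for illustration (this is not any real wallet's API): the dangerous path isn't extracting the key, it's getting the device to sign attacker-chosen bytes.

```python
from hashlib import sha256

def host_build_tx(payee, amount):
    # Runs on the general-purpose computer -- the part I may already own.
    return f"pay {amount} to {payee}".encode()

def device_sign(tx_bytes, user_confirms):
    # Runs on the hardware device; the key never leaves it.
    if not user_confirms(tx_bytes):    # the last line of defense is you, reading the screen
        raise RuntimeError("user rejected transaction")
    digest = sha256(tx_bytes).hexdigest()
    return f"signature-over-{digest}"  # stand-in for a real signature

# If I control the host, I just build "pay 500000 to attacker" instead and hope
# you click through the confirmation. Your key was never "stolen" -- only used.
```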
You don’t just have one thing to defend against, either. I have a pretty broad choice of approaches to tricking you and/or the device, and my options multiply if I manage to own the general-purpose computer you’ve plugged the device into, let alone the device itself. You have to defend against all of my options.
If you step back further, take a timeless point of view, and look at the overall history of transactions controlling a block chain’s idea of a durable asset’s ownership, there are going to be a lot of keys and key holders in that history. Only one of them has to go wrong to permanently divert the asset. So there are still lots of different people to trick if I want to establish a new dynasty in the manor.
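A back-of-the-envelope way to see it: if each key holder in the asset's history independently has some small chance p of a compromise or fatal mistake, the chance that at least one goes wrong is 1 - (1 - p)^n. The numbers below are made up purely for illustration.

```python
p = 0.01                     # assumed per-holder chance of compromise (illustrative)
for n in (1, 5, 20, 50):     # number of key holders in the asset's history
    print(n, round(1 - (1 - p) ** n, 3))
# 1 -> 0.01, 5 -> 0.049, 20 -> 0.182, 50 -> 0.395
```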
You’re not necessarily the only person affected if you screw up with your key, either. Arguments based on self-reliance only go so far in deciding what kind of system everybody should be using.
What I feel like I see from “blockchain people” is this sense that keys are axiomatically safe, to the point where it’s always sensible to use them for large, irrevocable transactions with no escape hatch. Even people who have personally made (or failed to make) diving catches to keep, say, Ethereum contract bugs from screwing people over, still somehow seem to maintain their gut-level faith in code as law and total trustlessness.
Frankly it feels like “just world” thinking: “Key compromise (or whatever) only happens to the clumsy and lazy (who deserve what they get). I’m not clumsy or lazy, so I’ll be fine”. Even if that were true, there are enough clumsy and lazy people out there to cause total system collapse in a lot of applications if you don’t design around them.
I actually think that block chains are a useful tool, that they can reduce the need for trust in many applications, and that that’s a very good feature. Nonetheless, the idea that they can make everything completely automatic and trustless is just not reasonable.
If we’re talking about real estate titles, you might be able to use a block chain to record everything, but somebody is always going to have to be able to override the system and transfer title against the will of the listed holder, or when the listed holder has lost the relevant key. There is going to have to be a “bureaucratic” system for managing those overrides, including trust in certain authorities.
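As a toy illustration of why that override path drags trust back in: any workable registry ends up with something like the sketch below, where a designated authority's key can reassign a title outside the normal rules. Nothing here resembles a real land-registry design; the names and rules are invented.

```python
class TitleRegistry:
    """Toy title registry: a normal key-controlled path plus a trusted override."""

    def __init__(self, authority_key):
        self.owners = {}                    # parcel_id -> owner's public key
        self.authority_key = authority_key  # the key everyone has to trust

    def transfer(self, parcel_id, signer_key, new_owner_key):
        # Normal path: only the listed holder's key can move the title.
        if self.owners.get(parcel_id) != signer_key:
            raise PermissionError("signer is not the listed holder")
        self.owners[parcel_id] = new_owner_key

    def override(self, parcel_id, signer_key, new_owner_key, court_order):
        # Escape hatch: lost keys, fraud, inheritance, court-ordered transfers.
        if signer_key != self.authority_key:
            raise PermissionError("override requires the registry authority")
        if not court_order:
            raise ValueError("the bureaucratic paper trail is the point")
        self.owners[parcel_id] = new_owner_key
```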
By the way, I am not saying that the sort of magical thinking mentioned in the original post doesn’t exist. “Send in a scan of your ID card” is stupid 99 percent of the time. “You must make the signature using a pen” is stupid and usually based on ignorance of the law. It’s just that nothing else is a magic fix either.
I think we had different intuitions because we considered different user populations; a kind of “typical skill fallacy” on my part.
It might be, as you say, easier to steal an average blockchain user’s private key than to successfully fake their government ID. I don’t think I know what the average blockchain user’s security is like, or whether it’s much better than the average computer user’s security, which is very poor. (Although that statement once again bakes in some assumptions about the attacker...)
Rather, I was imagining myself, and others who, like me, have some relevant experience. (I’ve spent a few years helping manage a private X.509 CA and associated hardware and software in a pretty paranoid environment, so perhaps my expectations are set high!) I believe that if I wanted to strongly protect a private key, because I had a lot of value invested in it, I’d be able to make it much more secure than my government ID.
The key point is that a blockchain user can invest in security in proportion to the value being guarded, whereas IDs provide a similar level of security to everyone: one person’s ID probably isn’t orders of magnitude harder to fake than another’s. The exception is if they’re, say, very famous, or very unlikely to be found where you are or doing the things you’re doing with their ID, in which case verifiers might not believe you even if you look like the photo on the ID. (Although social engineering can work wonders.)
I wasn’t talking about any blockchain use in particular, and I don’t have a strong, thought-out defense of any particular use tied to real-world entities like real estate; I haven’t investigated the subject enough. I know my way around key management; what you do with the key afterwards is your business :-)