[LINK] The P + epsilon Attack (Precommitment in cryptoeconomics)
Vitalik Buterin has a new post about an interesting theoretical attack against Bitcoin. The idea relies on the assumption that the attacker can credibly commit to something quite crazy. The crazy thing is this: paying out 25.01 BTC to all the people who help him in his attack to steal 25 BTC from everyone, but only if the attack fails. This leads to a weird payoff matrix where the dominant strategy is to help him in the attack. The attack succeeds, and no payoff is made.
Of course, smart contracts make such crazy commitments perfectly possible, so this is a bit less theoretical than it sounds. But even as an abstract thought experiment about decision theories, it looks pretty interesting.
By the way, Vitalik Buterin is really on a roll. Just a week ago he had a thought-provoking blog post about how Decentralized Autonomous Organizations could possibly utilize a concept often discussed here: decision theory in a setup where agents can inspect each others’ source code. It was shared on LW Discussion, but earned less exposure than I think it deserved.
EDIT 1: One smart commenter on the original post spotted that an isomorphic, extremely cool game had already been proposed by billionaire Warren Buffett. Does this thing already have a name in game theory, maybe?
EDIT 2: I wrote the game up in detail for some old-school game theorist friends:
The attacker orchestrates a game with 99 players. The attacker himself does not participate in the game.
Rules:
Each of the players can either defect or cooperate, in the usual game-theoretic setup where they announce their decisions simultaneously, without side channels. We call the “aggregate outcome” the decision that was made by the majority of the players. If the aggregate outcome is defection, we say that the attack succeeds. A player’s payoff consists of two components:
1. If her decision coincides with the aggregate outcome, the player gets 10 utilons.
and simultaneously:
2. If the attack succeeds, the attacker takes 1 utilon from each of the 99 players, regardless of their own decision.
|                 | Cooperate | Defect |
|-----------------|-----------|--------|
| Attack fails    | 10        | 0      |
| Attack succeeds | −1        | 9      |
There are two equilibria, but the second payoff component breaks the symmetry, and everyone will cooperate.
Now the attacker spices things up, by making a credible commitment before the game. (“Credible” simply means that somehow they make sure that the promise cannot be broken. The classic way to achieve such things is an escrow, but so-called smart contracts are emerging as a method for making fully unbreakable commitments.)
The attacker’s commitment is quite counterintuitive: he promises that he will pay 11 utilons to each of the defecting players, but only if the attack fails.
Now the payoff looks like this:
|                 | Cooperate | Defect |
|-----------------|-----------|--------|
| Attack fails    | 10        | 11     |
| Attack succeeds | −1        | 9      |
Defection became a dominant strategy. The clever thing, of course, is that if everyone defects, then the attacker reaches his goal without paying out anything.
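To make the two tables concrete, here is a minimal Python sketch of the payoffs as described above; the constant values and the payoff helper are my own shorthand, not anything from Vitalik’s post.

```python
# Minimal sketch of the 99-player game described above. One player's utilons,
# given her move ("C" or "D"), how many of the 99 players defect in total,
# and whether the attacker's conditional bribe is in force.

N = 99

def payoff(my_move, n_defectors, bribe_active):
    succeeds = n_defectors > N // 2                  # defection is the majority outcome
    aggregate = "D" if succeeds else "C"
    u = 10 if my_move == aggregate else 0            # component 1: match the aggregate
    if succeeds:
        u -= 1                                       # component 2: the attacker's theft
    if bribe_active and my_move == "D" and not succeeds:
        u += 11                                      # the precommitted bribe
    return u

# Reproduce the two tables (rows: attack fails / succeeds, columns: C / D).
for bribe_active in (False, True):
    print("with bribe" if bribe_active else "without bribe")
    print("  attack fails   ", payoff("C", 0, bribe_active), payoff("D", 1, bribe_active))
    print("  attack succeeds", payoff("C", 98, bribe_active), payoff("D", 99, bribe_active))
```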
I’m confused about defection becoming a dominant strategy, because the existence of a dominant strategy suggests to me that there should exist a unique Nash equilibrium here, which is not the case. Everyone defecting is a Nash equilibrium, but 50 people cooperating and 49 defecting is a Nash equilibrium as well, and a better one at that. Something (quite likely my intuition regarding Nash equilibria in games with more than 2 players) is off here. Also, it is of course possible to calculate the optimal probability that we should defect, and I agree with FeepingCreature that this should be 0.5-e, where e depends on the size of the player base and goes to 0 when the player base becomes infinite. But I highly doubt that there’s an elegant formula for it. It seems (in my head at least) that already for, say, n=5 you have to do quite a bit of calculation, let alone n=99.
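(A quick mechanical check of the 50/49 claim, sketched in Python; the payoff helper and its constants are just my shorthand for the rules above.)

```python
# Checks that with the bribe in place, "50 cooperate / 49 defect" and
# "everyone defects" are both Nash equilibria of the full 99-player game.
N = 99

def payoff(my_move, n_defectors):
    succeeds = n_defectors > N // 2            # defectors form the majority
    aggregate = "D" if succeeds else "C"
    u = 10 if my_move == aggregate else 0      # coordination reward
    u -= 1 if succeeds else 0                  # attacker's theft
    u += 11 if (my_move == "D" and not succeeds) else 0   # conditional bribe
    return u

def is_nash(n_defectors):
    """True iff no single player gains by unilaterally switching her move."""
    cooperator_ok = (n_defectors == N or
                     payoff("D", n_defectors + 1) <= payoff("C", n_defectors))
    defector_ok = (n_defectors == 0 or
                   payoff("C", n_defectors - 1) <= payoff("D", n_defectors))
    return cooperator_ok and defector_ok

print(is_nash(49))   # 50 C / 49 D: True, and defectors even earn 11 vs. 10
print(is_nash(99))   # all defect:  True, everyone earns 9
print(is_nash(50))   # 49 C / 50 D: False, so the near-tie is a knife edge
```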
Nice. If we analyze the game using Vitalik’s 2x2 payoff matrix, defection is a dominant strategy. But now I see that’s not how game theorists would use this phrase. They would work with the full 99-dimensional matrix, and there defection is not a dominant strategy, because as you say, it’s a bad strategy if we know that 49 other people are cooperating, and 49 other people are defecting.
There’s a sleight of hand going on in Vitalik’s analysis, and it is located at the phrase “regardless of one’s epistemic beliefs [one is better off defecting]”. If my epistemic belief is that 49 other people are cooperating, and 49 other people are defecting, then it’s not true that defection is my best strategy. Of course, Vitalik’s 2x2 matrix just does not allow me to have such refined epistemic beliefs: I have to get by with “attack succeeds” versus “attack fails”.
Which kind of makes sense, because it’s true that I probably won’t find myself in a situation where I know for sure that 49 other people are cooperating, and 49 other people are defecting, so the correct game theoretic definition of dominant strategy is probably less relevant here than something like Vitalik’s “aggregate” version. Still, there are assumptions here that are not clear from the original analysis.
So, I did not forget about that particular case. In my particular brand of cryptoeconomic analysis, I try to decompose cooperation incentives into three types:
Incentives generated by the protocol
Altruism
Incentives arising from the desire to have the protocol succeed because one has a stake in it
I often group (2) and (3) into one category, “altruism-prime”, but here we can separate them.
The important point is that category 1 incentives are always present as long as the protocol specifies them, category 2 incentives are always present, but the size of category 3 incentives is proportional to the “probability of being pivotal” of each node—essentially, the probability that the node actually is in a situation where its activity will determine the outcome of the game.
Note that I do not consider 49/50 Nash equilibria realistic; in real massively multiplayer games, the level of confusion, asynchronicity, trembling hands/irrational players, bounded rationality, etc., is such that I think it’s impossible for such a finely targeted equilibrium to maintain itself (this is also the primary keystone of my case against standard and dominant assurance contracts). That is why I prefer to think of the probability distribution on the number of players that will play a particular strategy, and from there the probability of a single node being pivotal.
In the case of cryptoeconomic consensus protocols, I consider it desirable to achieve a hard bound of the form “the attacker must spend capital of at least C/k” where C is the amount of capital invested by all participants in the network and k is some constant. Since we cannot prove that the probability of being pivotal will be above any particular 1/k, I generally prefer to assume that it is simply zero (ie, the ideal environment of an infinite number of nodes of zero size). In this environment, my usage of “dominant strategy” is indeed fully correct. However, in cases where hostile parties are involved, I assume that the hostile parties are all colluding; this maximally hard double-standard is a sort of principle of charity that I believe we should hold to.
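To get a concrete feel for the “probability of being pivotal”, here is a minimal sketch under the illustrative assumption (mine, not Vitalik’s) that each of the other 98 players defects independently with probability p; a single vote then only matters when the others split exactly 49-49.

```python
from math import comb

def pivot_probability(p, n_others=98):
    """P(my single vote decides the majority), assuming each of the other
    n_others players defects independently with probability p. With 99
    players this happens exactly when the others split 49-49."""
    k = n_others // 2
    return comb(n_others, k) * p**k * (1 - p)**k

for p in (0.5, 0.4, 0.3, 0.1):
    print(f"p = {p}: pivotal with probability {pivot_probability(p):.2e}")
```

Even at p = 0.4 the pivot probability is only around one percent, and it collapses toward zero as p moves away from one half, which is the intuition behind treating it as zero in the infinite-node limit.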
I don’t know too much about decision theory, but I was thinking about it a bit more, and for me, the end result so far was that “dominant strategy” is just a flawed concept.
If the agents behave superrationally, they do not care about the dominant strategy, and they are safe from this attack. And the “super” in superrational is pretty misleading, because it suggests some extra-human capabilities, but in this particular case it is so easy to see through the whole ruse that one has to be pretty dumb not to behave superrationally. (That is, not to consider the fact that other agents will have to go through the same analysis as ourselves.)
Superrationality works best when we actually know that the others have the same input-output function as ourselves, for example when we know that we are clones or software copies of each other. But real life is not like that, and now I believe that the clean mathematical formulation of such dilemmas (with payoff matrices and all that) is misleading, because it sweeps under the rug another, very fuzzy, hard-to-formalize input variable: the things that we know about the reasoning processes of the other agents. (In the particular case of the P+epsilon attack, we don’t have to assume too much about the other agents. In general, we do.)
Weird question: superrationally speaking, wouldn’t the “correct” strategy be to switch to B with 0.49 probability? (Or with however much is needed to ensure that if everybody does this, A probably still wins)
[edit] Hm. If B wins, this strategy halves the expected payoff. So you’d have to account for the possibility of B winning accidentally. Seems to depend on the size of the player base—the larger it is, the closer you can drive your probability to 0.5? (at the limit, 0.5-e?) Not sure. I guess it depends on the size of the attacker’s epsilon as well.
I’m sure there’s some elegant formula here, but I have no idea what it is.
The superrational strategy is indeed to switch to B with some probability approaching 0.5 (or, if the system allows it, vote for A with 51% of one’s capital and for B with 49% of it).
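Since a couple of comments above wished for a formula: the optimum can at least be brute-forced. Below is a minimal sketch (the parameter names and the simple grid scan are mine) that computes the per-player expected payoff when every player defects with the same probability p and then looks for the best p; the scan suggests the optimum sits well below 0.5 for small player counts and drifts toward 0.5 as the player base grows, in line with the 0.5-e guess.

```python
from math import comb

def expected_payoff(p, n=99, coord=10, loss=1, bribe=11):
    """Per-player expected utilons when every player independently defects
    with probability p (the symmetric strategy discussed above)."""
    majority = n // 2 + 1                     # defectors needed for the attack to succeed
    total = 0.0
    for i_defect in (True, False):
        w_me = p if i_defect else 1 - p
        for k in range(n):                    # k = defectors among the other n - 1 players
            w = comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
            succeeds = k + (1 if i_defect else 0) >= majority
            u = coord if i_defect == succeeds else 0    # match the aggregate outcome
            u -= loss if succeeds else 0                # the attacker's theft
            u += bribe if (i_defect and not succeeds) else 0
            total += w_me * w * u
    return total

# Brute-force scan over p for a few player counts.
for n in (5, 21, 99):
    best_u, best_p = max((expected_payoff(i / 1000, n), i / 1000) for i in range(1000))
    print(f"n = {n:3d}: best p ~ {best_p:.3f}, expected payoff ~ {best_u:.2f}")
```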
Consider the following. If smart contracts were implementable, one of the easier rationality-raising measures and most profitable things to do would be to immediately hold a number of contractually enforceable dollar auctions until there remained no one naive to that exact class of irrationality.
Similarly, you could burgle people’s houses until there remained no one poorly protected against whatever sort of burglary you committed. You could go around insulting people until there remained no one so thin-skinned they couldn’t cope OK with being insulted by strangers. You could put disease-causing viruses and bacteria in the water supply until there remained no one alive who wasn’t resistant to the diseases.
I hope it’s obvious what’s unsatisfactory about those courses of action. Why is the one you propose any better?
If you put diseases in the water until there remains no one who isn’t resistant, this will happen partly because people will be killed by the disease. If everyone developed immunity against the disease but nobody was seriously harmed, it would not be such a bad idea—this is why we have universal vaccination.
I highly doubt that doing a bunch of dollar auctions would lead to there being no remaining naive people because the naive people were all killed off, rather than because they became non-naive.
They would become non-naive by being harmed (by losing a pile of money). Of course that’s a lesser harm than being killed, and indeed “kill the non-resistant ones” is different from “harm the non-resistant ones until they become resistant”, so I probably shouldn’t have included the diseases-in-the-water example because it uses both effects. It’s the latter that I had in mind as common to the examples I listed (as well as, of course, Clarity’s original proposal).
Unless they’re 12 years old, losing a couple of dollars is not really all that damaging.
I bet the actual gain in wisdom, relative to just telling them “don’t do that”, is in proportion to the damage.
Ethereum does have smart contracts.
No they don’t. They’re developing a platform for them. They’re not usable outside of alleged mathematical proof-of-concept type stuff that has already been rubbished by academics. What they have is a great approach to marketing, a supportive community, venture capitalists, and a house of cards for now. It may turn into something, but they haven’t done enough that I wouldn’t bet on the next outsider that comes along with proper cryptological credentials.
They launched on 30 July 2015.
They have enough that the prediction market Augur gets built on their technology. Slock.it (http://slock.it/) and Airlock (http://airlock.me/) build smart locks with Ethereum.
Take a look at Slock.it’s or Airlock’s page. Can you buy or implement their system? No.
They are working on developing something contingent on something that doesn’t exist.
Ethereum launched as a currency already, but that doesn’t mean it has implemented its claim to value: smart contracts. It’s borderline a scam, dare I say. That, or there is something seriously wrong with the way they are conducting themselves. Altcoins really have been underwhelming as a whole since Bitcoin, and even Bitcoin is so poorly integrated into existing financial infrastructure that it’s not an immediately tractable concept, and until it is... I’d rather not hedge my bank account into something that may very well end in catastrophic failure. Some day there will probably be an “attack” which destroys a virtual coin economy and network and delegitimises the movement, killing confidence in the concept.
Augur doesn’t actually implement smart contracts in a prediction market, not yet at least. The best solution we have to that problem is “oracles”, which are somewhat decentralised but not at all original, and not at all trustless smart contracts.
edit: btw, Augur specifically is a fork of Ethereum. It’s a big way of saying fuck you and thanks for the code, basically. Consider for example the case of Anoncoin planning to fork into Zerocash: Zerocash will entirely dominate Anoncoin in theory, so all those people who hold Anoncoin can basically kiss their holdings goodbye thanks to their developers defecting to the new technology. If Augur is successful, what can Augur holders do that Ethereum holders can? Everything. Coins of any currency become redundant when their selling point is a gimmick with low barriers to entry for improving upon their inherent value (imagine being able to easily alchemise a more useful, prettier version of gold, then sell it for less than gold initially; that certainly doesn’t make gold feel like a good investment, particularly when, as in the case of Ethereum, the use is merely aspirational...).
Of course investing in new technology is always risky.
I haven’t heard of any altcoin besides Ethereum that provides a significant benefit over Bitcoin, so it’s natural that other altcoins don’t lead anywhere.
I haven’t read anything about Augur wanting to fork ethereum. Where did you get that idea? The FAQ says that Augur wants to run on Ethereum (https://augur.zendesk.com/hc/en-us)
From Augur’s perspective I don’t think it makes much sense to fork Ethereum. There’s more security against attacks without forking. Augur also profits from having Ether as the core currency that’s paid inside the network rather than having a separate currency for that.
None of the direct Bitcoin forks are more useful than Bitcoin because of network effects. The same is true for Ethereum. Ethereum is better than Bitcoin because it allows applications like Augur to run on it. It’s more difficult to legally attack Augur when it runs on Ethereum instead of its own network.
If you are a company renting out lockers, then it’s useful to have the payment for the lockers be in Ether instead of your own currency XEther, because Ether is a currency for which other applications exist.
Sooner or later there will be a Silk Road-type service on Ethereum. In contrast to a lot of Bitcoin markets, the market owner won’t be able to run away with the money in that market.
I don’t see a huge market for those currencies. If trades happen in Ethereum-based markets, it’s quite easy to provide a trustworthy mixer where the trust is based on an Ethereum escrow that internally uses Zerocash.
There’s no good reason to edit instead of writing a new post to continue the conversation.
Hey, I was thinking earlier this morning that lots of my reasoning above has been emotionally motivated after losing ~$110 in transaction fees cashing in what would otherwise have been a profitable trade in bitcoins, haha.
I’m gonna come back and reply to this if/once I’m thinking more objectively
No, but Ethereum works well enough that they are in the process of building their system.
Smart contracts are a building block for what Augur is doing.
The application of dollar auctions is much simpler. You don’t need any ‘oracles’ for it.
Isn’t this an attack against something called SchellingCoin, not Bitcoin?
If you read till the end, there is a scenario for how this attack could be used against Bitcoin.