I don’t see what problem is being solved by smart contracts here; at the end of the day you have to interact with the real world to enforce your contracts.
The smart contract would specify the identities of other network accounts who will be able to vote on the outcome of the contract. So there could be, say, 10 third-party accounts representing “observers” who vote on whether or not they believe the target was killed (this is from the paper). These observers would have a reputation built up over past predictions, and might not actually be human beings who can be arrested (or they might live in countries where this activity is not considered a crime). This means the hitman can either hunt down each anonymous observer and force them at gunpoint to vote his way (in which case other mechanisms would need to zero out their reputation scores for the false observation), or kill the target, or forfeit the money back to the buyer when the contract expires.
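The voting mechanism being described is a generic oracle-escrow pattern: designated observer accounts vote on whether the contracted outcome occurred, a majority releases the funds, and an unresolved contract refunds the buyer at expiry. Here is a minimal sketch of that logic in Python; all names and the simple-majority rule are illustrative assumptions, not anything specified by the paper.

```python
# Sketch of the oracle-escrow mechanism described above: N designated
# observer accounts vote on the contract's outcome; a strict majority
# of "yes" votes pays the seller, and an unresolved contract refunds
# the buyer once it expires. Names are illustrative, not a real API.

class OracleEscrow:
    def __init__(self, buyer, seller, observers, amount, expiry_day):
        self.buyer = buyer
        self.seller = seller
        self.observers = set(observers)   # e.g. 10 third-party accounts
        self.amount = amount
        self.expiry_day = expiry_day
        self.votes = {}                   # observer -> bool (outcome occurred?)
        self.resolved = False

    def vote(self, observer, outcome_occurred):
        # Only designated observers may vote, and only while unresolved.
        if observer in self.observers and not self.resolved:
            self.votes[observer] = outcome_occurred

    def settle(self, today):
        """Pay the seller on a majority 'yes'; refund the buyer on expiry."""
        if self.resolved:
            return None
        yes_votes = sum(1 for v in self.votes.values() if v)
        if yes_votes > len(self.observers) // 2:
            self.resolved = True
            return (self.seller, self.amount)   # majority confirmed outcome
        if today >= self.expiry_day:
            self.resolved = True
            return (self.buyer, self.amount)    # expired: refund the buyer
        return None                             # still pending
```

Note that nothing in this code touches reality: the contract only trusts whatever the observer accounts report, which is exactly the objection being raised in this thread.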
You’re just delegating the problem to an observer reputation system that has the same problem one level deeper. Who actually has an incentive to align the observers’ reputations with what actually happened?
This is a thorny problem, and I’m not working in this space. Having thought a bit about the problem and rejected many other possibilities, what I arrived at is this:
Day 0: no one has a reputation, but n accounts volunteer to be judges.
Day n: each judge has a history log of (evidence, decision) pairs. Automated tools detect a corrupt judge by scanning the log for decisions not justified by the evidence; the buyer and the seller then agree on a list of non-corrupt judges, and a random sample of that list is chosen. (The simplest check is to flag a judge whose decision differs from the other judges’, but determining who is “right” when the majority is wrong is a difficult, unsolved problem.)
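The audit-and-sample step above can be sketched concretely. This is an assumed implementation of the “simplest way” mentioned: flag any judge who dissented from the majority on a given piece of evidence, then draw a random panel from the judges both parties accept. The majority-is-right weakness called out above is baked into `flag_corrupt`.

```python
# Illustrative sketch of the judge-audit step: flag judges whose
# decisions diverge from the majority decision on the same evidence,
# then draw a random panel from the judges both parties accept.
# The "majority is right" assumption is the scheme's known weak point.

import random
from collections import Counter, defaultdict

def flag_corrupt(history):
    """history: list of (judge, evidence_id, decision) tuples."""
    by_evidence = defaultdict(list)
    for judge, evidence_id, decision in history:
        by_evidence[evidence_id].append((judge, decision))
    flagged = set()
    for entries in by_evidence.values():
        majority, _ = Counter(d for _, d in entries).most_common(1)[0]
        for judge, decision in entries:
            if decision != majority:
                flagged.add(judge)  # dissented from majority; may be unfair
    return flagged

def pick_panel(buyer_list, seller_list, flagged, k, seed=None):
    """Random sample of k judges both parties accept and the audit passed."""
    eligible = sorted((set(buyer_list) & set(seller_list)) - flagged)
    rng = random.Random(seed)
    return rng.sample(eligible, k)
```

A real system would need something stronger than majority comparison in `flag_corrupt` (e.g. checking decisions against the evidence itself), which is exactly the unsolved part.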
There are some difficulties with this, chiefly that a judge can only make decisions based on publicly available information. For example, you could in theory use it to place a bet on a future event, with the judges later voting on whether or not your bet came true.
The judges’ incentive is that the longer their history log of correct decisions, the more each judge is “worth” and the larger the fee they can charge.
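The fee incentive could be as simple as a function that grows with the length of the correct-decision record, so a single detected defection forfeits the accumulated premium. The constants below are illustrative assumptions, not part of the scheme as stated.

```python
# Sketch of the fee incentive: a judge's fee scales with the length of
# their record of correct decisions, up to a cap, so defecting once
# (and having the record zeroed) forfeits the accumulated premium.
# base_fee, growth, and cap are illustrative constants.

def judge_fee(correct_decisions, base_fee=1.0, growth=0.05, cap=10.0):
    """Fee grows linearly with the correct-decision count, up to a cap."""
    return min(base_fee * (1 + growth * correct_decisions), cap)
```

The exact shape of the curve matters less than the property that expected future fees from honest judging exceed any one-off bribe.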