If there are a large number of true-but-not-publicly-proven statements, does that impose a large computational cost on the market making mechanism?
I expect that the computers running this system might have to be fairly beefy, but they’re only checking proofs.
They’re not, though. They’re making markets on all the interrelated statements. How do they know when they’re done exhausting the standing limit orders and AMM liquidity pools? My working assumption is that this is equivalent to a full Bayesian network and explodes exponentially for all the same reasons. In practice it’s not maximally intractable, but you don’t avoid the exponential explosion either—it’s just slower than the theoretical worst case.
If every new order placed has to be checked against the limit orders on every existing market, you have a problem.
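To gesture at why I expect the explosion, here's a toy sketch (my own construction, not the actual mechanism): treat the statements as boolean variables with some known implications. Keeping prices coherent across them means reasoning over the set of logically consistent truth assignments, which is exponential in the number of statements in the worst case.

```python
# Toy sketch (invented for illustration, not the post's mechanism): coherent
# joint pricing of N logically related statements implicitly ranges over the
# set of consistent truth assignments ("worlds"), which can be 2**N large.
from itertools import product

def consistent_worlds(n_statements, implications):
    """Enumerate truth assignments consistent with known implications.

    implications: list of (a, b) pairs meaning "statement a implies statement b".
    Worst case we touch all 2**n_statements assignments.
    """
    return [world for world in product([False, True], repeat=n_statements)
            if all(not world[a] or world[b] for a, b in implications)]

# Three statements with 0 -> 1 and 1 -> 2: 4 of the 8 worlds survive, and a
# new order potentially has to be priced against this whole surviving set.
print(len(consistent_worlds(3, [(0, 1), (1, 2)])))  # 4
```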
For thickly traded propositions, I can make money by investing in a proposition first, then publishing a proof. That sends the price to $1 and I can make money off the difference. Usually, it would be more lucrative to keep my proof secret, though.
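With made-up numbers, the buy-then-publish play looks like this:

```python
# Invented numbers, just to make the buy-then-publish arbitrage concrete.
shares = 1000        # YES shares bought while the proof is still private
ask = 0.97           # market price per share before the proof is published
payout = 1.00        # each YES share pays $1 once the statement is provably true
profit = shares * (payout - ask)
print(f"spend ${shares * ask:.2f}, collect ${shares * payout:.2f}, pocket ${profit:.2f}")
# spend $970.00, collect $1000.00, pocket $30.00
```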
The problem I’m imagining comes when the market is trading at .999, but life would really be simplified for the market maker if the price were actually, provably, 1. Then it could stop tracking that price as something interesting, and stop worrying about the combinatorial explosion.
So you’d really like to find a world where once everyone has bothered to run the SAT-solver trick and figure out what route someone is minting free shares through, that just becomes common knowledge and everyone’s computational costs stop growing exponentially in that particular direction. And furthermore, the first person to figure out the exact route is actually rewarded for publishing it, rather than being able to extract money at slowly declining rates of return.
In other words: at what point does a random observer start turning “probably true, the market said so” into “definitely true, I can download the Coq proof”? And after that point, is the market maker still pretending to be ignorant?
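Here is a brute-force stand-in for the SAT-solver trick I have in mind (the prices and the consistency predicate are invented for illustration; a real version would hand the consistency question to an actual SAT solver rather than enumerating worlds): look for a bundle of shares whose worst-case payout, across every logically consistent world, exceeds its cost.

```python
# Brute-force stand-in for the "SAT-solver trick": find a bundle of YES shares
# whose guaranteed payout (in every logically consistent world) exceeds its
# total cost -- i.e. a route for minting free money. Prices and the
# consistency predicate below are invented for illustration.
from itertools import product, combinations

def find_free_bundle(asks, consistent):
    """asks: list of YES ask prices, one per statement.
    consistent: predicate on a tuple of booleans, True iff that world obeys
    the known logical relations among the statements."""
    n = len(asks)
    worlds = [w for w in product([False, True], repeat=n) if consistent(w)]
    for k in range(1, n + 1):
        for bundle in combinations(range(n), k):
            cost = sum(asks[s] for s in bundle)
            worst = min(sum(1 for s in bundle if w[s]) for w in worlds)
            if worst > cost:
                return bundle, cost, worst
    return None

# Toy example: statement 1 is (provably) the negation of statement 0, but the
# two order books haven't noticed; buying YES on both pays exactly $1 in every
# consistent world for $0.90 up front.
print(find_free_bundle([0.40, 0.50], lambda w: w[1] != w[0]))
# -> ((0, 1), 0.9, 1)
```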
They’re not, though. They’re making markets on all the interrelated statements. How do they know when they’re done exhausting the standing limit orders and AMM liquidity pools?
There’s no reason I can’t just say: “I’m going to implement the rules listed above, and if anyone else wants to be a market-maker, they can go ahead and do that”. The rules do prevent me from losing money (other than cases where I decide to subsidize a market). I think in some sense, this kind of market does run more on letting traders make their own deals, and less on each and every asset having a well-defined price at a given point in time. (Though market makers can maintain networks tracking which statements are equivalent, which helps with combining the limit orders on the different versions of the statement.)
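One way I could imagine that equivalence-tracking working (a sketch of the bookkeeping only, not a claim about the actual implementation): keep a union-find over statement IDs, merge classes whenever an equivalence proof is checked, and route all orders for a class to a single shared order book, so limit orders on different wordings of the same statement can match.

```python
# Sketch: union-find over statement IDs so that provably equivalent statements
# share one order book. Statement IDs and orders below are invented examples.
class EquivalenceBooks:
    def __init__(self):
        self.parent = {}   # statement id -> parent in the union-find forest
        self.books = {}    # class representative -> list of resting orders

    def _find(self, s):
        self.parent.setdefault(s, s)
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]  # path halving
            s = self.parent[s]
        return s

    def register_equivalence(self, a, b):
        """Call once a proof that a <-> b has been checked."""
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[rb] = ra
            # merge the order books so orders on either wording can match
            self.books[ra] = self.books.get(ra, []) + self.books.pop(rb, [])

    def place_order(self, statement, order):
        self.books.setdefault(self._find(statement), []).append(order)

books = EquivalenceBooks()
books.place_order("Goldbach-form-A", ("buy", 0.95, 10))
books.register_equivalence("Goldbach-form-A", "Goldbach-form-B")
books.place_order("Goldbach-form-B", ("sell", 0.97, 5))
print(books.books[books._find("Goldbach-form-A")])  # both orders in one book
```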
In other words: at what point does a random observer start turning “probably true, the market said so” into “definitely true, I can download the Coq proof”?
Good question, I’m still not sure how to handle this.
Thank you! Much to think about, but later...