Having re-read the posts and thought about it some more, I do think zero-sum competition could be applied to logical inductors to resolve the futarchy hack. It would require minor changes to the formalism to accommodate it, but I don’t see how those changes would break anything else.
Trying to think this through, I’ll write a bit of a braindump just in case that’s useful:
The futarchy hack can be split into two parts. The first is that conditioning on untaken actions makes most probabilities ill-defined: because there are no incentives to get these right, the market can settle into many equilibria. The second is that traders have various incentives to take advantage of this for their own interests.
With your technique, I think the approach would be to duplicate each trader into two traders with the same knowledge, and make their joint earnings zero-sum (sketched below).[1]
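A minimal sketch of what I mean, assuming a log scoring rule; the function and names are illustrative, not part of the LI formalism:

```python
import numpy as np

def paired_payoffs(p_a, p_b, outcome):
    """Score two copies of the same trader zero-sum against each other.

    p_a, p_b: probability vectors reported by the two copies
    outcome:  index of the realized outcome

    Each copy earns the difference between its log score and its twin's,
    so the pair's joint earnings are zero for every outcome. An action
    choice that merely makes the distribution easier to predict raises
    both raw scores equally and cancels out.
    """
    score_a = np.log(p_a[outcome])
    score_b = np.log(p_b[outcome])
    return score_a - score_b, score_b - score_a  # always sums to zero
```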
This removes one explicit incentive for a single trader to manipulate a value in order to cause a different action to be taken. But only the incentive to do so to make the distribution easier to predict and thereby improve its score. Potentially there are still other incentives, e.g. if the trader has preferences over the world, and these aren’t eliminated.
Why doesn’t this happen in LI already? LI is zero-sum overall, because there is a finite pool of wealth. But this pool is shared among traders with different knowledge. If there is a single wealthiest trader holding a particular piece of knowledge, it has an incentive to manipulate actions to reduce variance and thereby raise its score. So the problem is that LI isn’t zero-sum with respect to each piece of knowledge.
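To make that incentive concrete, a toy illustration, again assuming log scoring: a calibrated trader’s expected log score is just the negative entropy of the outcome distribution, so steering the world toward lower-variance outcomes raises its expected score:

```python
import numpy as np

def expected_log_score(p):
    # For a calibrated trader, E[log p(outcome)] = -entropy(p).
    return float(np.sum(p * np.log(p)))

high_var = np.array([0.5, 0.5])   # harder-to-predict action
low_var  = np.array([0.9, 0.1])   # easier-to-predict action

print(expected_log_score(high_var))  # ~ -0.693
print(expected_log_score(low_var))   # ~ -0.325, higher, so preferred
```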
But the first issue is entirely unresolved. The probabilities that condition on untaken actions will be path-dependent leftovers from LI’s convergence procedure, from when the market was more uncertain about which action would be taken. I’d expect these to be fairly reasonable, but they don’t have to be.
This reasonableness is coming from somewhere, though, and maybe it can be formalized.
You’d have to build a lot more structure into the LI traders to guarantee they can’t learn to cooperate and are myopic. But that seems doable, and it’s the sort of thing I’d want to do anyway.