Does the above paragraph mean that people with unique preferences and crazy beliefs eventually end up without having their preferences respected (whereas someone with unique preferences and accurate beliefs would still have their preferences respected)?
Yes. This might be too harsh. The “libertarian” argument in favor of it is: who are you to keep someone from betting away all of their credit in the system? If you make a rule preventing this, agents will tend to look for some way around it. And if you just give free credit to agents who are completely out of credit, you harm the calibration of the system by reducing agents’ incentive to bet sanely.
On the other hand, there may well be a serious game-theoretic reason why it is “too harsh”: someone who is getting to cooperation from the system has no reason to cooperate in turn. I’m curious if a CCT-adjacent formalism could capture this (or some other reason to be gentler). That would be the kind of thing which might have interesting analogues when we try to import insights back into decision theory.
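To make the incentive story concrete, here is a toy sketch (my own construction, not Critch’s actual formalism; the agent names, the `stake_fraction`, and the `bailout` parameter are all hypothetical). Agents repeatedly stake a fraction of their remaining credit on binary predictions, so a chronically miscalibrated agent’s share of the total credit, and hence its weight in the system, falls toward zero; setting `bailout` above zero models giving free credit to agents who are completely out, which is exactly what dulls the incentive to bet sanely.

```python
import random

# Toy model (hypothetical, not Critch's mechanism): each round, every agent
# stakes a fixed fraction of its remaining credit on a binary prediction.
# Correct bets gain the stake, incorrect bets lose it, and the system weights
# each agent by its share of the total credit.

class Agent:
    def __init__(self, name, accuracy, credit=1.0):
        self.name = name
        self.accuracy = accuracy   # probability of predicting the true outcome
        self.credit = credit       # stake in the system; everyone starts equal

    def predict(self, truth):
        return truth if random.random() < self.accuracy else (not truth)

def run_market(agents, rounds=200, stake_fraction=0.2, bailout=0.0):
    """Simulate repeated bets; `bailout` > 0 tops up busted agents."""
    for _ in range(rounds):
        truth = random.random() < 0.5
        for a in agents:
            stake = a.credit * stake_fraction
            a.credit += stake if a.predict(truth) == truth else -stake
            a.credit = max(a.credit, bailout)   # optional free credit
    total = sum(a.credit for a in agents)
    return {a.name: a.credit / total for a in agents}

random.seed(0)
agents = [Agent("accurate", accuracy=0.8), Agent("crazy", accuracy=0.3)]
print(run_market(agents))   # the miscalibrated agent's weight drops toward zero
```

With `bailout=0.0` the miscalibrated agent ends up with essentially no say; raising it keeps such agents in the game but weakens the penalty for betting badly, which is the calibration worry above.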
Also, do we have to treat the agents as well-calibrated across all domains? Or is the system able to learn that their thoughts should be given weight in some circumstances and not others?
In the formalism, no: you just win or lose points across all domains. Realistically, it seems prudent to introduce something like domain-specific weighting.
A possible fix to the above is that individual agents could do this subject-specific evaluation of other agents themselves, updating their credences based on partially accurate agents, so the information still gets preserved.
That’s exactly what could happen in a logical-induction-like setting.
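A minimal sketch of that kind of subject-specific evaluation (again hypothetical; the `Observer` class, the domains, and the Laplace smoothing are my own assumptions, not part of the formalism or of logical induction proper): an observing agent keeps a per-domain track record for each other agent and weights their reports accordingly, so a partially accurate agent still gets weight in the domain where it has earned it, even if its overall record is poor.

```python
from collections import defaultdict

# Hypothetical subject-specific evaluation: keep a per-(agent, domain)
# accuracy record instead of one global score, and use it to weight reports.

class Observer:
    def __init__(self):
        self.hits = defaultdict(int)    # (agent, domain) -> correct reports
        self.trials = defaultdict(int)  # (agent, domain) -> total reports

    def record(self, agent, domain, was_correct):
        self.trials[(agent, domain)] += 1
        self.hits[(agent, domain)] += int(was_correct)

    def weight(self, agent, domain):
        # Laplace-smoothed per-domain accuracy, used as a credence weight.
        return (self.hits[(agent, domain)] + 1) / (self.trials[(agent, domain)] + 2)

    def pooled_credence(self, reports, domain):
        # Weighted average of the agents' reported probabilities for one claim.
        total = sum(self.weight(agent, domain) for agent, _ in reports)
        return sum(self.weight(agent, domain) * p for agent, p in reports) / total

obs = Observer()
# "alice" is reliable about weather but not politics; "bob" is the reverse.
for _ in range(50):
    obs.record("alice", "weather", True)
    obs.record("alice", "politics", False)
    obs.record("bob", "weather", False)
    obs.record("bob", "politics", True)

print(obs.pooled_credence([("alice", 0.9), ("bob", 0.2)], "weather"))   # leans toward alice
print(obs.pooled_credence([("alice", 0.9), ("bob", 0.2)], "politics"))  # leans toward bob
```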
Could there be double-counting when both Critch’s mechanism and other agents pick up on the accuracy of an agent?
There might temporarily be all sorts of crazy stuff like this, but we know it would (somehow) self-correct eventually.
Regarding “someone who is getting to cooperation from the system has no reason to cooperate in turn”: is there a typo here (“getting to cooperation” → “getting no cooperation”)? And the idea is that there are other ways of making an impact on the world than the decisions of the “futarchy”, so people who have no stake in the futarchy could mess things up in other ways, right?