The obvious answer from this crowd is some kind of prediction market, with the “group charter” being turned into a measurable utility function with which to make the judgments about the success or failure of a policy. If people are restricted to using only money from an equal “allowance”, plus whatever they have earned from predictions, over time those who have made more accurate predictions gain the most influence on the outcomes of the decisions.
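To make the mechanism concrete, here is a toy sketch (my own illustration, not a formal futarchy spec) of such an allowance-based market: everyone starts with the same stake, members who predicted correctly split the stakes of those who were wrong pro rata, and decision weight simply tracks each member's current balance.

```python
def settle_round(balances, bets, outcome):
    """Settle one prediction round.

    balances: {member: current balance}
    bets:     {member: (predicted_outcome, stake)}
    Winners split the losers' stakes in proportion to their own stakes.
    """
    losers_pot = sum(stake for pred, stake in bets.values() if pred != outcome)
    winning_stake = sum(stake for pred, stake in bets.values() if pred == outcome)
    if winning_stake == 0:
        return balances  # nobody was right; leave balances untouched
    for member, (pred, stake) in bets.items():
        if pred == outcome:
            balances[member] += losers_pot * stake / winning_stake
        else:
            balances[member] -= stake
    return balances

balances = {"alice": 10.0, "bob": 10.0, "carol": 10.0}
bets = {"alice": (True, 5.0), "bob": (True, 2.0), "carol": (False, 6.0)}
settle_round(balances, bets, outcome=True)
# alice and bob split carol's lost 6.0 in proportion 5:2, so over repeated
# rounds better predictors accumulate balance, i.e. influence.
```

The member names and stake sizes are made up; the point is only that total balance is conserved while it migrates toward accurate predictors.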
Depending on the size of the group, such a system has a potentially catastrophic side-effect: suppose we have to decide between two techniques for solving a problem. I predict that technique A will not work, but the others disagree, and the group finally decides to go with technique A. What is my interest now? To make sure the group fails, so that my prediction comes true.
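With made-up numbers, the dissident's financial position can be written out explicitly (the stakes here are hypothetical, chosen only to show the sign of the incentive):

```python
# Hypothetical stakes: the dissident bet 5.0 against technique A, and the
# rest of the group staked 7.0 on A working, which the dissident wins if A fails.
stake_against_A = 5.0
payout_if_A_fails = 7.0

# Market payoff of each outcome, ignoring any outside interest the
# dissident has in the group's success:
payoff_if_A_succeeds = -stake_against_A   # the bet is lost
payoff_if_A_fails = payout_if_A_fails     # the bet pays off

# Once the group commits to A, the dissident's open position rewards
# sabotage relative to helping A succeed by the full swing between the two:
incentive_to_sabotage = payoff_if_A_fails - payoff_if_A_succeeds  # 12.0
```

Unless the dissident's unmodeled stake in the group's success outweighs that swing, the market position actively pays for sabotage.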
A fundamental requirement of group decision-making is ensuring that the "dissidents" will still do their best to make the group succeed. Prediction markets may work when you cannot really change the outcome of the prediction (when you predict what others will do), but not for (relatively small) group decisions where you will be part of the process that ultimately succeeds or fails.
Robin Hanson has said that prediction markets have historically been extremely resilient against manipulation attempts. Historical markets are mostly those where the “bettors on beliefs” do not have a personal stake in the success of “technique A,” like a group member would—so it seems like this futarchist method is overall better than historical group decision-making methods, even if there are some perverse incentive problems.
There are two problems with this that I see—one specific and one general.
The specific one is that one-level systems don’t handle politics very well. For example, say a person or subgroup in your system accumulates most of the “control resource.” What’s to stop them from doing a bunch of political bullshit?
The general problem is that this system assumes you’ve already managed to agree on a measurable utility function, and so breaks down when the group has to somehow agree on a utility function.
There are two factors I can think of straight away that could prevent an imbalance of power from disrupting results. Firstly, the system is perpetually growing with the allowance, so unless only one person or subgroup is making good predictions, there should be some balance. This is not guaranteed, but I expect it to be the case. Otherwise, the less likely corrective is that everyone uses whatever allowance they haven't lost to cash in on the false predictions the rogue individual or subgroup is making. I do not expect this to be a consistent safety net, since the people without much power will have made bad predictions in the past.
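The first factor can be illustrated with a toy model (my own assumption about how the "perpetual allowance" works): if every member receives the same periodic top-up, a dominant member's *share* of total influence decays over time unless they keep winning bets.

```python
def add_allowance(balances, allowance):
    """Give every member the same periodic top-up."""
    return {member: bal + allowance for member, bal in balances.items()}

def influence_share(balances, member):
    """A member's fraction of the total balance, i.e. of decision weight."""
    return balances[member] / sum(balances.values())

# Hypothetical starting point: one member holds 90% of the balance.
balances = {"dominant": 90.0, "a": 5.0, "b": 5.0}
share_before = influence_share(balances, "dominant")  # 0.9

# Ten rounds of equal allowance, with no further bets placed:
for _ in range(10):
    balances = add_allowance(balances, allowance=10.0)
share_after = influence_share(balances, "dominant")   # diluted to 0.475
```

Under these made-up numbers the dominant share halves in ten rounds of idle allowance, which is the "perpetual growth" rebalancing effect; of course, a dominant member who keeps predicting well can outrun the dilution.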
And you’re right, this does not help choose a utility function. In Robin Hanson’s Futarchy proposal he advocates having elected representatives choose the utility function, and seems to dismiss the problems with that by saying that “By limiting democracy primarily to values, we would presumably focus voters more on expressing their values, rather than their beliefs.” If we did have elected representatives, I think they would create a utility function that explicitly encourages policies they support. I haven’t thought of a solution to this yet.