There are two problems with this that I see—one specific and one general.
The specific one is that one-level systems don’t handle politics very well. For example, say a person or subgroup in your system accumulates most of the “control resource.” What’s to stop them from doing a bunch of political bullshit?
The general problem is that this system assumes you’ve already managed to agree on a measurable utility function, and so breaks down when the group has to somehow agree on a utility function.
There are two factors I can think of straight away that could prevent an imbalance of power from disrupting results. First, the system is perpetually growing with the allowance, so unless only one person or subgroup is making good predictions, there should be some balance. This is not guaranteed, but I expect it to be the case. Otherwise, the less likely safeguard is that everyone uses whatever allowance they haven't lost to cash in on the false predictions the rogue individual or subgroup is making. I don't expect this to be a consistent safety net, since the people without much power will have made bad predictions in the past.
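To make the dilution point concrete, here is a minimal sketch of the first factor. All the numbers and the flat per-round allowance are my own assumptions, not part of any specific proposal; it just shows that if everyone receives the same allowance each round, a dominant holding of the control resource shrinks as a share of the total unless its holder keeps winning predictions:

```python
# Hypothetical setup: 100 participants, one starting with 90% of the
# control resource. Each round, every participant receives the same
# flat allowance; nobody wins or loses anything through predictions.
participants = 100
allowance = 1.0  # resource issued to each participant per round
holdings = [90.0] + [10.0 / (participants - 1)] * (participants - 1)

for round_number in range(1, 101):
    holdings = [h + allowance for h in holdings]  # perpetual growth
    dominant_share = holdings[0] / sum(holdings)
    if round_number in (1, 10, 50, 100):
        print(f"round {round_number:3d}: dominant share = {dominant_share:.1%}")
```

Under these assumptions the dominant share falls from 90% to roughly 9% after ten rounds and under 2% after a hundred, so concentration only persists if the dominant party also keeps outpredicting everyone else.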
And you’re right, this does not help choose a utility function. In Robin Hanson’s Futarchy proposal he advocates having elected representatives choose the utility function, and seems to dismiss the problems with that by saying, “By limiting democracy primarily to values, we would presumably focus voters more on expressing their values, rather than their beliefs.” If we did have elected representatives, I think they would create a utility function that explicitly encourages policies they support. I haven’t thought of a solution to this yet.