Have you got anything (blog posts) up already about what you are doing? I’m slowly making a website about alternative ways for groups to decide who makes the decisions and would like to include it. My own first stab is control markets, but I am very interested in others!
That’s interesting; I’m approaching it from the perspective of welfare economics, not computer science, but the approaches you are describing sound promising. I’ll need to look into them more. The problem is that there is a wide gulf between making decisions and delegating someone to do so.
My view is that if we have no metric for assessing whether a decision is good, it’s hard to talk about making good decisions. We need coherent metrics, and partial orderings like Pareto dominance are only useful when we constrain the decision models to fit what our math can handle! (Instead of handling whatever portions of reality we can using our math, and admitting that the world is more complex than we can currently model.)
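To make the Pareto point concrete, here is a minimal sketch (the agents and utility numbers are invented for illustration): Pareto dominance can rank one pair of outcomes but leaves another pair incomparable, which is exactly where a partial ordering stops being useful on its own.

```python
# Pareto dominance as a partial order: `a` dominates `b` if it is at least as
# good for every agent and strictly better for at least one.
def pareto_dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Per-agent utilities for three candidate outcomes (illustrative numbers).
x = (3, 2, 5)
y = (2, 2, 4)   # worse for agents 1 and 3, equal for agent 2
z = (5, 1, 5)   # better for agent 1, worse for agent 2

print(pareto_dominates(x, y))                          # True: x unambiguously beats y
print(pareto_dominates(x, z), pareto_dominates(z, x))  # False False: x and z are incomparable
```

The second comparison is the common case in practice; some scalar metric (or extra assumptions) is needed before “better decision” means anything for that pair.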
Just reading up on welfare economics a bit, so apologies if I say anything incorrect. I have a lot to read up on!
The problem is that there is a wide gulf between making decisions and delegating someone to do so.
True. My approach does rely on delegating decisions to actors. The open question in my mind is whether we can create systems such that actors are encouraged to make good decisions or to adopt good decision-making processes*.
From my point of view some form of welfare economics (or decision markets) may be one of the processes that actors would have incentives to adopt. But it may well be able to stand on its own two feet.
*And decision processes would be evaluated on the general situation they produce, rather than on specific decisions.
At the moment I would ideally sum the normalised deltas in the utility of a situation for each agent. But that is the weakest part of the system; it is somewhat open to manipulation.
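As a rough sketch of one reading of “sum normalised deltas in utility of a situation for an agent” (the min-max normalisation and the data layout here are assumptions for illustration, not a settled design):

```python
# One possible reading: each agent reports utility before and after a decision,
# each delta is normalised to that agent's own utility range, and the decision
# is scored by the sum across agents. The normalisation scheme is an assumption.
def score_decision(before, after, ranges):
    """before, after: agent -> reported utility; ranges: agent -> (lo, hi)."""
    total = 0.0
    for agent, u_before in before.items():
        lo, hi = ranges[agent]
        total += (after[agent] - u_before) / (hi - lo)  # normalised delta
    return total

# Example: two agents, one decision.
before = {"a": 4.0, "b": 7.0}
after  = {"a": 6.0, "b": 6.5}
ranges = {"a": (0.0, 10.0), "b": (0.0, 10.0)}
print(score_decision(before, after, ranges))  # 0.2 + (-0.05) ≈ 0.15
```

The manipulation worry shows up immediately: an agent that understates its utility range or shifts its baseline report inflates the weight its own delta carries in the sum.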
The problem with electing (human) agents is that you suddenly have principal-agent problems. Their priorities change if they gain status from being selected, whether it’s because they want to be re-selected, because their time in power is limited, or because it is unlimited. If they don’t gain anything by being selected, they are likely to have no incentive to invest in making optimal decisions.
Even if this is untrue, you need to project their decisions, typically by assuming you know something about their utility; if this projection is mis-specified even slightly, the difference can be catastrophic; their utility also may not be stationary. So there are some issues there, but they are interesting ones.
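A toy illustration of the mis-specification point (the options, the linear utility, and all the numbers are invented): a small error in the assumed weight the agent puts on status flips the predicted choice, and the welfare of the realised outcome falls sharply.

```python
# Two options, each with a welfare payoff and a status payoff for the agent.
options = {
    "A": {"welfare": 10.0, "status": 0.0},
    "B": {"welfare": 2.0,  "status": 3.0},
}

def choose(weights):
    """The option the agent picks under a linear utility with the given weights."""
    return max(options, key=lambda o: sum(w * options[o][k] for k, w in weights.items()))

assumed = {"welfare": 1.0, "status": 2.5}  # the principal's model of the agent
actual  = {"welfare": 1.0, "status": 3.0}  # the agent's slightly higher taste for status

print(choose(assumed))  # "A": the projected decision, welfare 10
print(choose(actual))   # "B": the actual decision, welfare 2
```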
Thanks! I suspected there was terminology for the principal-agent problem, but didn’t know what to google.
Agreed that it’s a big issue. I suppose I am interested in whether we can ameliorate these problems, so that there are fewer of them, rather than eliminate them entirely.
I’ll keep an eye out for any follow-ups you do to this meetup.