It seems that to lift up one candidate to the top of our ballots is to implicitly express a judgement that they’re better than every other candidate in the contest. Problem is, most of us know nothing about most of the candidates in most contests. We shouldn’t be expressing things like “A > B > all others”. We should prefer to say “A > B” and leave every other comparison grey. Let other voters, who actually know about C and D and E, fill in the rest of the picture. There might be candidates who are better than A. We shouldn’t pretend to know.
So my question is… do you know of any systems that incentivise, or at least allow, voters to express their uncertainties?
I’m seeing Dark Horses everywhere when I think about specific ways of implementing that.
[I’m having some thoughts about collaborative filters/content ranking/recommender systems. I’m not sure how relevant they are to voting theory in elections (related, I suppose), so I won’t omit them.
Imagine a gang of <bad people who do not represent the demographic of the site> going to every old thread that not many people are paying attention to, posting a comment espousing <bad> views, then casting, say, 10 Comparison Judgements saying “our <bad> comment is better than the incumbent top comment”. When you break up the task of ranking the whole set, so that we can each focus on comparing the candidates we know, leaving the rest to others, non-representative sections of the population will dominate some sections of the job, malicious or not. When they can create any number of new sections at will, at the top of the ranking, in ways people are unlikely to notice quickly enough to fix, that will create many problems. I suppose… the solution is to monitor the way the full ranking is being broken up, in some way, and look for skewed samples?
This might be easier to tackle in collaborative filtering if the system had some general way of noticing that the <bad> users are generally, reliably very different from <normal> users, that <bad> users and <normal> users don’t generally like the same things, and that the preferences of <bad> users shouldn’t be considered in forming content rankings for <normal> users. I think a lot could be done with that. I know that sounds like it would thicken our tribal filter bubbles a whole lot, but once we can name and characterise the people of each filter bubble, it becomes a lot easier to figure out how to bring them back together. They were already apart, you see. Identifying them doesn’t make it worse. Once you can identify them, detecting shared ground between hostile tribes becomes trivial. Being able to identify judgements that even very different sorts of people agree on could be generally useful as a heuristic for finding objective truths. I don’t know what would happen if you told the system to strongly amplify anything that falls into those hot intersections. It’s not inconceivable to me that it’d give rise to some unprecedented sort of harmony.]
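Here’s a toy sketch of the shape that could take (the ratings matrix and the deliberately crude two-group split are entirely invented for illustration, not a real recommender): split users by preference similarity, then look for items both groups rate well.

```python
import numpy as np

# Invented ratings: users x items, np.nan = never rated.
R = np.array([
    [5, 4, np.nan, 1, 4],
    [4, 5, 1,      0, 5],
    [0, 1, 5,      5, 4],
    [1, np.nan, 4, 5, 5],
], dtype=float)

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Crude two-"tribe" split: does each user look more like user 0 or user 2?
# (A real system would cluster properly; this just makes the groups concrete.)
groups = {"A": [], "B": []}
for i in range(R.shape[0]):
    groups["A" if cosine(R[i], R[0]) >= cosine(R[i], R[2]) else "B"].append(i)

def group_mean(rows, item):
    vals = [R[r, item] for r in rows if not np.isnan(R[r, item])]
    return float(np.mean(vals)) if vals else None

# "Hot intersections": items both tribes rate well despite disagreeing elsewhere.
for item in range(R.shape[1]):
    means = {g: group_mean(rows, item) for g, rows in groups.items()}
    if all(m is not None and m >= 3 for m in means.values()):
        print(f"item {item} is liked by both groups: {means}")
# With these made-up numbers, only the last item clears the bar in both groups.
```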
There are indeed various methods which allow for expressing uncertainty, in principle. For instance, in score voting, you can count average instead of total score. Similar adjustments work for Bucklin, Condorcet, 3-2-1, STAR, etc.
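A toy sketch of what “count the average instead of the total” could look like on score ballots (the ballots, candidate names, and 0–5 scale are invented for illustration):

```python
# None = the voter left that candidate blank, i.e. expressed no opinion.
ballots = [
    {"A": 5, "B": 3, "C": None},
    {"A": 4, "B": 0, "C": None},
    {"A": 1, "B": 2, "C": 5},   # only one voter knows anything about C
]

def total_score(cand):
    # blanks effectively count as the minimum, dragging unknowns down
    return sum(b[cand] if b[cand] is not None else 0 for b in ballots)

def average_score(cand):
    # blanks are skipped, so unknowns are judged only by those who know them
    scores = [b[cand] for b in ballots if b[cand] is not None]
    return sum(scores) / len(scores)

for cand in ("A", "B", "C"):
    print(cand, total_score(cand), round(average_score(cand), 2))
# A 10 3.33 / B 5 1.67 / C 5 5.0 — under totals C's obscurity is fatal;
# under plain averages C wins outright on a single ballot.
```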
The problem is, as you say, the “Dark Horse” one, which in this case can be seen as a form of selection bias: if those who like a candidate are more likely to give them a score than those who hate them, the average is biased upwards, so it becomes advantageous to be unknown.
The best way to deal with this problem is a “soft quota” in the form of N pseudo-votes against every candidate. More-or-less equivalently, you can have every blank ballot count as X% of a vote against.
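A minimal sketch of the pseudo-vote version, on the same kind of invented 0–5 scale (the value of N and the scores are illustration values, not a recommendation):

```python
N = 2          # pseudo-votes "against" every candidate
MIN_SCORE = 0  # bottom of the scale

# scores actually cast for each candidate (blanks simply absent)
cast = {
    "A": [5, 4, 1],   # broadly known, broadly liked
    "B": [3, 0, 2],   # broadly known, lukewarm
    "C": [5],         # dark horse: one enthusiastic voter, everyone else blank
}

def soft_quota_average(scores):
    # fold N pseudo-votes at the bottom of the scale into the average
    return (sum(scores) + N * MIN_SCORE) / (len(scores) + N)

for cand, scores in cast.items():
    print(cand, round(soft_quota_average(scores), 2))
# A 2.0, B 1.0, C 1.67 — the plain average would have put C on top (5.0 vs 3.33);
# the pseudo-votes pull the barely-known C back below the broadly-supported A.
```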
It’s usually my feeling that as a practical matter, these kinds of extra epicycles in a voting method add more complexity than they’re worth, but I can imagine situations (such as a highly geeky target population, or conversely one that doesn’t care enough about the voting method to notice the complexity) where this might be worth it.
For instance, in score voting, you can count average instead of total score.
I’m not sure I understand how that would do it. I’ve thought of a way it could, but I’ve had to fill in so much on my own that I should ask and make sure this is what you were thinking of. So, the best scheme I can think of for averaged score voting still punishes unknowns by averaging their scores towards zero (or whatever you’ve stipulated as the default score for unmentioned candidates) every time a ballot doesn’t mention them, but if the lowest assignable score were, say, negative ten, a person would not be punished as severely for being unknown as they would be for being known and hated. Have I got that right? And that’s why it’s better: not because it’s actually fair to unknowns, but because it’s less unfair?
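To put toy numbers on that reading (a −10..+10 scale with 100 ballots, blanks filled with a default score of 0; all of this is invented for illustration):

```python
DEFAULT = 0  # score contributed by every ballot that doesn't mention the candidate

def default_fill_average(scores, n_blank):
    """Average where blanks are filled in with DEFAULT."""
    return (sum(scores) + DEFAULT * n_blank) / (len(scores) + n_blank)

loved_and_known   = default_fill_average([8] * 100,  n_blank=0)    #   8.0
loved_but_unknown = default_fill_average([8] * 10,   n_blank=90)   #   0.8
hated_and_known   = default_fill_average([-10] * 100, n_blank=0)   # -10.0
print(loved_and_known, loved_but_unknown, hated_and_known)
# Obscurity drags the loved-but-unknown candidate most of the way to 0,
# but still leaves them well above the known-and-hated one.
```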
[Hmm, would it be a good idea to make the range of available scores uneven, for instance [5, −10], so that ignorers and approvers have less of an effect than opponents (or the other way around, if opponents seem to be less judicious on average than approvers, whichever)?]
The best way to deal with this problem is a “soft quota” in the form of N pseudo-votes against every candidate. More-or-less equivalently, you can have every blank ballot count as X% of a vote against.
But that’s just artificially amplifying the bias against unknowns, isn’t it? Have you had so much trouble with dark horses that you’ve come to treat obscurity as a heuristic for inferiority? You know what else is obscure? Actual qualifications. Most voters don’t have them and don’t know how to recognise them. I worry that we will find the best domain experts in economics, civics, and (most importantly for a head of state) social preference aggregation, buried so deeply in the Dark Horse pile that under present systems they are not even bothering to put their names into the hat.
[Hmm, back to recommender systems, because I’ve noticed a concise way to say this: it’s reasonable to be less concerned about Dark Horses in recommender systems, because we will have the opportunity to measure and control how the set {people who know about the candidate} came to be. We know a lot about how people came to know the candidate, because if they are using our platform for its intended purpose, it was probably us who introduced them to it. We will know a lot more (if not everything there is to know) about the forces that bias the voting group, so we can partially correct for them.]
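If the exposure logs really were that good, one crude partial correction could be inverse-exposure weighting; a toy sketch (the function, numbers, and probabilities are my own invention):

```python
def exposure_corrected_mean(votes):
    """votes: list of (score, p_exposed) pairs, where p_exposed in (0, 1] is our
    estimate of how likely that user was to encounter the item at all."""
    num = sum(score / p for score, p in votes)
    den = sum(1.0 / p for _, p in votes)
    return num / den

# Two fans the recommender itself funnelled toward the item, one organic discoverer.
votes = [(9, 0.9), (8, 0.9), (2, 0.1)]
print(round(exposure_corrected_mean(votes), 2))  # ~3.18, vs a naive mean of ~6.33
```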
I’m going to keep talking about recommender systems, because it’s not obvious to me that election systems and general purpose demography analysis, discovery, and discussion systems should be distinct.