Update on preference graph order recovery
I decided to stop thinking about the Copeland method (the method where you count how many victories each candidate has had and sort everyone by that count). They don’t mention it in the analysis (pricks!) but the flaw is so obvious I’m not gonna be humble about this.
Say you have a multiset of order judgements like this:
< = { (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (p u) (p u) (p u) (p u) }
It’s a situation where the candidate “s” is a strawman. No one actually thinks s is good. It isn’t relevant, and we probably shouldn’t be discussing it. (But we must discuss it, because no informed process is setting the agenda, and this system will be responsible for fixing the agenda. Being able to operate in a situation where the collective’s attention is misdirected is mandatory.)
p is popular. p is better than the strawman, but that isn’t saying much.
u is the ultimate, and is known by some to be better than p in every way. There is no controversy about that, among those who know u.
Under the Copeland method, u still loses to p, because p has fought more times and won more of those fights.
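To make the tally explicit, here’s a minimal Python sketch of that win-counting reading of Copeland on the judgement set above (the names `judgements` and `win_counts` are just placeholders of mine):

```python
from collections import Counter

# The judgement multiset from above: each (loser, winner) pair is one "x < y" judgement.
judgements = [("s", "p")] * 9 + [("p", "u")] * 4

def win_counts(judgements):
    """Win-count reading of Copeland: one point per judgement won."""
    wins = Counter()
    for loser, winner in judgements:
        wins[winner] += 1
        wins.setdefault(loser, 0)  # losers still appear in the ranking, with 0 wins
    return wins

scores = win_counts(judgements)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# [('p', 9), ('u', 4), ('s', 0)] -- p outranks u on sheer volume of fights
```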
The Copeland method is just another popularity contest. It is not meritocratic. It cannot overturn an incumbency by helping a few trusted seekers to spread word about their finding. It does not spread findings. It cannot help new things rise to prominence. Disregard the Copeland method.
---
A couple of days ago I started thinking about defining a metric by treating every edge in the graph (every judgement) as having a “charge”, then defining a way of reducing serial wires and a way of reducing parallel wires, then getting the total charge between each pair of points and assembling that into a ranking. (It’ll have time complexity O(n^3) at first, but I can think of lots of ways to optimise that, and I wouldn’t expect much better from a formal objective measure.)
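As a rough sketch of the first step, assuming (my assumption, nothing settled above) that each “x < y” judgement simply deposits one unit of charge on the directed edge x → y:

```python
from collections import defaultdict

# Assumption: one unit of charge per judgement, on the directed edge loser -> winner.
# The right charge assignment (and how to treat direction) is still open.
def build_charge_graph(judgements):
    charge = defaultdict(float)
    for loser, winner in judgements:
        charge[(loser, winner)] += 1.0
    return dict(charge)

graph = build_charge_graph([("s", "p")] * 9 + [("p", "u")] * 4)
# {('s', 'p'): 9.0, ('p', 'u'): 4.0}
```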
Finding serial and parallel reducers with the right properties didn’t seem difficult (I’m currently looking at parallel(a, b) → a + b and serial(a, b) → 1/(1/a + 1/b)). That was very exciting to realise. The current problem is that it’s not clear every tangle can be trivially reduced to an expression of parallels and serials; consider the paths between the top-left and bottom-right nodes in a network shaped like “▥”, for instance.
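The reducers themselves are tiny. A sketch of how they’d collapse a small example (the direct s → u edge here is hypothetical, just to give the parallel rule something to do):

```python
def parallel(a, b):
    # Independent routes between the same two points add their charge.
    return a + b

def serial(a, b):
    # A chain combines harmonically: it can't carry more than its weaker links allow.
    return 1.0 / (1.0 / a + 1.0 / b)

# Chain s -> p -> u with charges 9 and 4, plus a hypothetical direct s -> u edge of charge 1.
chain = serial(9.0, 4.0)       # ~2.77: the two-hop route through p
total = parallel(chain, 1.0)   # ~3.77: add the direct route in parallel
print(chain, total)
```

The trouble with “▥” is that after the two obvious series steps (through the two corners that aren’t terminals) you’re left with the Wheatstone-bridge configuration, where no remaining pair of edges is purely in series or purely in parallel relative to the terminals, so these two rules alone never fire again.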
Calculating the conductance between two points in a tangled circuit may be a good analogy here… and I have a little intuition that this would be NP-hard in the most general case despite being deceptively tractable in real-world cases. Someone here might be able to dismiss or confirm that. I’m sure it’s been studied, but I can’t find a general method, nor a proof of hardness.
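For the electrical side of the analogy at least, a concrete instance can be checked numerically by holding the two terminals at potentials 1 and 0 and solving Kirchhoff’s current law at the interior nodes; whether that kind of calculation is the right generalisation of the pairwise charge measure here is exactly the part I’m unsure about. A sketch on the bridge left over from the “▥” example, with made-up unit charges on every edge:

```python
import numpy as np

# The bridge left after the easy series steps on the "▥" example; unit charge on every edge.
# Node 0 is the source terminal, node 3 the sink, nodes 1 and 2 are interior.
edges = [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0), (1, 3, 1.0), (2, 3, 1.0)]

n = 4
L = np.zeros((n, n))  # Laplacian weighted by edge charge (conductance)
for a, b, g in edges:
    L[a, a] += g
    L[b, b] += g
    L[a, b] -= g
    L[b, a] -= g

# Hold the terminals at potentials 1 and 0, solve Kirchhoff's current law at the interior nodes.
interior = [1, 2]
A = L[np.ix_(interior, interior)]
rhs = -L[np.ix_(interior, [0])].flatten() * 1.0   # contribution of the source held at potential 1
v_interior = np.linalg.solve(A, rhs)
v = np.array([1.0, v_interior[0], v_interior[1], 0.0])

# Effective "total charge" between the terminals = current leaving the source.
current = sum(g * (v[a] - v[b]) for a, b, g in edges if a == 0)
print(current)  # 1.0 for this symmetric bridge
```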
If it is NP-hard, that would make this less obviously useful as a formal measure sufficient for use in elections.