Much of my current research (in philosophy, at LSE) concerns the general themes of “objectivity” and (strategies for strengthening) “co-operation”, especially in politics. I didn’t start doing research on these themes because of any concern with existential risk. However, it could be argued that in order to reduce x-risk, the political system needs to be improved. People need to become less tribal, more co-operative, and more inclined to accept rational arguments, both between and within nation states (though I mostly do research on the latter). In any case, here is what I’m working on, or considering working on, in more precise terms:
1) Strategies for detecting tribalism. People’s beliefs on independent but politically controversial questions, such as to what extent stricter gun laws would reduce the number of homicides and to what extent climate change is man-made, tend to be “suspiciously coherent” (i.e. either you take the pro-Republican position on all of these questions, or the pro-Democrat position on all of them). The best explanation of this is that most people acquire whatever empirical beliefs the majority of their fellow tribe members hold instead of considering the actual evidence. I’m developing statistical techniques intended to detect this sort of tribalism or bias. For instance, people could take tests of their degree of political bias. Alternatively, you could try to read off their degree of bias from existing data. Making these inferences sufficiently precise and reliable promises to be a tough task, however. (A toy version of such a coherence measure is sketched after this list.)
2) Strategies for detecting “degrees of selfishness”. This strategy is quite similar, but rather than testing the correlation between your empirical beliefs on controversial questions and those of the party mainstream, what is tested is the correlation between your opinions on policy and the policies that suit your interests. For instance, if you are male, have a high income, drive a lot and don’t smoke, and at the same time take an anti-feminist stance, are against progressive taxes, are against petrol taxes, and want to outlaw smoking, you would be given a high “selfishness score” (probably this score should be given another, less toxic, name). This would serve to highlight selfish behaviour among voters and politicians and promote objective and altruistic behaviour. (A sketch of such a score also appears after this list.)
3) Voting Advice Applications (VAAs), i.e. tests of which party is closest to your own views, are already being used to try to increase interest in politics and make people vote more on the basis of policy issues, and less on more emotional factors such as which politician they find most attractive or which party enjoys success at the moment (the bandwagon effect). However, most voting advice applications are still appallingly bad, since many important questions are typically left out. Hence it’s quite rational, in my opinion, for voters to discard their advice. I’d like to try to construct a radically improved VAA, which would be more than just a toy. Instead, the goal would be to construct a test which would be better at identifying which party best satisfies the voters’ considered preferences than the voters’ intuitive judgments are. If people then actually used these VAAs, this would, hopefully, lead to the politicians whose policies correspond most closely to those of the voters getting elected, as is intended in a democracy, and to politics getting more rational in general. The downside of this is that it is very hard to do in practice and that the market for VAAs is big. (The basic matching arithmetic is sketched after this list.)
4) Systematically criticizing politicians’ and other influential people’s arguments. This could be done either by professionals (e.g. philosophers) or on a wiki-like webpage, something that is described here. What would be great would be if you somehow could gamify this; e.g., if, in election debates, referees would give and deduct points in real time, and viewers could see this (e.g. through an app) instantaneously while watching the debate. (A minimal version of such a live tally is sketched below.)
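As an illustration of idea 1), here is a minimal sketch of a “suspicious coherence” measure. It is a toy, not the actual statistical machinery I have in mind, and it assumes answers have already been coded on a signed scale whose sign marks which party’s line a position matches:

```python
import numpy as np

def suspicious_coherence(responses: np.ndarray):
    """Flag tribal answer patterns on logically independent questions.

    responses: (n_respondents, n_questions) array with entries in [-1, 1],
    where -1 codes the typical pro-Democrat position on a question and
    +1 the typical pro-Republican position (a coding assumption).
    """
    # Per-respondent score: |mean signed answer|. Someone who takes the
    # same party's side on every independent question scores near 1;
    # someone whose answers track the evidence question by question
    # should land near 0.
    per_person = np.abs(responses.mean(axis=1))

    # Population-level check: logically independent empirical questions
    # should be roughly uncorrelated, so a high mean inter-question
    # correlation is evidence of tribal sorting rather than of
    # evidence-tracking.
    corr = np.corrcoef(responses, rowvar=False)
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return per_person, float(off_diagonal.mean())
```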
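Idea 2) can be sketched the same way, under the strong assumption that we can code, for each policy, whether it serves a given respondent’s material interests (the function name and coding are hypothetical):

```python
import numpy as np

def interest_alignment_score(supports: np.ndarray, benefits: np.ndarray) -> float:
    """supports[i]: +1 if the respondent supports policy i, -1 if opposed.

    benefits[i]: +1 if policy i serves their material interests (given
    their income, sex, driving and smoking habits, etc.), -1 if it costs
    them, and 0 if it is neutral for them (an assumed coding).

    Returns the mean agreement between support and self-interest, in
    [-1, 1]: +1 means always siding with one's own interests, -1 means
    always siding against them.
    """
    relevant = benefits != 0  # ignore policies that don't affect this person
    return float(np.mean(supports[relevant] * benefits[relevant]))
```

As the discussion below brings out, a high score is only evidence of selfishness, not proof: the correlation can have other causes.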
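For idea 3), the matching step of a VAA reduces to simple arithmetic once the voter and the parties have answered the same questions on the same signed scale; the sketch below also takes voter-supplied salience weights, since question selection and weighting are where real VAAs stand or fall:

```python
import numpy as np

def vaa_match(voter: np.ndarray, parties: dict, weights: np.ndarray) -> dict:
    """voter: the voter's answers in [-1, 1], shape (n_questions,).

    parties: mapping from party name to that party's positions on the
    same questions. weights: how much each question matters to the
    voter (any positive numbers; normalized below).

    Returns a 0-100 match score per party, 100 meaning identical positions.
    """
    w = weights / weights.sum()
    scores = {}
    for name, positions in parties.items():
        # Weighted mean absolute disagreement; answers span [-1, 1], so
        # dividing by 2 maps the distance into [0, 1].
        distance = float(np.sum(w * np.abs(voter - positions))) / 2.0
        scores[name] = 100.0 * (1.0 - distance)
    return scores
```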
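Finally, the live tally in idea 4) is mechanically trivial; here is a sketch, with a hypothetical event format, of folding referee decisions into running standings that a viewer app could display:

```python
from collections import defaultdict

def live_scores(events):
    """events: iterable of (timestamp, debater, points) tuples emitted by
    the referees during the debate; points may be negative (deductions).

    Yields (timestamp, current standings) after each event, so viewers
    can watch the tally update instantaneously.
    """
    totals = defaultdict(int)
    for timestamp, debater, points in sorted(events):
        totals[debater] += points
        yield timestamp, dict(totals)
```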
Any input regarding how tenable and important these ideas are (especially in relation to each other) in general, and how important they are for addressing x-risk, is welcome.
I like these ideas, but when speaking about political topics, even more attention must be paid to connotations. For example, most people would consider “cooperation” good, until you explain to them that, technically, cooperation also includes things like “trading with slave-owners without trying to liberate their slaves”. Suddenly it doesn’t feel so good. Similarly, your definition of “selfishness” includes “not wanting to eat rat poison and supporting a ban against adding rat poison to human food”; and a psychopath who doesn’t want to eat rat poison but is perfectly okay with feeding it to other people is most “unselfish” on this specific topic.
Speaking of correlations (which are not always causations), it seems to me important to distinguish things like “I happen to have trait X, therefore I support law Y (which benefits X)” from things like “I honestly believe that having people with trait X benefits society, which is why I developed trait X, and why I also support law Y (which benefits X) to motivate more people to become X”. Without this distinction we are putting the opinions “I am white, therefore slavery is okay” and “I am a surgeon, and I think only people who studied medicine should be allowed to practice surgery” into the same category.
I guess the lesson here is that to avoid applause lights, for each noble-sounding definition you should also try to give an example of a horrible thing which technically matches the definition.
Thanks, good comments. I very much agree with the first point. It is very important to pay attention to connotations. I’ll put some serious thought into what terms to use if I get around to actually doing this.
I agree with the second point too. It’s not that far-fetched regarding many questions. For instance, it seems to me that people who are right-wing and left-wing, respectively, when they are young tend to choose careers on different bases. Right-wing people tend to want to earn more money than left-wing people. Thus to some extent they earn a lot of money because they are right-wing, rather than being right-wing because they earn a lot of money.
You could add various further questions intended to distinguish “selfishness” from other explanations of the correlation; for instance, questions regarding whether your views have changed as a result of changes in your interests. In practice, this might be hard to do efficiently, though.
I guess, though, that if you have a sufficient number of questions, and your political views consistently correlate with your interests, then it’d be hard to explain this with anything but “selfishness”.
I also agree with the last point: the score would need to be accompanied by a thorough discussion of how to interpret it. For instance, a left-wing feminist member of the NAACP who is also a working-class African-American female might be given the same selfishness score, on this test, as a low-tax anti-feminist racist who is also rich, white and male, but most people would find the former pattern of preferences less objectionable than the latter.
For instance, a left-wing feminist member of the NAACP who is also a working-class African-American female might be given the same selfishness score, on this test, as a low-tax anti-feminist racist who is also rich, white and male, but most people would find the former pattern of preferences less objectionable than the latter.
LOL. Don’t you want to say “most people in my social circles”? Your political preferences are on full display here.
“selfishness score” (probably this score should be given another, less toxic, name).
How about measuring the “altruism score”?
3) Voting Advice Applications (VAAs)
I think a huge issue with most of these is that politicians get asked, ahead of an election, to take stances on questions. The applications usually don’t evaluate at all what politicians actually do when in office. For a healthy democracy it’s much more important to have feedback mechanisms that punish politicians for doing the wrong things while in office instead of punishing them for not saying what voters want to hear on election eve.
Let’s say the journalist Bob is quite good at finding out which politicians violate their promises. Whenever a politician violates a promise, Bob writes angry articles. Then when the election comes, Bob recommends that his readers vote for the politician who fulfilled the most of his promises.
Efficient voting advice applications that actually get used by voters to make voting decisions reduce the power that people like Bob have to punish politicians who violate promises. Your system reduces the power of the fourth estate. Journalists get less powerful in their job of holding politicians accountable.
I would rather focus on finding better ways to evaluate the in-office performance of politicians than invest effort in matching promises made on election eve with voters’ preferences on election eve.
You don’t want to create incentives for politicians to be even more dishonest about what they promise on election eve than they are at the moment. Be careful what you wish for when you set Moloch off to optimize for something specific.
Yes, that’s an alternative, and perhaps better, term. Adding to my comments on Viliam’s comment: I think it’s easier to infer the absence of selfishness than its presence. If your views don’t correlate with your interests, then clearly you’re not taking positions for selfish reasons. But if they do, you still might not be taking positions for selfish reasons. As Viliam points out, the correlation might have other causes.
I don’t quite follow your argument concerning VAAs. You’re saying in the last paragraph that this would create incentives for politicians to be even more dishonest about what they promise on election eve. At the same time you say above that this would reduce the power of journalists who find out which politicians violate their promises. This seems to imply that politicians would violate fewer promises under this system, in your view. Or how do you mean?
At the same time you say above that this would reduce the power of journalists who find out which politicians violate their promises. This seems to imply that politicians would violate fewer promises under this system, in your view.
No. Powerful journalists who punish politicians who violate promises put a disincentive on politicians violating promises. If journalists get less powerful in their function of holding politicians accountable for promises, politicians will violate more promises.
At present a lot of people read newspapers. Bob the journalist might tell a voter: “Don’t vote for Dave.”
Dave, being a clever guy, raises money from some lobbyists. When it comes to answering the questions for your VAA, he uses that money to hire a polling firm to find out which answers to the VAA maximize the number of votes that Dave gets. Then he gives exactly those answers to the VAA.
When voters listen to the VAA instead of listening to the newspaper, Bob loses his political power. Bob can’t punish the politician anymore for violating his election promises. The VAA cuts Bob out of the process. In a world where Bob has readers who follow his advice to vote for politicians who don’t violate their promises, a politician incurs a cost for violating election promises. Readers who trust Bob give Bob political power that Bob can use to hold politicians accountable.
We don’t live in a world where our newspaper journalists manage to punish politicians enough that no politician dares to lie on election eve, but we do live in a world where newspaper journalists can put pressure on politicians to prevent the worst excesses of politicians’ behavior. Getting voters to make decisions based on a VAA that matches their own policy wishes with the election promises of politicians means that those voters put less weight on the opinions of newspapers and the fourth estate.
In a world where 90% of voters make their voting decisions via VAAs, Moloch will prevent any politician who doesn’t hire a pollster to give optimal answers to the VAA, but who openly speaks about what he wants to do when elected, from getting elected.
But why would the voters be less interested in whether politicians violate promises if they follow the advice of VAAs? It seems to me that if anything they’d be more interested. Under a VAA system, politicians have in effect made many more, and more precise, promises to voters than they have under the present system.
Probably a “fulfillment of promises” score could and should be worked out along with the VAA. Politicians could be forced to sign beforehand a statement of how highly they value the different questions. If they, e.g., broke a promise that they assign 5% value to (out of a total of 100%), they’d get a fulfillment-of-promises score of −5%. That way it would also be more transparent which politicians break their promises and which don’t.
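A minimal sketch of this scoring rule, assuming the signed weights sum to 100%:

```python
def fulfillment_score(promises):
    """promises: list of (weight_percent, kept) pairs that the politician
    signed before the election; the declared weights must sum to 100.

    Each broken promise subtracts its declared weight, so breaking a
    single promise weighted at 5% yields a score of -5%:
    fulfillment_score([(5, False), (95, True)]) == -5
    """
    assert abs(sum(weight for weight, _ in promises) - 100) < 1e-9
    return -sum(weight for weight, kept in promises if not kept)
```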
But why would the voters be less interested in whether politicians violate promises if they follow the advice of VAAs?
The VAA doesn’t give the voter Alice any information about whether the politician kept the promises they made 4 years ago. At least I’m not aware of a VAA that does this.
For many political promises it’s not trivial to judge whether or not the politician kept a promise. It’s often a qualitative judgement for which you need a trusted authority, Carol. Any Carol who serves as such an authority is going to be attacked for perceived bias when she says that the politicians of the Green party keep their promises more often than the politicians of the Blue party. Bootstrapping the trust relationship between Alice and Carol, so that Alice can trust Carol to make honest judgements, isn’t trivial.
A lot of existing organizations that run VAAs, like the German “Bundeszentrale für politische Bildung”, also want to stay politically neutral and can’t afford to pick the political battles that come with saying that one party keeps its promises more than another party.
VAAs also focus on equal treatment of politicians. You can’t treat politicians who have never held office equally to politicians who have when it comes to judging whether they uphold promises. Judging whether promises are kept by the majority party, which can pass laws, versus the minority party, which can’t, is also not trivial.
You need a Carol who makes value judgements, and you can’t just shut up and calculate.
The VAA doesn’t give the voter Alice any information about whether the politician kept the promises they made 4 years ago. At least I’m not aware of a VAA that does this.
No, but that’s in effect what I’m suggesting the VAAs should do in my previous comment.
It’s true that it is not always trivial to judge whether or not a politician has kept his/her promise. Of course, the more exactly the promises are written, the easier it becomes, but there will always be room for interpretation. (This we know from the legal sphere, in particular.)
You could set up a court-like system where respected judges decided whether promises had been violated or not. (In principle, this could even be determined in real courts.) It’s true that some people would probably think that such a court is biased, but if it’s broadly accepted by the middle range of voters (who effectively decide the elections anyway), that would be sufficient.
Just a few things added to what I said already:

A thought experiment. You have two politicians. One is Mr. Smith and the other is Mr. Cook.
Mr. Smith is from the mainstream Blue party, which you prefer, and that party has mostly the right positions in its party program. But you also know that Mr. Smith has a low IQ, is corrupt, and generally doesn’t care about the promises he made last month.
Mr. Cook, on the other hand, is from the mainstream Green party, which you don’t like. But you know that Mr. Cook has a high IQ, is incorruptible, values keeping his promises, and generally does what he thinks is in the best interest of the country.
The policy differences between the parties are as big as those between the US Republican and Democratic parties.
Which of those politicians would you rather have in office?
I listed keeping promises as one of those things journalists are supposed to look out for. It’s not the only one. There are a bunch of issues where everyone agrees that certain behavior by politicians is bad. It’s much more important to prevent that behavior than it is to have a politician favor policy A over B where there are good arguments for both, and 40% of the population might like policy A and 60% might like policy B.
I don’t think the greatest thing about democracy is that the elected politicians do exactly what the voters want them to do. The greatest thing about democracy is that you have an efficient mechanism to get rid of bad politicians without having to run a violent revolution. That puts pressure on politicians to make good policy to stay in office.
I don’t think the greatest thing about democracy is that the elected politicians do exactly what the voters want them to do.
I don’t think so either. The VAA idea should work in tandem with the other ideas, which are intended to reduce voters’ degree of selfishness and tribalism and to enhance the quality of political debates. We want an electorate of rational and altruistic voters to rule.
It’s an interesting topic. I might write a longer post about it later on.
I don’t think so either. The VAA idea should work in tandem with the other ideas, which are intended to reduce voters’ degree of selfishness and tribalism and to enhance the quality of political debates. We want an electorate of rational and altruistic voters to rule.
No. We want a group of rational and altruistic politicians to rule. In a representative democracy it’s not the role of a voter to rule.
Besides rational and altruistic politicians, we also want a public debate with multiple actors that each have different incentives. Newspaper journalists who care about the trust of their readers. Academics who care about getting citations for their work. Debates inside political parties where people want to get more political power within the party. Trade unions who care about workers’ issues. Corporations who care about making a profit. Various foundations that have complex motivations. Forums like the one in which we are debating.
You will never have a situation where every voter has an informed opinion on every topic. That simply takes too much time, and the average voter has an IQ of 100. Tribalism isn’t bad. If a tribe has a few individuals who spend the effort required to get an informed opinion about a topic, we want the members of the tribe who spent less effort on developing an informed opinion to copy their opinions, thereby giving those people who spent the time to develop an informed opinion more power.
The academic community is a tribe. I personally do read original research papers if I care about an issue, but many people don’t, and for most people it’s a decent heuristic to simply go with the academic tribal consensus instead of forming their own opinion.
It’s just that you want more than two tribes in a pluralistic society. There’s also again the theme of establishing trust relationships.

Yes, it is.
One is Mr. Smith and the other is Mr. Cook. Mr. Smith is from the mainstream Blue party, which you prefer, and that party has mostly the right positions in its party program. But you also know that Mr. Smith has a low IQ, is corrupt, and generally doesn’t care about the promises he made last month.
Mr. Cook, on the other hand, is from the mainstream Green party, which you don’t like. But you know that Mr. Cook has a high IQ, is incorruptible, values keeping his promises, and generally does what he thinks is in the best interest of the country.
And yet not smart enough to realize that the Green party’s policies aren’t in the best interest of the country.
You have probably spent less time investigating the issue in detail, and if you were an average voter and not an average LW user, you would also be less smart than the politician.
That means you are likely mindkilled when you have high confidence that your judgement of what’s best for the country highly correlates with what’s best for the country.
It’s not like smart, well-meaning politicians have never implemented policies that proved disastrous.

Yes, but smart, well-meaning politicians are more likely to implement good policies than politicians who aren’t. There’s way too much energy invested in fighting blue vs. green, and this means that less attention is paid to optimizing for smart, well-meaning politicians.

Of course that’s very much in the interest of people who want to corrupt politicians. Distract the population with fights about issues where both sides have valid arguments, and then get politicians to enact policies that are beneficial for the lobbyists, that aren’t in the spotlight of the general discourse, and for which there aren’t good arguments.
You could set up a court-like system where respected judges decided whether promises had been violated or not.
Which authority decides who’s a respected judge? You not only have to defend the court against the perception of bias but also against actually being biased because you picked the wrong people to be judges.
If the judges are on government payroll, can the politicians who control the government reduce the salary of the judges if the judges come to a conclusion that the government doesn’t like?
There are already supreme courts with political powers which are paid from the public purse. It’s not an easy problem to solve, but it is not unsolvable.

Yes, and they suffer from the problems Christian describes to the extent that they actually use their political powers.
It’s not an easy problem to solve, but it is not unsolvable.
I don’t think ‘solve’ is binary. Different solutions come with different tradeoffs.
Supreme courts are made up of judges chosen by political majorities. There are advantages to having a powerful fourth estate that’s independent of the other three.
As a practical matter, getting a parliament to pass legislation that introduces a new class of people who check whether or not the members of that parliament are keeping their promises also seems unrealistic.