Unless you have a really weird utility function that values voting in and of itself, what matters is the outcome of your vote.
Not at all. My utility function might value my self-perception as a person who votes for X. It might value the ability to rant about how I did or did not vote for X and therefore all the bad policies are not my responsibility: if only they had listened to me! It might value the warm glow of having done my civic duty of helping the forces of light triumph over the spawn of evil. Etc., etc. None of this is particularly weird.
I would argue all those values are irrational. Ticking a box that has no effect on the world, and that no one will ever know about, should not matter. And I don’t think many people would claim that they value that, if they accepted that premise. I think people value voting because they don’t accept that premise, and think there is some value in their vote.
Please do.
The expression “irrational values” sounds like a category mistake to me.
You’re right that “those values are irrational” is a category mistake, if we’re being precise. But Houshalter has an important point...
Any time you violate the axioms of a coherent utility-maximization agent, e.g. falling for the Allais paradox, you can always use meta factors to argue why your revealed preferences actually were coherent.
Like, “Yes the money pump just took some of my money, but you haven’t considered that the pump made a pleasing whirring sound which I enjoyed, which definitely outweighed the value of the money it pumped from me.”
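(For concreteness, here is a minimal sketch of what a money pump does, with made-up goods, a made-up one-cent fee, and a deliberately cyclic preference A > B > C > A. It is an illustration of the concept, not a model of anyone’s actual preferences.)

```python
# Money-pump sketch: an agent with cyclic preferences pays a small fee for each
# "upgrade" and ends up holding exactly what it started with, minus the fees.
# The goods, the fee, and the preference cycle are hypothetical placeholders.

preferred_over = {"A": "C", "C": "B", "B": "A"}  # cyclic: A > B, B > C, C > A
fee = 0.01                                       # the agent pays this much to trade up
holding, cash = "A", 10.00

for _ in range(300):                  # offer 300 trades the agent "prefers"
    holding = preferred_over[holding]
    cash -= fee

print(holding, round(cash, 2))        # A 7.0: back where it started, three dollars lighter
```

(Whatever story accompanies each trade, whirring sounds included, the ledger ends up lighter.)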
While that may be a coherent response, we know that humans are born being somewhat farther-than-optimal from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by getting them somewhat closer to the ideal than where they started.
A “rationality test” is a test that provides Bayesian evidence to distinguish people earlier vs. later on this path toward a more reflectively coherent utility function.
Having so grounded all the terms, I mostly agree with pwno and Houshalter.
Three observations. First, those aren’t meta factors, those are just normal positive terms in the utility function that one formulation ignores and another one includes. Second, “you can always use” does not necessarily imply that the argument is wrong. Third, we are not arguing about coherency—why would the claim that, say, I value the perception of myself as someone who votes for X more than 10c be incoherent?
I disagree, both with the claim that getting closer to the ideal of a perfect utility maximizer necessarily adds value to people’s lives, and with the interpretation of the art of rationality as the art of getting people to be more like that utility maximizer.
Besides, there is still the original point: even if you posit some entity as a perfect utility maximizer, what would its utility function include? Can you use the utility function to figure out which terms should go into the utility function? Colour me doubtful. In crude terms, how do you know what to maximize?
Well I guess I’ll focus on what seems to be our most fundamental disagreement, my claim that getting value from studying rationality usually involves getting yourself to be closer to an ideal utility maximizer (not necessarily all the way there).
Reading the Allais Paradox post can make a reader notice their contradictory preferences, and reflect on them, and subsequently be a little less contradictory, to their benefit. That seems like a good representative example of what studying rationality looks like and how it adds value.
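(To make “contradictory preferences” concrete: in the classic Allais setup, most people pick 1A over 1B and 2B over 2A, and no utility function can endorse both picks at once. A small check, using the standard textbook numbers rather than necessarily the exact figures from the post:)

```python
import math

# Classic Allais gambles (standard textbook figures, used purely for illustration):
#   1A: $1M for sure             1B: 10% $5M, 89% $1M, 1% nothing
#   2A: 11% $1M, 89% nothing     2B: 10% $5M, 90% nothing
# For ANY utility function u, EU(1A) - EU(1B) equals EU(2A) - EU(2B), because the
# 89% common consequence ($1M vs $1M in the first pair, $0 vs $0 in the second)
# cancels. So preferring 1A *and* 2B cannot both be expected-utility-maximizing.

def same_gap(u):
    eu = lambda lottery: sum(p * u(x) for p, x in lottery)
    g1a = [(1.00, 1_000_000)]
    g1b = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
    g2a = [(0.11, 1_000_000), (0.89, 0)]
    g2b = [(0.10, 5_000_000), (0.90, 0)]
    return math.isclose(eu(g1a) - eu(g1b), eu(g2a) - eu(g2b))

# Holds for a risk-neutral u and a strongly risk-averse u alike:
print(same_gap(lambda x: x), same_gap(lambda x: x ** 0.1))   # True True
```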
You assert this as if it were an axiom. It doesn’t look like one to me. Show me the benefit.
And I still don’t understand why I would want to become an ideal utility maximizer.
For the sake of organization, I suggest discussing such things on the comment threads of Sequence posts.
If you could flip a switch right now that makes you an ideal utility maximizer, you wouldn’t do it?
Who gets to define my utility function? I don’t have one at the moment.
I would never flip a switch like that.
And why should we be utility maximization agents?
Assume the following situation. You are very rich. You meet a poor old lady in a dark alley who carries a purse with her, with some money which is a lot from her perspective. Maybe it’s all her savings, maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find out, and you get to keep that money. Would you do it? As a utility maximization agent, based on what you just wrote, you should.
Would you?
Only if your utility function gives negligible weight to her welfare. Having a utility function is not at all the same thing as being wholly selfish.
(Also, your scenario is unrealistic; you couldn’t really be sure of not getting caught. If you’re very rich, the probability of getting caught doesn’t have to be very large to make this an expected loss even from a purely selfish point of view.)
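(A back-of-the-envelope version of that point, with entirely made-up dollar figures chosen only to show the shape of the argument:)

```python
# Purely selfish expected value of the mugging, with placeholder numbers:
# the purse is small, while the downside for a very rich person (fines, lawyers,
# reputation, lost business) is huge, so even a tiny catch probability flips the sign.

purse = 200                # hypothetical: "a lot from her perspective", little to you
loss_if_caught = 500_000   # hypothetical: what getting caught costs someone very rich

def expected_gain(p_caught):
    return (1 - p_caught) * purse - p_caught * loss_if_caught

break_even = purse / (purse + loss_if_caught)
print(f"break-even catch probability: {break_even:.4%}")  # about 0.04%
print(expected_gain(0.01))                                # roughly -4802 at a 1% risk
```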
Surely you ‘should’ only do something like this iff acquiring this amount of money has a higher utility to you than not ruining this lady’s day. Which, for most people, it doesn’t.
Since you’re saying ‘you are very rich’ and ‘some money which is a lot from her perspective’, you seem to be deliberately presenting gaining this money as very low utility, which you seem to assume should logically still outweigh what you seem to consider the zero utility of leaving the lady alone. But since I do actually give a duck about old ladies getting home safely (and, for that matter, about not feeling horribly guilty), mugging one has a pretty huge negative utility.
Have you read the LW sequences? Because like gjm explained, your question reveals a simple and objective misunderstanding of what utility functions look like when they model realistic people’s preferences.
I’d be comfortable describing someone with a preference set that violates the axiom of quasi-transitivity as having “irrational values,” but certainly not for valuing a “self-perception as a person who” engages in some kind of activity, such as voting.
At a particular moment in time, right? There is nothing which says preferences have to be stable as time passes.
What you’re really doing by saying “My utility function might value my self-perception as a person who votes for X” is phrasing virtue ethics as utilitarianism. That’s a move which confuses rather than clarifies. If you value your self-perception as a person who votes for X, you aren’t a consequentialist; you believe in virtue ethics.
Can you say it? Yes; you can in theory be a virtue utilitarian. But no real-life virtue ethicists are utilitarians. Hence, confusion.
Why would valuing a particular aspect of my self-perception be virtue ethics? I’m not saying I’m becoming morally better, only that it provides more warm glow, that is, it just feels more pleasant.
First of all, humans are 99.99% similar to each other. So I think we can reasonably have arguments about values. It’s possible for people to be mistaken about what their values are. And people can come to agree on different values after hearing arguments and thought experiments. That’s what debates about morality and ethics are, after all.
I don’t think there is a human being that actually values ticking a box that says “democrat”, knowing that it will have no consequence whatsoever. I think there are many beliefs and feelings that lead people to vote. Like “if everyone like me did this, it would make a difference”, or perhaps “it’s a duty as a member of my tribe to do this”, etc.
Some people cast spoiled ballots for similar reasons. Though they aren’t changing the election, they believe the statistic itself matters. Like how voting for a third party shows that the third party has some support in the population, and encourages them to keep trying.
But all these arguments for voting are about some tangible effect on the world. And they could empirically be shown incorrect. E.g. maybe no one does read those statistics, or you live in a heavily gerrymandered district.
Now imagine you find someone who really believes their vote matters. And somehow you explain all this to them, and they come to agree that it really doesn’t. And then they go and vote anyway.
You could reasonably ask if they are being irrational. If their stated reason for doing a thing has been shown wrong, and they don’t change their behavior, have they really updated their beliefs?
You could ask them why they voted, and I doubt they would say “because it gives me good feelies” or whatever, because people never say they do things for that reason. So somewhere they must hold a belief that is false and inconsistent.
If they did admit that, at least to themselves, then fine. They are at least consistent. But then, I think, they would probably stop voting. When people honestly admit that the only reason they do a thing is that it feels good, even though it has no effect on the world, it tends to stop feeling good. Realizing something is pointless tends to make it feel pointless.
Our feelings are not independent of our beliefs, after all. We feel good feelings because we believe we are doing a good thing.
I would not assume that people necessarily have any reason, at least the kind that can be formulated as a statement about the world, like “this gives me good feelings,” before you ask them why they did it. Of course, once you ask, they will come up with something, but it may be something that in fact had nothing to do with the fact that they did it.
I would say that values which may not be utility-maximizing on the individual level, but which are on the cultural or national or even species level so long as most people hold them, are totally rational. It’s like choosing cooperate in the prisoner’s dilemma, but with billions of players: so long as most of us choose cooperate, we are all better off (a toy payoff sketch follows below). So in that situation it’s rational to cooperate, to encourage others to cooperate, and to signal that you cooperate and reward others who do.
“The civic virtue of voting and taking your vote seriously” is a great example of a virtue like that. It doesn’t directly matter whether you personally do, but we are all much better off if most people do.
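(A toy sketch of the payoff structure that analogy assumes, with placeholder numbers. It is not a model of any real election, just the familiar many-player public-goods shape: defecting dominates individually, yet universal cooperation beats universal defection.)

```python
# Many-player "public goods" sketch: each cooperator pays a private cost, and the
# benefit everyone receives scales with the fraction of the population cooperating.
# N, COST, and BENEFIT are illustrative placeholders.

N = 1_000_000        # players
COST = 1.0           # private cost of cooperating (e.g. taking voting seriously)
BENEFIT = 3.0        # benefit to each player per unit fraction of cooperators

def payoff(i_cooperate: bool, others_cooperating: int) -> float:
    total = others_cooperating + (1 if i_cooperate else 0)
    return BENEFIT * (total / N) - (COST if i_cooperate else 0.0)

# Defecting is individually better no matter what others do
# (you save COST = 1.0 and forgo only BENEFIT / N = 0.000003):
print(payoff(False, 900_000) > payoff(True, 900_000))    # True
# Yet if everyone cooperates, everyone does better than if everyone defects:
print(payoff(True, N - 1), payoff(False, 0))             # 2.0 0.0
```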