Privileging the Question
Related to: Privileging the Hypothesis
Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he’s writing about this subject at all.
-- Paul Graham
There’s an old saying in the public opinion business: we can’t tell people what to think, but we can tell them what to think about.
-- Doug Henwood
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Here are some political questions that seem to commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed?
These are all examples of what I’ll call privileged questions (if there’s an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I’d guess that the economy is more important). Outside of politics, many LWers probably think “what can we do about existential risks?” is one of the most important questions to answer, or possibly “how do we optimize charity?”
Why has the media privileged these questions? I’d guess that the media is incentivized to ask whatever questions will get them the most views. That’s a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.
The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like “should Congress pass stricter gun control laws?” and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
I suspect this is a problem in academia too. Richard Hamming once gave a talk in which he related the following story:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, “Do you mind if I join you?” They can’t say no, so I started eating with them for a while. And I started asking, “What are the important problems of your field?” And after a week or so, “What important problems are you working on?” And after some more time I came in one day and said, “If what you are doing is not important, and if you don’t think it is going to lead to something important, why are you at Bell Labs working on it?” I wasn’t welcomed after that; I had to find somebody else to eat with!
Academics answer questions that have been privileged in various ways: perhaps the questions their advisor was interested in, or the questions they’ll most easily be able to publish papers on. Neither of these are necessarily well-correlated with the most important questions.
So far I’ve found one tool that helps combat the worst privileged questions, which is to ask the following counter-question:
What do I plan on doing with an answer to this question?
With the worst privileged questions I frequently find that the answer is “nothing,” sometimes with the follow-up answer “signaling?” That’s a bad sign. (Edit: but “nothing” is different from “I’m just curious,” say in the context of an interesting mathematical or scientific question that isn’t motivated by a practical concern. Intellectual curiosity can be a useful heuristic.)
(I’ve also found the above counter-question generally useful for dealing with questions. For example, it’s one way to notice when a question should be dissolved, and asked of someone else it’s one way to help both of you clarify what they actually want to know.)
The correct response to Hamming’s question is “Because I have a comparative advantage in working on the problem I am working on”. There are many, many important problems in the world of greater and lesser degrees of importance. There are many, many people working on them, even within one field. It does not make sense for everyone to attack the same most important problem, if indeed such a single problem could even be identified. There is a point of diminishing returns. 100 chemists working on the most important problem in chemistry are not going to advance chemistry as much as 10 chemists working on the most important problem, and 90 working on a variety of other, lesser problems.
This point was first made to me by Richard Stallman who told me quite clearly that free software was not the most important problem in the world—I think he cited overpopulation as an example of a more serious problem—but software freedom was the problem he was uniquely well situated to address.
There are seven billion people in the world. I know of no problem that actually needs seven billion minds to solve it. We are pretty much all well advised to find the biggest problem we have a comparative advantage at, and work on solving that problem. We don’t all have to, and indeed we shouldn’t, work on the same thing.
I’m torn regarding this argument. Aaron Swartz wrote a very nice piece which I can’t find (his personal site now appears to be down) about how working entirely on things that are your current comparative advantage is fixed mindset, and what you could be doing instead is changing what your comparative advantage is. I’m glad that Aaron Swartz did this, and I worry that focusing on comparative advantage gives me an excuse not to branch out. (My current comparative advantage is in mathematics but I’m not convinced that means I should only be spending my life working on mathematics.)
To make my argument clearer, I will use you as an example; please forgive me.
If you have a comparative advantage in maths, and decide to change your comparative advantage to medical, computer, or social science, then as soon as you have caught up on the fundamentals of the field necessary to form an informed opinion, you will already have a comparative advantage because of your background.
Your proficiency in maths lent you a comparative advantage in maths; your comparative advantage in maths lends you a comparative advantage in [economics]; your comparative advantage in maths and [economics] lends you a comparative advantage in [biochemistry], etcetera.
I think this makes sense. We need to distinguish between something like “obvious current comparative advantage” and “less obvious potential comparative advantage.” In practice, the heuristic “stick to your comparative advantage” may optimize excessively for the former at the expense of the latter.
So in that case, the question is in some ways addressed by narrowing the meaning of “field”.
If a physicist interprets Hamming’s question “what is the most important problem in your field?” as “what is the most important problem in physics?”, then obviously not everyone should be answering it. If, instead, the physicist interprets it as “what is the most important problem in quantum cryptography?”, that being their more specific field, then it becomes more reasonable (and more vital!) that the physicist is indeed working on the most important problem in their field.
Although, upon reflection, if I decide to become the world’s expert in lit-match juggling, and the most important problem is lighting the third one before the first two burn down, that is obviously not necessarily an important problem on a larger scale. But I think my point above still has value even if it’s missing something that permits this counterexample.
And a response that brings up another important point is simply that everyday language is imprecise. When someone claims that their problem isn’t important, they don’t mean that it has zero importance, and when they say it’s not going to lead to something important, they aren’t really claiming that it has zero chance of leading to anything important. Indeed, they aren’t even claiming that its expected value is low. Imagine they are working on something which, by contributing to general knowledge, increases the odds of solving each of 2000 problems by 0.1%. Nobody in their right mind would call that an important problem, yet it increases the expected number of important problems that get solved by more than one.
The comparison that leapt into my mind was Chomskians talking about how politicians and the media decide which topics are even discussed. Not sure if they have a term for that. I guess what you call “Privileging the Question” is part of framing in the social sciences sense. It’s handy to have a phrase for this particular thing, though.
Perhaps agenda-setting?
Good find!
I believe they would term it “manufactured consent.” Although, I think the two ideas are slightly different. The idea behind manufactured consent is that, in order to answer a question one way or another, you must implicitly accept its premises. It is a special, politicized case of privileging the question.
Before going back to check the name, I just assumed the academic asking awkward questions and making people feel bad about their life-choices was Robin Hanson.
This seems related to Robin Hanson’s concept of “pulling sideways”. Some questions (e.g. income tax levels or gay marriage) get privileged because the alternative answers align with pre-existing political coalitions, so they give people an opportunity to cheer for their side and against the Enemy, whereas other questions whose answers would involve “pulling sideways” are ignored.
Apparently there are indeed still issues that haven’t gotten stuck being aligned with one or the other major U.S. political parties—if what I read is correct, the bill before Congress that would make it easier for states to collect sales taxes on Internet purchases has both support and opposition from members of both parties.
Source.
I don’t think that’s a good example since it does align with political parties. A better example of an important issue that doesn’t align with political parties and thus gets mostly ignored by congress is copyright reform.
Isn’t it more that both parties are against it?
But a lot of the population is for it. So there are people in both parties that are for it (partially for the same reason a lot of the population is, partially in an attempt to get some of the pro-copyright reform vote).
There’s a Swedish word for this, “problemformuleringsprivilegiet,” which roughly translates as “the privilege to formulate the problem.”
Which is basically the same phrase, but without spaces between words.
In a way of self-fulfilling prophecies, a privileged question becomes important when it is brought to everyone’s attention. It becomes the question to decide whether you are a Green or Blue. Refuse to deal with it, and all Greens will suspect that you are a Blue, and all Blues will suspect that you are a Green. Then you may feel the social consequences of having enemies but no allies.
(This is not meant as a criticism of the post:) I hope I’m not the only one who went “gaaah” here about the latter two not being questions we could be answering right now in politics :-) Not that I have much hope that this is doable, but still.
(I started out writing “not that I have any hope”, but then remembered that GiveWell didn’t manage to find good opportunities for funding immunizations or micronutrient supplementation—the first of which they “consider to have the strongest evidence base of any intervention we know of”—with a major reason being that government and multilateral funders are already taking the best opportunities. See also Eliezer’s comment and Holden’s reply, suggesting the reason is that there are some people in government to whom measurable, quantifiable, tangible benefits make a very attractive pitch.)
Often, my mother will ask me “what do you think about [some issue that’s been discussed in the media lately]?” and I’m like “how the heck should I know, and why should I even care?” It usually irritates her -- “you surely must have an opinion about that! How can you have no opinion?” (Sometimes I retort by asking for her opinion about some unanswered question about physics or something like that, but then she usually says something to the effect that her question, unlike mine, is so intrinsically important that any good citizen has a duty to form an opinion on it.)
Have you tried responding to that and taking the conversation a step or two further to see if you can resolve it?
Not that I remember. By now, she’s probably accepted my aloofness as yet another weird quirk of mine, and I’m OK with that.
Questions about physics are probably separate enough from the normal person’s life that even if they do connect back, it’s at a long inferential distance. Have you tried asking about things that are more clearly applicable to her, or are you picking things you consider equally irrelevant? If the latter, in the absence of an explanation, she will naturally consider them much less relevant.
The point is to have her realize how it feels to be asked a question one doesn’t give a damn about.
It doesn’t seem to be working. I’d suggest a different approach.
(My own response to this sort of thing is usually “well, it doesn’t seem important and I haven’t been following it because [reasons], but now that you mention it, [rampant speculation].” This gets across that I don’t consider it a useful question, but still respects the other person and their desire to have a conversation.)
The problem with that is that IME people will take my rampant speculation way too seriously.
Depends on the person. It’s perfectly possible that your rampant speculation is much better than anything they could come up with.
Well, if you get into an argument, you can always say “as I said, I don’t know much about this” and change the subject, right?
I think this line of thinking is very important. People would benefit immensely from becoming better at deciding which questions to address with their scarce cognitive resources. However, I do not think this problem of “meta-rationality” is an easy one, and in particular I’m not sure your heuristic is a good one. The principle that a good question has a clear-cut policy implication conflicts directly with the principle of curiosity. Maybe if an individual is in a high-stress, high-stakes decision-making role, he or she may want to ignore questions that are not immediately relevant to the problems at hand. But the whole idea of academia is that society benefits when some individuals have the time and the incentive to go out and answer questions of academic interest—because, of course, we don’t know what we don’t know, and some ideas, or their consequences, may have nonobvious policy implications somewhere down the road.
I propose the following heuristics, noting that in this area one should adopt a “fox-like” strategy and try to apply as many different perspectives as possible:
Is there a clear-cut way to tell whether a proposed answer to the question is correct? It is a good sign here if the answer is “yes”. Mathematics as a field is worthwhile, in large part, because mathematicians know with a very high degree of confidence when they have produced a correct result (in contrast, say, to medical science).
For example, in the field of AI one of the most standard strategies is to try to take a key insight from some other domain of knowledge—economics, physics, evolution, etc—and try to apply it to the problem of intelligence (a famous immunologist, Gerald Edelman, has made significant efforts to apply his insights from immunology to the problem of consciousness; in computer vision there is a very well known paper about edge detection that is very clearly inspired by the path integral formulation of quantum mechanics). I personally believe that questions in this reference class don’t typically yield much progress, but YMMV.
I was hoping that starting with examples involving politics would make it clear that I wasn’t suggesting we toss out intellectual curiosity, but I can make an edit clarifying this.
Why shouldn’t the same exception apply to political questions?
From an outside view this almost seems like “asking questions in my field is satisfying curiosity, asking questions outside my field is privileging the question”.
Are people intellectually curious about political questions like gay marriage and gun control? That isn’t my impression.
There is a large extent to which those questions are about values and not facts. But I am extremely curious about e.g. How and when does prohibition succeed in controlling the usage of a good? How are social institutions like marriage affected by how society understands them and what sort of negative externalities might there be from reforming long standing social rules? Marriage rates have dropped off a cliff among certain demographics and it seems plausible that a) that leads to really bad things and b) the cause might have had something to do with the rhetoric about marriage used by important figures in politics and academia. I’m not sure that necessarily involves the gay marriage question or that it implies a particular answer to the gay marriage question. But both issues exist at intersections of very interesting economic and socio-cultural questions such that I generally enjoy broad, thoughtful and knowledgeable discussions of those political issues.
Sure, so discuss those general questions and not the specific ones (which are not only privileged but which many people are mind-killed over).
The Overton Window is a related concept, but it’s at least as much about what people may not consider as it’s about what’s drawn to their attention.
Related:
Those who dismiss postmodernism are condemned to reinvent it, one piece at a time.
Unfortunately, there seems to be no such thing as Postmodernism: The Good Bits.
(If you order a big sundae and discover that the top scoop is dog shit, it makes more sense to go buy your own ice cream and make your own sundae—even knowing you’re reinventing many pieces of the original sundae—than settling for the original sundae and trying to carefully spoon around the shit.)
Speaking as a fan of the stuff, I fully appreciate and frequently concur with your reasons for not wanting to touch it.
Postmodernism or dogshit? ;)
Can you give a summary of postmodernism or should I just google it myself?
Probably not one that’s very useful. If you think of it as an artistically-related metatheory construction kit (so, a meta-meta-theory), that would probably describe what the bits that aren’t shit are useful for. Gwern (per comment parallel to yours) probably wouldn’t benefit, having deep cultural knowledge in at least one area, though he might find useful bits to avoid having to reinvent terminology.
(Oh, and it’s not one thing—any area that has something that corresponds to a “modernist” outlook, where progress can occur, is likely to spawn a “postmodernism” which involves questioning every assumption including that the right questions are being asked. Remember that it sprang up after World War II, which was seen by the postmodernists as the horrific reductio ad absurdum failure of several decades-long modernist programmes.)
What I mean is that dismissing this icky squishy cultural “what is meaning?” stuff as a diseased discipline does not somehow mean you, as a bounded human intellect in the world, are immune to the problems the toolkit-constructing toolkit in question can somewhat alleviate. Because the proper study of mankind is man. Squishy and infuriating as humanity is, directed cyclic graphs of preferences and all.
FWIW, don’t read original Derrida unless the wall opposite needs a few more dents in it—stick to commentaries.
I challenge anybody to come up with a higher-density description of postmodernism than ‘Fallacy Of Gray Plus Meta Everything with Leftism’. Remember that it arose out of an environment that was far too absolutist and ignored meta-analysis.
Analytic approaches to continental concerns are routinely insightful. The problem with postmodernism is language and methodology more than the subjects and theses. But it’s so big a problem that any analytic approach is almost inevitably the best thing written on the subject.
Those who dismiss the dangers of mountaineering are condemned to fall off mountains, one cliff at a time.
To borrow gwern’s comment, there is no such thing as Falling Off Mountains: The Good Bits.
Great post. Similar thoughts have been expressed by some LessWrong users on Twitter, but having a nice long summary here is much better. I’d advocate promoting this one to Main.
It’s not that simple. Getting views is important, but it isn’t the only consideration. Journalists want to win publisher prizes. Journalists have an interest in building relationships with people who can supply them with additional stories.
Owners of news corporations have their interests. Buyers of advertising have interests.
Another variable is that stories have to be relatively easy to research. Journalism operates under tight deadlines. If the story is too complex, the time the journalist needs to write it isn’t being used “productively”.
Data point: I knew a person who worked as a journalist for a newspaper. Each day they received from their boss a random topic to write about, and they had to write three or four articles within the day. There was no time to do any research, and there was no budget for travelling and seeing something firsthand.
That situation left only a few possible strategies: (1) Call a few relevant people by phone. Most of them will refuse to talk with you, because they have had the experience of telling a journalist something, only for the journalist to write something else using their name as support. A few people will respond. Compile their answers into articles. (2) Know a few people willing to talk about this topic. Call them. (3) Use Google and steal information from other articles, especially foreign ones. (4) Just invent the story, using any cliche you know. (5) Any combination of the above. For example, write the story first, using the cliches you know, then call random people and try to get them to agree with you, and then add their names to the article.
That explained a lot. Among other things, it explained why a person willing to talk with journalists about anything can get so much space in the media. (Assuming they are compatible with common wisdom and don’t say anything controversial.) For a journalist, such a person is the best contact they could ever have.
What country was this?
Downvoted because it’s improperly sourced, second hand information, and in a comment about journalism no less. That is, “I knew a person who worked as a journalist for a newspaper”. Who did you know? What newspaper did they work at? Can you quote them directly rather than simply recalling and rephrasing what they said?
I have two sources, both from Slovakia.
First is a person I know and consider trustworthy, but I don’t have their consent to publish their name. The newspaper was Pravda (second or third most selling newspaper in the country).
Second is more indirect: I worked for a company which provided some services for a few newspapers, so my colleagues had a lot of contact with journalists. The stories suggested that the journalists were paid poorly, overburdened with work, and usually quit, burnt out, after a few months.
The information I got from the first source is: The journalists have to write 3-4 articles during the day, about a topic they learn in the morning, and they mostly use the phone and the internet to create the texts. (Sometimes the newspaper has a policy that the journalist must sign half of the articles with their real name, and half with some pen name, so as not to make it obvious to the readers how many articles one person writes daily.) Many people, especially scientists, refuse to communicate with journalists under these conditions. But there are some known, relatively high-status people willing to give an opinion on almost any topic… and when the day is almost over and the journalist hasn’t succeeded in getting information from anyone else, such a person can really save the day. (This is written from memory.)
The part about the possible strategies is: From the first source, I got descriptions (1) and (2); the former is a typical day, and the latter is the fallback option for when the former fails, the day is nearly over, and the boss is becoming impatient.
Unrelated to my sources, (3) a few journalists in Slovakia were fired a few years ago because they repeatedly stole and translated material from foreign newspapers (for some reason, the Guardian was a popular victim). This was discovered when web discussions below articles became mainstream in Slovakia; some people made a sport of providing hyperlinks to the original articles.
Option (4) is what happened to me as a person interviewed by a journalist, on two independent occasions. At the interviews it was obvious that the story was already written, and I was only there as a name to put under the article.
The first case: I was a small child and I won a local mathematical olympiad. The journalist came to me with the idea of writing about a gifted child and computers, and was disappointed to hear that we didn’t have a computer at home and couldn’t afford one. The journalist asked me repeatedly whether, because of my success, my parents were planning to buy me a computer, and I repeatedly said no. The resulting article said that my parents were considering buying me a computer, although I never said anything like that. But that’s what the journalist wanted to write.
The second case happened a decade later: At some protest against some human rights abuse in Palestine, I decided to talk with a journalist about a paradox: when talking about Europe, the concept of “collective guilt” is not accepted, but when talking about people in Palestine, the same concept is used for them, and nobody discusses that. The journalist seemed interested and asked for my name. The only trace of all this in the resulting article was: “Student Viliam said that oppressing Palestinians is bad.” At that moment I decided never to speak with a journalist again.
And that’s the information I built my model upon. Sorry, no references.
Disclaimer: I am not saying that all countries are like this, or even that all newspapers in my country are like this. But I also don’t believe that this all was just a random exception. Under some conditions, this is the equilibrium the system converges to.
I can also imagine that the reality is more complex than this, for example that the newspaper has a few high-status journalists who do serious work, and a few low-status journalists to provide the text to fill the rest of the pages; so this all only describes the work of the low-status ones.
Chapters 2-4 of Nick Davies’s Flat Earth News discuss how systemic constraints like those ChristianKl & Viliam_Bur mention lead to worse journalism. It mostly addresses the US & UK rather than Slovakia, but if anything this strengthens Viliam_Bur’s point. (Replying even though you’re at −4, fuck the troll toll.)
If you’re going to assume they’re lying, then they could make up whatever they liked for who they knew and what newspaper they worked at. And, frankly, being able to quote someone word for word often makes me more suspicious of someone lying.
I don’t see what more info you’d get out of their citing it really—at least in checkable terms.
It’s rather against the point of the article to start talking about the above examples of privileged questions…
Even so, it’s worth noting that immigration policy is a rare, important question with first-order welfare effects. Relaxing border fences creates a free lunch in the same way that donating to the Against Malaria Foundation creates a free lunch. It costs on the order of $7 million to save an additional American life, but on the order of $2500 to save a life if you’re willing to consider non-Americans.
By contrast, most of politics consists of policy debates with about as many supporters as opponents, suggesting there isn’t a huge welfare difference either way. What makes immigration and international charity special is the fact that the beneficiaries of the policies have no say in our political system. Thus the benefits that accrue to them are not weighted as heavily as our benefits, which means there’s a free lunch if overall welfare is what you care about.
I think it’s plausible that immigration policy is in fact an important question but less plausible that that’s why people talk about it. (Similarly, a privileged hypothesis need not be wrong.)
xv15′s point is actually really really standard among public intellectuals and elites.
Sure.
It’s only an aside to your main point, but “work on the most important problems you can think of” is a horrible heuristic for everybody to follow. If you want to advance human knowledge, then work on the most important problems you can think of, as weighted by your best estimate of the likely time advantage of “when I would discover a solution” over “when someone else would have discovered a solution”. Unless you are one of the smartest people in the world, this weight is likely to be negligibly greater than 0 for many of the most important problems you can think of, simply because there are already plenty of smarter people already working on them.
That’s okay. Many of the most important solutions in the world came about because someone was working on an unexpectedly-related problem that would never have made it onto a “most important problems” list.
Of course.
I think I’m missing the point of something.
Can you give examples of overprivileged questions and what harm might come from overprivileging them?
The general harm is that paying attention to unimportant question X means you have less time to pay attention to possibly more important question Y. Was this unclear from the text? For example:
In another comment here, Alejandro1 linked to this Robin Hanson post about policy questions that are orthogonal to vs. that map closely onto traditional liberal vs. conservative divides. The former are underprivileged and the latter are overprivileged, and the harm is that people are likely missing opportunities to implement useful policies because of this. (Perhaps Robin has prediction markets in mind as an example?)
Non-white feminists have a beef with white feminists part of which I think concerns white feminists having overprivileged certain kinds of questions, e.g. questions based on gender alone when non-white feminists think that gender, race, and class need to be understood together. The harm here is that the concerns of white feminism dominate feminist discussion and so don’t leave room for the concerns of non-white feminism.
Similarly, and as an expansion of the gay marriage example, (I think) many LGBT activists think that gay marriage is less pressing than issues like bullying and job security. The harm here is, as above, that concerns about gay marriage dominate discussion of gay rights and so don’t leave room for possibly more important concerns.
To take another tack on the gay marriage example, asking the question also implies that it’s the sort of thing one is allowed to decide on. I welcome a national debate on “Should we give Thom Blake a million dollars” but am less enthusiastic about debating “Should we throw rocks at Thom”.
As a minor historical note, the focus on gay marriage rather than other aspects of gay rights started out as a minority position, not a privileged one. Or so I gather from reading Andrew Sullivan, who seems to have been pushing for the emphasis on marriage rights at a time when that was still controversial among LGBT activists.
ETA: I find the downvote bizarre. I assume it’s for PITMK reasons, but I’m not presenting a normative stance one way or the other.
Promote to Main?
[pollid:460]
Flesh out a bit more, then promote to main.
How important is it that I answer that question, really?
You will signal that you care about the quality of articles on LessWrong. That means that you are a loyal member of our team, and we love you.
I am sad that this got downvoted; it amused me inordinately.
Not least because, well, it’s accurate.
So what you’re saying is, “signalling?”
Yes, joking is a subset of signalling. ;-)
There is (apparently) only one way to find out!
How important is it that Dave answers that question, really?
[pollid:461]
Hmm. The winner appears to be “Whatevs.”, followed by “The question is important but Dave shouldn’t answer because we don’t care what he personally thinks and nobody loves him.”
I’m sorry, Dave …
I’ll somehow recover from the blow.
There is some theoretical sense in which the fact that right now the “nobody loves Dave” answer is tied with “Extremely important” helps.
If it’s any consolation, I only voted for that one because it was funny, not because I thought it was true.
I wuv you :3
At the moment, “nobody loves Dave” and “the fate of the cosmic commons is at stake!” are running neck and neck. I suggest leaving it up to Omega.
“nobody loves Dave” has now edged out “the fate of the cosmic commons is at stake!” by one vote. Of course, if I take the weighted average, that still works out to a way higher evaluation of my personal significance than seems at all justifiable. Is it possible people aren’t all being entirely sincere in their votes? Nah, that’s crazy talk.
I voted “Eh” so I could see the poll results and determine whether I should in fact promote this post, but I see that there are a lot of “Eh”s and I’m wondering if others did the same.
Do you mean by this that one should always have a non-”nothing” answer to this question, or just that the “nothing” answer frequently (though not always) indicates a privileging problem?
I think if the answer is literally “nothing” then that is bad, but “nothing” to many people could mean “nothing, I’m just curious” and then you can expand that to “I think this is an interesting question and I enjoy thinking about it on an intellectual level,” which is not in fact nothing; what you’re doing is satisfying your intellectual curiosity. (This might apply to a random math or physics question you ask yourself, for example.)
My initial reaction to your question was ‘what about curiosity-driven research?’; it seems like this was true of other people too. I would suggest editing to make clear that ‘curiosity’ is an acceptable response.
Done.
My problem is that “curiosity” is not a discriminating feature for me at all. I am automatically extremely curious about any research question.
I think this post is covering a superset of the unhelpful questions that I pointed out here, because the ones that I mentioned are also dissolved by “What do you plan to do with the answer?” Sometimes, you do have a goal in mind, but you don’t realize that the question you’re asking isn’t going to yield an answer that’s relevant or helpful with respect to that goal (which is a mini lost purpose in the form of expended clock cycles but maybe also yelling at a tiny child.) Meanwhile, I think the signalling ones are usually when you feel like you need to formulate an opinion on something just because it’s in the media even though it doesn’t affect your daily life.
I think the answer “I’ll be like whoaaa, because this question is interesting!” is a helpful, non-nothing answer, but something like “then I’ll put it in this paper and eventually it will lead to more papers, but not much else” is indicative of larger lost purposes.
But that is a ‘nothing’ answer. Just finding a question and answer interesting isn’t doing something with the answer. And while I agree that having really bad reasons for trying to answer questions is really bad, I’m still not sure what to make of the original ‘nothing’ comment. Is ‘nothing’ indicative of a bad reason for trying to answer a question? If it is, is it merely evidence for, or does it (or the motivations it reflects) constitute a bad reason?
I will experience an increase in utility from the process of finding out the answer? I will bask in the good feeling of having my question answered? I will gleefully tell like-minded friends about it, thus infecting them with my enthusiasm?
I guess it’s hard to tell the difference if you haven’t encountered the second type of question, where you think, “Oh well, I better develop an opinion on this whole gay marriage thing because everyone is talking about it and I’m the type of person that has opinions.” It feels kinda like a chore you should do. (It’s even worse when you feel pressure to make the opinion unique and interesting.) The difference between that feeling and realizing you don’t know something and then checking it on wikipedia and going “ohhhh” is really big.
General note: you are demonstrating a pattern of posting comments that seem to depend on more shared context than actually exists, and which I suspect are consequently pretty much unintelligible to the majority of readers.
I don’t know about that. Probably the most important question that can be asked in politics is “how can we produce a perfect society in every which way according to the following list of criteria....”
The trick, of course, is that for most people, the “most important” questions are defined by more than just what the impact of the answer would be when we get one. Likelihood of finding an answer, feasibility of being able to implement an answer, ability to implement it using partial steps, and similar real-world considerations are also part of what makes a question the “most important”. Based on those real-world criteria, the questions that you call privileged actually score pretty high on the importance scale. If enough people vote for gay marriage or gun control, we can have it tomorrow (maybe not literally tomorrow, since the system takes time, but still fairly soon). It may be harder to get, for instance, life extension tomorrow.
What? “Vote for a politician who I feel has a chance of stopping/expediting (depending on my conclusion) gay marriage, gun control, and such” isn’t “something”? Even just discussing a subject and affecting public opinion (to the extent that one person out of millions can do so at all) is “something”.
I agree that these are important criteria but strongly disagree that questions like gay marriage were in fact brought to your attention based on such criteria.
I don’t think it is. Do you have evidence to the contrary? (As I’ve mentioned in another comment, I’m pessimistic about the value of voting but willing to update.)
Dear Qiaochu_Yuan,
I would respectfully urge you to vote. People have died for that right. Wars have been fought for that right. I would put it to you that not voting is an act of great disrespect.
OK. You may differ. But to me, watching the votes get counted and seeing the will of the people get tallied up on election night is a wonderful thing.
(I am politically active. I scrutineer.)
Good people need to get into politics. If they don’t, what Aristotle termed a noble profession is left to the dogs and the cynics. Also pols look much different in the flesh than they do on camera. The difference is worth observing closely and personally—not through the lens of what ‘the media’ think is ‘important.’
I agree strongly with the point that voting can be worthwhile, but I think you’re being downvoted because you’re making poor arguments for that point. (Specifically, you can make arguments for really bad conclusions using the same format, “people died for the belief that X is true, therefore X is true”.)
Irrelevant. If people had died for the right to own slaves, that wouldn’t imply I should own slaves.
People died so that we can vote. That doesn’t mean we should.
The will of the few people funding the super PACs which are telling the sheeple what to bleat, in the few states whose results matter.
Invest their life’s efforts for a minuscule chance to change the system of entrenched interests and institutional deadlocks from inside. No thanks. Even for the charity-minded, getting rich and then buying influence (in the direction you perceive as “correct”) would be the far more effective route.
The sheeple? That is a contemptuous remark. You should withdraw it.
There are no super PACs in my country. We have sensible electoral laws Down Under… Sensible gun laws too. Oh and Medicare for all without any squibbing, mandatory 401ks for all workers. And freedom… Lot of good stuff...
What is your constructive alternative to voting and political activism?
What are you offering? Some cafe society “I am vastly superior to the bleating sheeple masses” pose?
Or I will take a long shot and get rich and buy the country option?
FYI, people did die for the right to own slaves. They LOST. There is a subtle but important difference.
You have to fight to win. You got nothing but rationalizing your chicken-out cynicism option.
Well, there are countries whose voting system is such that I would vote, and countries in which I wouldn’t. “Democracy” is an umbrella term covering a host of political systems, each one has to be evaluated on its own merits. Circumstances are different even inside one country; individual voters in e.g. Ohio or Florida have actual influence.
So the victors write the history books, eh? We should just do whatever the people who successfully killed the other people say, I suppose.
I’m saying that the most effective way to influence your country’s policy is through money, not through your individual vote. If you want to vote so you feel better about yourself, that’s your business.
Watch some political ads sometime, at least of the US variety. “Swift Boat Veterans for Truth” and such. Ask yourself why billions are spent on such tactics; check the role of billionaires such as the Koch brothers, Sheldon Adelson, the list goes on. Or the percentage of voters who believe Obama is secretly a Kenyan Muslim. Nope, I don’t think I should withdraw the remark. It’s a numbers game: even the “few” percent of the vote the NRA controls is enough to vote in or out (radical voters dominate the primary system) whomever they want in large parts of the US.
The kind of questions pols actually think about. (I used to work for one...)
1. How do I get re-elected?
2. Which event/announcement relating to the party platform (the list of ‘improve society’ criteria that the party has approved) will get airtime and make me look good and my opponent in the next race look bad?
3. Within the current budget, what money can I win for my electorate through the normal processes?
4. Who can I help within the limits of my power and influence and the laws and budget as they are?
5. What changes to the current party platform (the list of criteria) do we need to make to achieve 1?
Different pols are more or less diligent about these points.
So long as the people can SACK pols. I.e. vote them out. Democratic politics seems to work tolerably well...
My point was that “the most important question” doesn’t mean “the question which, if answered and implemented, would lead to the biggest benefit”. The feasibility of answering and implementing is, for most of us, part of what makes a question an important question.
The original post seems to have been saying that “privileged” questions are not really important. I think that, when analyzed with a definition that is closer to what we mean by “important”, they are.
All the examples of privileged questions given are disguised manifestations of moral uncertainty:
Gay marriage is the struggle between a morality that favors equality, and one that has a certain set of values surrounding purity and/or respect for religious authority.
Gun control is the struggle between individual autonomy and harm avoidance.
Immigration is the struggle between in-group preference and the lack thereof.
The questions themselves are unimportant...but the deeper moral undercurrent which causes those questions to be privileged is important. If someone is against gay marriage and stem cells, how do you expect them to react to transhumanist memes, life extension, and AI?
When society makes a decision about the morality of gay marriage and stem cells, they have also gone part of the way to making a decision about AI, since a lot of the same moral circuitry is going to be involved.
Side comment: Can anyone find an example of a “privileged” question which isn’t a disguised moral struggle?
Isn’t moral struggle a part of how mindkilling feels from the inside?
Also, compare these two questions:
a) Should gay marriage be legal?
b) How do we optimize society for more long-term utility for people of any sexual orientation?
Only the first one could get media attention. And it’s not because the second one is less moral.
You can’t even ask this question until you arrive at utilitarianism as a moral philosophy. A person with moral objections to homosexual marriage isn’t a utilitarian by definition, since they care about additional things (purity, respect for authority, etc.) which have nothing to do with increasing everyone’s utility.
When you ask “how to maximize utility”, you have already assumed that the moral struggle between harm/care and purity has been settled in favor of harm/care. Otherwise, you would be asking about how to maximize utility while also keeping people from “defiling” themselves.
As mare-of-night reminded us elsewhere in-thread, even Clippy is a utilitarian. There’s nothing special about paperclips or purity that prevents them from being included in someone’s definition of utility.
On the other hand, even if your post boils down to “my definition of utility is the correct global definition”, that’s no more wrong than Viliam_Bur’s treating “utility for people” as a well-defined term without billions of undetermined coefficients.
So the original question was:
Under classical preference utilitarianism, you try to maximize everyone’s utility and conveniently ignore the problems of putting two utility functions into one equation, and the problems you mention.
Continuing to conveniently ignore that problem, I implicitly assume that we agree that the positive utility generated by removing restrictions to homosexuality outweigh the negative utility generated by violating purity boundaries, when applied over the entire population.
We still include the purity thing in the calculations of course. For example, I could in principle argue that the negative utility from allowing sex in public probably outweighs the positive utility generated from the removal of the restriction, hence our public obscenity laws.
That ignores the possibility that there is a reason those purity boundaries were there in the first place.
I’ve seen this before, but I can’t say I find it a compelling argument: if an institution was placed for good reason, then at least someone, somewhere, would remember why it was placed and could give a compelling argument. If no one can do so, the risk of some hidden drawback which the original lawmaker could have foreseen seems too small to count.
I mean, this argument does apply when you are acting alone, on some question that neither you nor anyone you come into contact with knows anything about...but it doesn’t apply to something like this.
How do utilitarians decide to draw the boundary at the whole human race rather than some smaller set of humans?
I’m not sure if I understand your question...
Utilitarians who choose to draw the line around the whole of the human race do so because they believe they ought to value the whole of the human race.
Is that a deontological standard?
The reason I asked is that, in principle, you could have utilitarianism based on some group smaller than the human race.
For some people, probably. Let’s take a step back.
Morality comes from the “heart”. It’s made of feelings. Utilitarianism (and much of what falls under moral philosophy) is one of many attempts to make a consistent set of rules to describe inconsistent feelings. The purpose of making a consistent set of rules is 1) to convince others of the morality of an action and 2) because we morally feel aversion to hypocrisy and crave moral consistency.
Keeping those aims in mind, drawing the line across all humans, sentient beings, etc has the following benefits:
1) The creators might feel that the equation describes the way they feel better when they factor in all humans. They might hold it as a deontological standard to care about all humans, or they might feel a sense of fairness, or they might have empathy for everyone, etc.
2) Drawing the line across all humans allows you to use the utilitarian standard to negotiate compromises with any arbitrary human you come across. Many humans, having the feelings described in [1], will instinctively accept utilitarianism as a valid way to think about things.
There are plenty of things that are problematic here, but that is why utilitarianism defaults to include the whole human race. As with all things moral, that’s just an arbitrary choice on our part, and we could easily have done it a different way. We can restrict it to a smaller subset of humans, we can broaden it to non-human things which seem agent-like enough to be worth describing with a utility function, etc. Many utilitarians include animals, for example.
People use feelings/System 1 to do morality. That doesn’t make it an oracle. Thinking might be more accurate.
If you don’t know how to solve a problem, you guess. But that doesn’t mean anything goes. Would anyone include rocks in the Circle? Probably not, since they don’t have feelings, values, or preferences. So there seem to be some constraints.
Accurate? How can you speak of a moral preference being “accurate” or not? Moral preferences simply are. There are some meta-ethics sequences here that explain the arbitrariness of our moral preferences more eloquently, and here is a fun story that tangentially illustrates it.
I bet I can find you someone who would say that burning the Quran or the Bible is inherently immoral.
Quite a few of them no doubt. Of course, the overwhelming majority of people who would say that burning the Quran or the Bible is inherently immoral would also say that it’s immoral by virtue of the preferences of an entity that, on their view, is in fact capable of having preferences.
Of course, I’m sure I could find someone who would say rocks have feelings, values, and preferences.
I don’t think this is an accurate formulation of the general religious attitude towards morality.
I agree. Do you also think it’s a false statement?
Let’s just say the expression “it’s immoral by virtue of the preferences of an entity” is not actually a good ‘translation’ of the phrase they’d use.
Um… well, I’m not really sure what to do with that statement, but I’m happy to leave the topic there if you prefer.
Ok, maybe I misunderstood your question in the grandparent. Which statement was it referring to?
“the overwhelming majority of people who would say that burning the Quran or the Bible is inherently immoral would also say that it’s immoral by virtue of the preferences of an entity that, on their view, is in fact capable of having preferences.”
They’d phrase it in terms of sacredness, which isn’t quite the same thing, e.g., how would you apply your argument to flag burning?
Fair enough.
Conversationalists will want to preserve ecosystems, even where those ecosystems are already well studied by science, even when the ecosystem contains no sentient beings (plants, fungi, microbes), even when destroying the ecosystem has many advantages for humans, because they think the ecosystem is intrinsically valuable independently of the effect on beings with feelings, values, and preferences.
Some looser examples...
Pro-life advocates say that beings without preferences have rights by virtue of future preferences. Not all of them are religious.
Hindus treat books (all books in general) with reverence because they are vehicles of learning, despite not necessarily believing in deities.
Many social conservatives report being unwilling to slap their fathers, even with permission, as part of a play.
The classic trolley problem implies that many people’s moral intuitions hinge on the act of murder being wrong, rather than on the effect that the death has on values, feelings, and preferences.
Of course, if you are a moral realist, you can just say that these people’s intuitions are “wrong”...but the point is that “feelings, values, and preferences”—in a word, utilitarianism—isn’t the only guiding moral principle that humans care about.
And yes, you could argue that this is all a deity’s preferences...but why did they decide that those were in fact the deity’s preferences? Doesn’t it hint that they might have an underlying feeling of those preferences in themselves, that they would project those wishes on a deity?
No doubt some of them will, but I suspect you meant “conservationists.” And yes, I agree that some of those will assign intrinsic value to “nature” in various forms, or at least claim to, as you describe.
Some of them do, yes. Indeed, I suspect the ones who say that are disproportionately non-religious.
A fine question.
That’s one possibility, yes.
And, again, if destroying entity X is wrong because some other entity Y says so, that is not inherent.
Indeed. Do you mean to say that you don’t expect it to be said, or merely that those saying it are confused?
The latter.
We sometimes extend morality to inanimate objects, but only ones that mean something to us, such as works of art and religious artefacts. That isn’t actually inherent because of the “to us” clause, although some people might claim that it is.
Pebble sorting is a preference. That’s it. I don’t have to believe it is a moral preference or a correct moral preference.
Moral objectivism isn’t obviously wrong, and system 2 isn’t obviously the wrong way to realise moral truths. IOW, moral subjectivism isn’t obviously true.
NB: Objectivism isn’t universalism.
Beliefs simply are. And some are true and some are not. You seem to be assuming the non-existence of anything that could verify or disprove a moral preference in order to prove more or less the same thing.
I would say that the “to us” clause actually applies to everything, and that nothing is “inherent”, as you put it. Pebble sorting means something to the pebble sorters. Humans mean something to me. The entirety of morality boils down to what is important “to us”.
To me, moral objectivism is obviously wrong and subjectivism is obviously true, and this is embedded in my definition of morality. I’m actually unsure how anyone could think of it in any other coherent way.
I think it’s time to unpack “morality”. I think morality is feelings produced in the human mind about how people ought to act. That is, I think “murder is bad” is in some ways analogous to “Brussels sprouts are gross”. From this definition, it follows that I see moral objectivism as obviously wrong—akin to saying, “no man, Brussels sprouts are objectively, inherently gross! In the same way that the sky is objectively blue! / In the same way that tautologies are true!” (Actually, replace blue with the appropriate wavelengths to avoid arguments about perception.)
What do you think “morality” is, and where do you suppose it comes from?
I think morality is behaving so as to take into account the values and preferences of others as well as one’s own. You can succeed or fail in that, hence “accurate”.
Morality may manifest in the form of a feeling for many people, but not for everybody and not all feelings are equal. So I don’t think that is inherent, or definitional.
I don’t think the sprout analogy works, because your feeling that you don’t like sprouts doesn’t seriously affect others, but the psychopath’s fondness for murder does.
The feelings that are relevant to morality are the empathic ones, not personal preferences. That is a clue that morality is about behaving so as to take into account the values and preferences of others as well as one’s own.
If you think morality is the same as a personal preference...what makes it morality? Why don’t we just have one word and one way of thinking?
Because they feel different to us from the inside—for the same reason that we separate “thinking” and “feeling” even though in the grand scheme of things they are both ways to influence behavior.
In Math, empirical evidence is replaced by axioms. In Science, the axioms are the empirical evidence.
The point is that all rational agents will converge upon mathematical statements, and will not converge upon moral statements. Do you disagree?
I’m very, very sure that my morality doesn’t work that way.
Imagine you lived on a world with two major factions, A and B.
A has a population of 999999. B has a population of 1000.
Every individual in A has a very mild preference for horrifically torturing B, and the motivation is sadism and hatred. The torture and slow murder of B is a bonding activity for A, and the shared hatred keeps the society cohesive.
Every individual in B has a strong, strong preference not to be tortured, but it doesn’t even begin to outweigh the collective preferences of A.
From the standpoint of preference utilitarianism, this scenario is analogous to Torture vs. Dust Specks. Preference utilitarians choose torture, and a good case could be made even under good old human morality to choose torture as the lesser of two evils; Torture vs. Dust Specks is a problem in which I’d give serious weight to choosing torture.
Preference utilitarian agents would let A torture B—“shut up and multiply”. However, from the standpoint of my human morality, this scenario is very different from torture vs. dust specks, and I wouldn’t even waste a fraction of a second in deciding what is right in this scenario. Torture for the sake of malice is wrong (to me) and it really doesn’t matter what everyone else’s preferences are—if it’s in my power, I’m not letting A torture B!
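As a side note, the “shut up and multiply” aggregation in this scenario can be sketched with a few lines of arithmetic. The per-person utility values below are made-up illustrative assumptions, not numbers from the scenario; only the population sizes come from it:

```python
# Naive preference-utilitarian aggregation for the A-vs-B scenario.
# Per-person utilities are illustrative assumptions, not canonical values.
pop_a, pop_b = 999_999, 1_000

mild_gain_per_a = 0.1      # A's mild preference for the torture going ahead
strong_loss_per_b = -50.0  # B's strong preference against being tortured

total_if_torture = pop_a * mild_gain_per_a + pop_b * strong_loss_per_b
total_if_no_torture = 0.0

# A's many mild preferences swamp B's strong ones, so the naive sum
# comes out positive, i.e. in favor of letting A torture B.
print(total_if_torture)                        # ~49999.9
print(total_if_torture > total_if_no_torture)  # True
```

With these weights the sum favors A however strongly each member of B objects, which is exactly the feature of pure aggregation the comment is pushing back against.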
Morality evolved as a function of how it benefited single alleles, not societies. Under different conditions, it could have evolved differently. You can’t generalize from the way morality works in humans to the way it might work in all possible societies of entities.
Agreement isn’t important: arguments are important. You apparently made the argument that convergence on morality isn’t possible because it would require empirically detectable moral objects. I made the counterargument that convergence on morality could work like convergence on mathematical truth. So it seems that convergence on morality could happen, since there is a way it could work.
OK. Utilitarianism sucks. That doesn’t mean other objective approaches don’t work—you could be a deontologist. And it doesn’t mean subjectivism does work.
Says who? We can generalise language, maths and physics beyond our instinctive System 1 understandings. And we have.
This is the reason why I said that my morality isn’t preference utilitarian. If morality is “taking into account the values and preferences of others as well as your own”, then preference utilitarianism seems to be the default way to do that.
Alright...so if I’m understanding correctly, you are saying that moral facts exist and people can converge upon them independently, in the same ways that people will converge on mathematical facts. And I’m saying we can’t, and that morality is a preference linked to emotions. Neither of us have really done anything but restate our positions here. My position seems more or less inherent in my definition of morality, and I think you understand my position...but I still don’t understand yours.
Can I have a rudimentary definition of morality, an example of a moral fact, and a process by which two agents can converge upon it?
Can you give me a method of evaluating a moral fact which doesn’t at some point refer to our instincts? Do moral facts necessarily have to conform to our instincts? As in, if I proved a moral fact to you, but your instincts said it was wrong, would you still accept that it was right?
For lexicographers, the default is apparently deontology:
“conformity to the rules of right conduct”
“Principles concerning the distinction between right and wrong or good and bad behavior.”
etc.
1. A means by which communities of entities with preferences act in accordance with all their preferences.
2. Murder is wrong.
3. Since agents do not wish to be murdered, it is in their interests to agree to refrain from murder under an arrangement in which other agents agree to refrain from removing them.
I don’t see why I need to. Utilitarianism and deontology take preferences and intuitions into account. Your argument against utilitarianism is that it comes to conclusions which go against your instincts. That isn’t just an assumption that morality has something to do with instincts; it is a further assumption that your instincts trump all further considerations. It is an assumption of subjectivism.
You are saying objectivism is false because subjectivism is true. If utilitarianism worked, it would take intuitions and preferences into account, and arrive at some arrangement that minimises the number of people who don’t get their instincts or preferences satisfied. Some people have to lose. You have decided that is unacceptable because you have decided that you must not lose. But utilitarianism still works in the sense that a set of subjective preferences can be treated as objective facts, and aggregated together. There is nothing to stop different utilitarians (of the same variety) converging on a decision. Utilitarianism “works” in that sense. Your objection is not that convergence is not possible, but that what is converged upon is not moral, because your instincts say not.
But you don’t have any argument beyond an assumption that morality just is what your instincts say. The other side of the argument doesn’t have to deny the instinctive or subjective aspect of morality, it only needs to deny that your instincts are supreme. And it can argue that since morality is about the regulation of conduct amongst groups, the very notion of subjective morality is incoherent (parallel: language is all about communication, so a language that is only understood by one person is a paradox).
Maybe. Almost everybody who has had their mind changed about sexual conduct had overridden an instinct.
So there are several things I don’t like about this..
0) It’s not in their interests to play the cooperative strategy if they are more powerful, since the other agent can’t remove them.
1) It’s not a given that all agents do not wish to be murdered. It’s only luck that we wish not to die. Sentient beings could just as easily have evolved from insects who allow themselves to be eaten by their mates, or by their offspring.
2) So you sidestep this, and say that this only applies to beings that do not wish to be murdered. Well now, this is utilitarianism. You’d essentially be saying that all agents want their preferences fulfilled, therefore we should all agree to fulfill each other’s preferences.
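Objection 0 can be made concrete with a toy payoff sketch (the numbers, the function, and the move names are all hypothetical illustrations, not anyone’s actual model): when power is unequal, refraining stops being the self-interested strategy for the stronger agent.

```python
# Toy model of the "mutual non-murder pact" (all payoffs are illustrative).
# Each agent chooses "refrain" or "attack"; an attack only succeeds
# against an agent with strictly less power.

def payoff(my_power, their_power, my_move, their_move):
    score = 0
    if their_move == "attack" and their_power > my_power:
        score -= 10  # I get removed by a stronger attacker
    if my_move == "attack" and my_power > their_power:
        score += 3   # I gain from removing a weaker rival
    return score

# Equal power: attacking gains nothing, so the pact is stable.
# Unequal power: the stronger agent gains by attacking and risks nothing.
weak, strong = 1, 5
print(payoff(strong, weak, "attack", "refrain"))   # 3: attack pays for the strong
print(payoff(weak, strong, "refrain", "attack"))   # -10: the weak agent loses anyway
```

With equal power, `payoff(3, 3, "attack", "attack")` is 0 for both sides, so neither gains by defecting; the asymmetric case is where the cooperative arrangement loses its grip.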
Essentially yes. But to rephrase: I know that the behavior of all agents (including myself) will work to bring about the agent’s preferences to the best of the agent’s ability, and this is true by definition of what a “preference” is.
I’m not sure I follow what you mean by this. My ideas about sexual conduct are in line with my instincts. A highly religious person’s ideas about sexual conduct are in line with the instincts that society drilled into them. If I converted that person into sex-positivism, they would shed the societal conditioning and their morality and feelings would change. Who is not in alignment with their instincts?
(Instincts here means feelings with no rational basis, rather than genetically programmed or reflexive behaviors)
I am not sure what the argument is here. The objectivist claim is not that every entity actually will be moral in practice, and it’s not the claim that every agent will be interested in settling moral questions: it’s just the claim that agents who are interested in settling moral questions, and have the same set of facts available (i.e. live in the same society) will be able to converge. (Which is as objective as anything else. The uncontentious claim that mathematics is objective doesn’t imply that everyone is a mathematician, or knows all mathematical truths.)
I have described morality as an arrangement within a society. Alien societies might have a different morality to go with their different biology. That is not in favour of subjectivism, because subjectivism requires morality to vary with personal preference, not objective facts about biology. Objectivism does not mean universalism. It means agents, given the same facts, and the willingness to draw moral conclusions from them, will converge. It doesn’t mean the facts never vary; if they do, so will the conclusions.
All agents want their preferences fulfilled, and what “should” means is being in accordance with some arrangement for resolving the resulting conflicts, whether utilitarian, deontological, or something else.
The convertee. In my experience, people are generally converted by arguments...reasoning...system 2. So when people are converted, they go from Instinct to Reason. But perhaps you know of some process by which subjective feelings are transferred directly, without the involvement of system 2.
But don’t you see what you’re doing here? You are defining a set of moral claims M, and then saying that any agents who are interested in M will converge on M!
The qualifier “agents who are interested in moral questions” restricts the set of agents to those who already agree with you about what morality is. Obviously, if we all start from the same moral axioms, we’ll converge onto the same moral postulates—the point is that the moral axioms are arbitrarily set by the user’s preferences.
Wait, so you are defining morality as a system of conflict resolution between agents? I actually do like that definition...even though it doesn’t imply convergence.
Then utilitarianism is the solution that all agents should maximize preferences, deontology is the solution that there exists a set of rules to follow when arbitrating conflict, etc.
Counterexample—Imagine a person who isn’t religious, who also believes incest between consenting adults is wrong (even for old infertile people, even if no one else gets to know about it). There is no conflict between the two agents involved—would you say that this person is not exhibiting a moral preference, but something else entirely?
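The utilitarian “solution” described above, aggregating every agent’s preferences and picking the option with the highest total, can be sketched as a toy procedure (the agents, options, and scores below are all hypothetical):

```python
# Toy utilitarian conflict resolution: treat each agent's subjective
# preference scores as objective facts, sum them per option, and pick
# the option with the highest total.

def utilitarian_choice(preferences):
    """preferences: {agent: {option: score}} -> option maximizing the total."""
    options = next(iter(preferences.values())).keys()
    return max(options, key=lambda o: sum(p[o] for p in preferences.values()))

prefs = {
    "alice": {"ban": 2, "permit": -1},
    "bob":   {"ban": -3, "permit": 1},
    "carol": {"ban": 0, "permit": 2},
}
# Totals: ban = -1, permit = 2, so "permit" wins even though alice loses.
print(utilitarian_choice(prefs))  # permit
```

Note that any two utilitarians (of the same variety) running this on the same inputs converge on the same answer, which is the sense in which it “works”; it also makes vivid that some agents have to lose.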
The vast majority of people are not convinced by argument, but by life experience. For most people, all the moral rhetoric in the world isn’t as effective as a picture of two gay men crying with happiness as they get married.
That’s beside the point, though—you are right that it is possible (though difficult) to alter someone’s moral stance through argument alone. However, “System 1” and “System 2” share a brain. You can influence “system 1” via “system 2”—reasoning can affect feelings, and vice versa. I can use logical arguments to change someone’s feelings on moral issues. That doesn’t change the fact that the moral attitude stems from the feelings.
If you can establish a shared set of “moral axioms” with someone, you can convince them of the rightness or wrongness of something with logic alone. This might make it seem like any two agents can converge on morality—but just because most humans have certain moral preferences hardwired into them doesn’t mean every agent has the same set of preferences. I have some moral axioms, you have some moral axioms, and we can use shared moral axioms to convince each other of things… but we won’t be able to convince any agent which has moral axioms that do not match with ours.
I haven’t defined a set of moral claims. You asked me for an example of one claim. I can argue the point without specifying any moral conclusions. The facts I mentioned as the input to the process are not moral per se.
In a sense, yes. But only in the sense that “agents who are interested in mathematical questions” restricts the set of agents to those who are interested in “mathematics” as I understand it. On the other hand, nothing is implied about the set of object level claims moral philosophers would converge on.
I don’t have to accept that, because I am not using a subjective criterion for “morality”. If you have a preference for Tutti Frutti, that is not a moral preference, because it does not affect anybody else. The definition of morality I am using is not based on any personal preference of mine, it’s based on a recognition that morality has a job to do.
If no convergence takes place, how can you have an implementable system? People are either imprisoned or not, they cannot be imprisoned for some agents but not for others.
You are tacitly assuming that no action will be taken on the basis of feelings of wrongness, that nobody ever campaigns to ban things they don’t like.
If system 1 was influenced by system 2 , then what stems from system 1 stemmed from system 2, and so on. You are drawing an arbitrary line.
If moral axioms are completely separate from everything else, then you would need to change their axioms. If they are not, then not. For instance, you can argue that some moral attitudes someone has are inconsistent with others. Consistency is not a purely moral criterion.
If “moral axioms” overlap with rational axioms, and if moral axioms are constrained by the functional role of morality, there is plenty of scope for rational agents to converge.
Does it follow, then, that rational agents will always be “moral”? Does it mean that the most rational choice for maximizing any set of preferences, is also in line with “morality”?
That would put morality into decision theory, which would be kind of nice.
But I can’t think how an agent whose utility function simply read “Commit Murder” could possibly make a choice that was both moral (the way morality is traditionally defined) and rational.
People who believe in the Convergence thesis tend not to believe in the Orthogonality thesis. They tend to use the traditional definition of rationality:
“In its primary sense, rationality is a normative concept that philosophers have generally tried to characterize in such a way that, for any action, belief, or desire, if it is rational we ought to choose it” (WP)
In detail, they tend to see rationalists as having a preference for objectivity, consistency and non-arbitrariness, including in their preferences. Thus they would tend to see Clippies as having highly rational thought and highly irrational (because arbitrary) desires. Likewise they would see a murderer who does not want to be murdered as inconsistent, and therefore irrational.
Another way of looking at it is that they would see highly intelligent and rational agents as climbing the Maslow hierarchy.
Depends how rational. For a Convergence theorist, an ideal, supremely rational agent will have rational desires and preferences. A less ideal one might fall short as a result of being non-ideal.
Thanks for that term. This makes things clearer. Based on what you are arguing, does that make you a convergence theorist then? (Or at least, you seem to be defending convergence theory here, even if you don’t wholeheartedly accept it)
I dunno...I just find the orthogonality thesis intuitively obvious, and I’m having real trouble grasping what exactly the thought process that leads one to become a convergence theorist might be. I’m hoping you can show me what that thought process is.
The page even says it:
Now, I agree that there exist some G such that this is the case, but I don’t think this set would have anything to do with morality as humans understand it.
You seem to be making the argument that one of the characteristics that would automatically qualify something as a candidate for G is immorality.
This makes no intuitive sense. Why couldn’t you make an efficient real world algorithm to destroy all life forms? It seems like, in the absence of some serious mathematical arguments to the contrary, we ought to dismiss offhand any claim that efficient real world algorithms for murder are impossible.
Why is that important?
I think I can see where the intuitive appeal comes from, and I think I can see where the errors are too.
I can see why that is appealing, but it is not equivalent to the claim that any intelligent and rational entity could have any goal. Of course you can write a dumb algorithm to efficiently make paperclips, just as you can build a dumb machine that makes paperclips. And of course an AI could, technically, design and/or implement such an algorithm. But it doesn’t follow that an AGI would do so. (Which is two propositions: it doesn’t follow that an AI could be persuaded to adopt such a goal, and it doesn’t follow that such a goal could be programmed in ab initio and remain stable.)
The Convergentist would want to claim:
“To assert the Orthogonality Thesis is to assert that no matter how intelligent and rational an agent, no matter the breadth of its understanding, no matter the strength of its commitment to objectivity, no matter its abilities to self-reflect and update, it would still never realise that making huge numbers of paperclips is arbitrary and unworthy of its abilities”
The orthogonality claim only has bite against Convergence/Moral Realism if it relates to all or most or typical rational intelligent agents, because that is how moral realists define their claim: they claim that ideal rational agents of a typical kind will converge, or that most rational-enough and intelligent-enough agents will converge. You might be able to build a (genuinely intelligent, reflecting and updating) Clippy, but that wouldn’t prove anything. The natural existence of sociopaths doesn’t disprove MR because they are statistically rare, and their typicality is in doubt. You can’t prove anything about morality by genetically engineering a sociopath.
As an argument against MR/C, Orthogonality has to claim that the typical, statistically common kind of agent could have arbitrary goals, and that the evidence of convergence amongst humans is explained by specific cultural or genetic features, not by rationality in general.
ETA:
If we don’t understand the relationship between instrumental intelligence and goals, Clippies will seem possible—in the way that p-zombies do if you don’t understand the relationship between matter and consciousness.
Because I want to be sure that I’m understanding what the claim you’re making is.
Okay...so I agree with the Convergence theorist on what the implications of the Orthogonality Thesis are, and I still think the Orthogonality Thesis is true.
Hold on now...that makes the claim completely different than what I thought we were talking about up till now. I thought we were talking about whether or not all rational agents would be in agreement about what morality is, independent of specifically human preferences?
We can have the other discussion too...but not before settling whether or not the Orthogonality Thesis is in fact true “in principle”. Remember, we originally started this discussion with my claim that morality is feelings/preference, as opposed to something you could figure out (i.e. something embedded into logic/game theory or the universe itself.) We weren’t originally talking about rational agents to shed light on evolution or plausible AI...we brought them in as hypothetical agents who converge upon the correct answer to any answerable question, to explore whether or not “what is good” is independent from “what do humans think is good”.
I don’t see how. What did you think we were talking about?
I thought we were talking about whether morality was something that could be discovered objectively.
I said:
Then you said:
Then I said
You disagreed, and said
To which I countered
You disagreed:
Which is why
Hence
doesn’t make any sense in our discussion. All rational agents converge on mathematical and ontological facts, by definition. My argument was that there is no such thing as a “moral fact”, and moral statements can only be discussed in reference to the psychology of a small set of creatures which includes humans and some other mammals. I argued that moral statements can’t be “discovered” true or false in any ontological or mathematical sense, nor are they deeply embedded into game theory (meaning it is not always in the interest of all rational agents to follow human morality) - even though game theory does explain how we evolved morality given our circumstances.
If you admit that at least one of all possible rational agents doesn’t converge upon morality, you’ve been in agreement with me this entire time—which means we’ve been talking about different things this entire time...so what did you think we were talking about?
Only by a definition whereby “rational” means “ideally rational”. In the ordinary sense of the term, it is perfectly possible for someone who is deemed “rational” in a more-or-less, good-enough sense to fail to understand some mathematical truths. The existence of the innumerate does not disprove the objectivity of mathematics, and the existence of sociopaths does not disprove the objectivity of morality.
Do you believe that it is possible for a rational agent to fail to understand a mathematical truth? Because that seems rather commonplace to me. Unless you mean ideally rational....
I did mean ideally rational.
The whole point of invoking an ideal rational agent in the first place was to demonstrate that moral “truths” aren’t like empirical or mathematical truths in that you can’t discover them objectively through philosophy or mathematics (even if you are infinitely smart). Rather, moral “truths” are peculiar to humans.
If you want to illustrate the non-objectivity of morality, then stating that even ideal rational agents won’t converge on them is one way of expressing the point, although it helps to state the “ideal” explicitly. However, that is still only the expression of a claim, not the “demonstration” of one.
I’m not sure what you mean by “statistically common” here. Do you mean a randomly picked agent out of the set of all possible agents?
I mean likely to be encountered, likely to evolve or to be built (unless you are actually trying to build a Clippy)
I think you’ve misunderstood the meta-ethics sequences, then, or I have, because
is quite similar to Eliezer’s position. Although Juno_Watt may have reached it from another direction.
I read it as a warning about expecting sufficiently rational beings to automatically acquire human morality, in the same way that sufficiently rational beings would automatically acquire knowledge about true statements (science, etc). The lesson is that preferences (morality, etc) are different from facts.
If you want to know Eliezer’s views, he spells them out explicitly here—although I think the person most famous for this view is Nietzsche (not that he’s the first to have held this view).
To me, “No universally compelling arguments” means this—two rational agents will converge upon factual statements, but they need not converge upon preferences (moral or otherwise) because moral statements aren’t “facts”.
It really doesn’t matter if you define the pebble sorting as a “moral” preference or a plain old preference. The point is that humans have a morality module—but that module is in the brain and not a feature which is implicit in logical structures, nor is it a feature implicit in the universe itself.
I agree that is what it is trying to say, but...as you illustrated above...it only appears to work if the reader is willing to be fuzzy about the difference between preference and moral preference.
For some value of “explicit”. He doesn’t even restrict the range of agents to rational agents, and no-one expects irrational agents to agree with each other, or with rational ones.
Mathematical statements aren’t empirical facts either, but convergence is uncontroversial there.
Are you quite sure that morality isn’t implicit in the logic of how-a-society-of-entities-with-varying-preferences-manages-to-rub-along?
Juno Watt has read the sequences, but still doesn’t know what Eliezer’s position is.
Ummmmm… do I draw the line around the whole of the human race? I’m not sure whether I do or not. I do know that there is a certain boundary (defined mostly by culture) where I get much more likely to say ‘that’s your problem’ and become much less skeptical/cynical about preferences, although issues that seem truly serious always get the same treatment.
For some reason, choosing to accept that somebody’s utility function might be very different from your own feels kind of like abandoning them from the inside. (Subjective!).
You could also, in principle, have a utilitarianism that gives unequal weights to different people. I’ve asked around here for a reason to think that the egalitarian principle is true, but haven’t yet received any responses that are up to typical Less Wrong epistemic standards.
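The egalitarian principle and its weighted alternatives differ only in the weight vector applied inside the social welfare function. A minimal sketch, with purely hypothetical utilities and weights:

```python
# Social welfare as a weighted sum of individual utilities.
# Equal weights give the classical (egalitarian) utilitarian sum;
# unequal weights give a partial variant. All numbers are illustrative.

def welfare(utilities, weights):
    return sum(w * u for w, u in zip(weights, utilities))

utilities = [5, -2, 1]  # three people's utility under some policy

equal   = welfare(utilities, [1, 1, 1])    # egalitarian weighting
partial = welfare(utilities, [3, 1, 0.5])  # e.g. one's own family weighted higher

print(equal)    # 4
print(partial)  # 13.5
```

Nothing in the aggregation machinery itself forces the weights to be equal; the egalitarian principle is an extra premise, which is exactly what the comment above asks to see justified.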
It’s a very clear Schelling point. At least until advances in uplifting/AI/brain emulation/etc. complicates the issue of what counts as a human.
This seems to me very unclear actually. In fact, I have never encountered someone that acted as if this was (approximately) the decision criterion they were following. For all the humans I have personally observed, they seem to be acting as if they, their friends, and their family members are weighted thousands or millions of times greater than perfect strangers.
That, or something like it, is the decision criterion people are expected to follow when acting in official capacity.
You’re applying moral realism here...as in, you are implying that moral facts exist objectively, outside of a human’s feelings. Are you doing this intentionally?
Your alternative would be to think an aristocratic or meritocratic principle is true. (It’s either equal or unequal, right?)
I think we can assume aristocracy is a dead duck along with the Divine Right of Kings and other theological relics.
Meritocracy in some form I believe has been advocated by some utilitarians. People with Oxford degrees get 10 votes. Cambridge 9. Down to the LSE with 2 votes and the common ignorant unlettered herd 1 vote…
This is kind of an epistemocratic voting regime which some think might lead to better outcomes. Alas, no one has been game to try get such laws up. There is little evidence that an electorate of PhDs is any less daft/ignorant/clueless/idle/indifferent on matters outside their specialty than the general public.
From a legal rights perspective, egalitarianism is surely correct. Equal treatment before the law seems a lot easier to defend than unequal treatment.
But put something up that assumes a dis-egalitarian principle and see how it flies. I’d be interested to see if you can come up with something plausible that is dis-egalitarian and up to epistemic scratch...
Hint: plutocracy...
I wouldn’t use those terms, since they bring in all kinds of unnecessary connotations. I would say the opposite of the egalitarian principle is the non-egalitarian principle. I was thinking less along the lines of nobles/commoners and more along the lines of my children/other people’s children. I find the idea (that I think the egalitarian principle entails) that I have as much obligation to perfect strangers as to my wife to be extremely counter-intuitive.
I don’t consider the Divine Right of Crowds (‘human rights’, or whatever the cool kids are calling it these days) to be any less silly than those ‘theological relics’.
This part isn’t really relevant to what I’m talking about, since I’m not discussing equal weight in decision-making, but equal weight in a social welfare function. My infant son’s interests are one of my greatest concerns, but he currently has about zero say in family decision-making.
Equal treatment before the law does not necessarily mean that individuals interests are weighted equally. When was the last time you heard of jurors on a rape trial trying to figure out exactly how much utility the rapist got so they could properly combine that with the disutility of the victim?
Of course what “the cool kids” are actually talking about is more like a Divine Right of People; it’s got nothing to do with treating people differently when there’s a mass of them. And of course adding the word “divine” is nothing more than a handy way of making it sound sillier than it otherwise would (whereas in “Divine Right of Kings” it is a word with an actual meaning; the power of kings was literally thought to be of divine origin).
So, removing some of the spin, what you’re apparently saying is that “let’s treat all people as having equal rights” seems as silly to you as “let’s suppose that one person in each country is appointed by a divine superbeing to rule over all the others”. Well, OK.
It means that people are treated unequally only according to differences that are actually relevant. (Of course then the argument shifts to which differences are relevant; but at least then one actually has to argue for their relevance rather than simply assuming it on traditional grounds.)
Having said all of which, I agree that the usual arguments for equal weighting completely fail to show that a person shouldn’t give higher weighting to herself, her family, her friends, etc.
The state in which I live has statute law initiatives, so yes, people actually do ‘rule’ only if there is a large enough mass of them. Individually, I have no such (legal) right.
Speaking of dubious origins:
I am in complete agreement with the following:
In any case, the point of my comment was not to bring up politics, but to show the incompatibility of typical intuitions with regards to how one should treat family and friends compared to strangers with what (the most popular flavors of) utilitarianism seems to indicate is ‘correct’.
I have argued with utilitarians several times on Less Wrong and the discussions seem to follow the same sequence of backpedalling. First they claim utilitarianism is true. Then, when I ask and they are unable to conceive of an experiment that would verify or falsify it, they claim that it isn’t the kind of thing that has a truth-value, but that it is a description of their preferences. Next, I demonstrate that relying on revealed preference shows that virtually nobody actually has utilitarian preferences. Lastly, they claim that intuition gives us good reason to go with utilitarianism (even if it isn’t True). My response to NancyLebovitz in this thread is yet another attempt to show that, no, it really isn’t intuitive.
Is this an accurate description of what is going on or am I mind-killed on the subject of normative ethics (or both, or neither)?
When you first used the phrase “Divine Right of Crowds” you immediately explained in parentheses that you meant “human rights” or something similar. Now you seem to be talking about democracy instead. The two aren’t the same, though probably approval of one is correlated with approval of the other.
Anyway, “crowds” in the literal sense still aren’t involved (it needs N people to get something voted on, but that doesn’t require them to be colocated or to know one another or anything else crowd-like other than sheer numbers); and if you’re now using “Divine Right of Crowds” to mean “a political system that tries to favour outcomes preferred by more people rather than fewer” then, again, I suggest that you’re picking terminology simply to make the other side look as silly as possible.
It is possible that those words from the Declaration of Independence show that in the 18th century people believed in something like a “Divine Right of Crowds”. (It’s not entirely obvious, though. Perhaps they actually just believed in a Right of Crowds and thought what they said would sound better if they included “created” and “by their Creator”; compare the mention of a Creator at the end of some editions of the Origin of Species, or Einstein’s “God does not play dice”.)
But that doesn’t mean that people who now favour democracy, or human rights, or independence of the US from the UK, have to believe (or commonly do believe) that those things are divinely ordained. Similarly, there are people now who want kings without believing in a Divine Right of Kings, and pretending that they do would be a shabby rhetorical trick.
Yup, there are indeed such incompatibilities (though I think one could make a reasonable argument that, given human nature, overall utility is likely to be higher in a society where people care more about themselves and those closer to them than in one where they truly care equally about everyone; surely not nearly so much more as our intuitions lead to, though).
I’ll take your word for it, but I’m a bit surprised: I’d have thought an appreciable fraction of LWers advocating utilitarianism would start from the position that it’s an expression of their preferences rather than an objective fact about the world.
(For my part, not that it particularly matters, I do indeed care most about myself, and less about people less connected to me, physically further from me, more unlike me, etc., but I find that as I reflect more on my preferences in any given case they shift nearer to egalitarianism, though they often don’t get all the way. Something like utilitarianism seems like a pretty decent approximation to what I’d want in law.)
I can’t tell, obviously, but I do tend to think that things like switching ground without noticing (“human rights” --> democracy) and insisting on using question-begging language (“Divine Right of Crowds”) are often signs of someone not thinking as clearly as they might be.
Counterpoint: it offers stability, which is useful regardless of theology. See the Fnargle World thought experiment and various other neo-reactionary stuff on Why Democracy Is Bad.
Let me put it this way: would you rather we’re ruled by someone who’s skilled at persuading us to elect him, and who focuses resources on looking good in four years; or someone who’s been trained since birth to govern well, and knows they or their descendants will be held accountable for any future side-effects of their policies?
These arguments may be deeply flawed, but hereditary aristocracy doesn’t stand or fall with the Divine Right of Kings.
Stability is good if governance is good and bad if not.
...and you can get rid of..
OK. Looks like democracy with a supply of candidates from Kennedy-style political dynasties is the best of all possible systems...;-)
Kinda. In practice a lot of the power of government rests in agencies that offer advice to the currently ruling party, and those agencies often embody significant powers themselves. It would be a mistake to confuse the elected executive branch of government with the government entire. It’s not even clear to me that they have the majority share of influence over what actually happens.
I was suggesting that it might serve to render governance better.
You still have to focus on retaining popularity, via attacking political opponents and increasing PR skills, unless the elections are total shams.
Also, to be clear, I’m not advocating this position; just pointing out there are other arguments for it than the “Divine Right of Kings”.
Under democracy, the people can decide if their stable government has outstayed its welcome after so many years.
Whilst aristos just have to keep slipping their rivals the poisoned chalice...much more discreet.
Got that.
Except that due to problems with rational ignorance they frequently make bad choices. Furthermore, this system encourages politicians to make shortsighted decisions.
Whereas aristos can be batshit crazy due to problems with genetics. Furthermore, this system encourages them to make self-centered decisions.
What do you mean by “self-centered”? It is after all in a noble’s self-interest to pursue the success of his manor and its inhabitants.
I’m not sure the lord of the manor and the tenant farmer define “success” the same way.
The politician and the voter in a democracy also don’t define “success” in the same way.
There’s an ordinary selection mechanism for politicians, and an ordinary selection mechanism for lords of the manor.
Ideally, the ordinary selection mechanism for politicians (elections) would choose people who define success the way the voter would define success. That said, we both know that this is not how things actually work. For principal-agent delegation reasons, politicians often have their own agendas that conflict with voter preferences. The politician agenda diverges increasingly from the voter agenda as the number of voters increases (i.e. national figures generally have more freedom to pursue their own ends than county officials).
Still, politician agendas cannot completely diverge from voter preferences. Observationally, many voter preferences are implemented into law. As an extreme example, bribery is illegal even though the prohibition is bad for most politicians. So there is reason to think that the ordinary selection process for politicians leads to some connection in the definition of success (teleologically, if not cognitively).
By contrast, there is no particular reason to think the ordinary selection mechanism (inheritance) picks lords of the manor who want to implement tenant farmers’ preferences. Unless you include revolutionary change, which does not seem like an ordinary selection process.
I think that is what I was trying to say, but you said it much better.
Inasmuch as democracy works, they do. In an ideal democracy, representatives are servants of the people who are fired if they don’t deliver. Diverging interests are failures, not inherent to democracy.
What do you mean by “inherent to democracy”? Certain types of failures, e.g., politicians pursuing short sighted policies because they’re not likely to be around when said policies implode, are systemic to democracies.
In practice short-termism is ameliorated by life presidents, second chambers, career civil servants, etc.
To a certain extent. However, the bureaucrat has no motivation to care about the welfare of the people, not even the politician’s desire to get reelected or the noble’s incentive to make his estate successful. The bureaucrat’s incentive, by contrast, is to expand his bureaucratic empire, frequently at the expense of the nation as a whole.
But it’s still long termist. None of the cogs does the work of the whole machine itself. You also need a free press, even though their motivation is to sell pieces of paper.
It is also in a factory-owner’s interest to pursue the success of his factories and their workers. And yet...
What’s more, it’s in an employer’s interest to have workers who are stakeholders.
Only if we define “interest” in a rational sense (i.e., “how rational agents embodying the role of ‘employers’ should optimally behave if their goals/values are X”), rather than in an evopsych sense (i.e., “how human apes embodying the role of ‘employers’ will tend to behave, and what that implies that the encoded values of human apes actually are”).
Maintaining or improving position within the dominance hierarchy often co-opts other concerns that a human ape might have, up to and including bare survival. Often, that cognitive dissonance is “resolved” by that human ape convincing themselves that strategies which improve their position within the dominance hierarchy are actually strategies to achieve other goals that seem more palatable to the parts of their brain that cogitate palatability.
(In Anglo: “We like bossing more than we like living well, but we like thinking that we’re trying to live well more than we like thinking that we’re trying to boss. So, we trick ourselves into believing that we’re trying to live well, when we’re really just trying to boss.”)
It’s in their economic interest to tax the peasantry to almost but not quite the point of starvation, and use the excess to fund land-acquisition, which is pretty much what they did for centuries. You could argue that with the benefit of hindsight, what they should have done is abandoned agriculture+war for education+industrialisation, since [by some measures] ordinary citizens of the present are wealthier than the aristocrats of the past. But I could argue right back that the industrial revolution wasn’t that good for the aristocracy, as a class, in the end.
Only if you consider absolute gains preferable to relative/”zero-sum” gains, which our evolved psychological makeup isn’t really prepared to do very well.
Social animals with a natural dominance hierarchy will often see “how well am I doing right now, compared to how well everyone else around me is doing right now?” as a more salient question than “how well am I doing right now, compared to how well I was doing before / how well I could be doing?”.
That’s what I meant.
nod I just felt it needed to be stated more explicitly.
Yes and it’s in the interest of elected politicians to take all the property of 49% of the population and divide it among the remaining 51%.
Except that that never happens, and it’s not in their interests to disrupt the economy that much, and it’s also not in their interests to do something that might lead to civil unrest...and it never happens.
Well, it never happens at the 49%-51% level, but that’s because there aren’t any countries where 49% of the country is wealthy enough to be worth plundering (see Pareto). Massive redistribution of wealth away from minorities has happened quite a bit, as in Zimbabwe, Haiti, Germany, and others. The various communist revolutions seem to be an example of this, if you allow ‘democracy of the sword’, and I would suspect pogroms are as well, to the extent that property is looted as well as destroyed.
I don’t think you have many good examples of democracies there.
One counterexample is sufficient to break a “never.” To the extent that ‘good’ democracies do not do this, it is not a statement about the incentive structure of democracy, but a statement about the preferences of the voters of that particular polity.
Or the details of the exact structure of the democracy which may create relevant incentives.
Like Vaniver said, it’s never happened this explicitly, but demanding that [group you’ve just demonized] pay their “fair share” is relatively common rhetoric. And yes, politicians are willing to do this even as it gradually destroys the economy as is happening right now in Europe.
Quite. It’s hard to make it stick unless it is seen as fair.
You mean southern Europe? I don’t know who you think the 49% are. (In fact, given the tendency of democracies to alternate between parties of the left and right, one would expect the 49% and 51% to switch roles, leading to an averaging out).
In any case, if Greek or Spanish voters vote for unsustainable benefits, more fool them. It wasn’t done to them; they did it to themselves.
I think you’re overestimating the amount of difference between the two parties. Also, this still screws the economy.
See my comment on rational ignorance above.
The two parties where?
I think you may be over generalising from (your assessment of) your own nation.
Uhhh...so democracy is not theoretically perfect. The discussion was about whether there is anything practical that is less bad, eg aristocracy.
I should have said two coalitions, sorry.
A stable government that loses power when it loses an election is, in fact, “unstable”.
Eh, taste-testers, bodyguards and damn good doctors are cheaper than election campaigns.
Well, I suppose all govt. is unstable, then. Which dynasty has been in power forever?
What good is that going to do a peasant like me? It’s not like they are going to knock off the cost of electioneering from my taxes.
Stability is a matter of degree, as you’re well aware. Few dynasties lose power after four years of rule.
Even a massive amount of spending on election campaigns is less likely to succeed (and thus less stable) than a (relatively) small amount of spending on safeguarding against assassination.
Also, election campaigns have negative effects on, among other things, the rationality of the populace; and they encourage polarization in the long term—in contrast, bodyguards discourage trying to off your rich uncle for the inheritance.
I can’t seem to google up anything with the words “Fnargle World”.
http://unqualified-reservations.blogspot.com/2007/05/magic-of-symmetric-sovereignty.html
This is the reference.
Considering many of them profess to include other kinds of intelligence, at least in theory … it seems to be mostly a consistency thing. Why shouldn’t I include Joe The Annoying Git?
Ask the counter-question: what do you plan to do once you’ve settled to your satisfaction the struggle between moral concern X and moral concern Y? Have you known yourself to change your behavior after settling such issues?
I agree that people have different opinions about the relative value of different moral concerns. What I’m pessimistic about is the value of discussing those differences by focusing on questions like the examples I gave.
If you wanted to be really pessimistic about mathematics research, you could argue that most of pure math research consists of privileged questions.
Of course! I have to change my behavior to be in accord with my new-found knowledge about my preferences. A current area of moral uncertainty for me revolves around the ethics of eating meat, which is motivating me to do research on the intelligence of various animals. As a result, the bulk of my meat consumption has shifted from more intelligent/empathetic animals (pigs) to less intelligent animals (shrimp, fish, chicken).
Through discussion, I’ve also influenced some friends into having more socially liberal views, thus changing the nature of their interpersonal interactions. If optimizing charity was the question that people focused on, we would still end up having the discussion about whether or not the charity should provide abortions, contraceptives, etc.
You can’t escape discussing the fundamental moral questions if those moral struggles create disagreement about which action should be taken.
I do think that it might be better to focus on the underlying moral values rather than the specific examples.
Cool. I’ve been having second thoughts about eating pigs as well.
They don’t seem to pass the mirror test (which has been my criterion for such things, even if flawed).
Since GiveWell hasn’t found any good charities that provide abortions and give out contraceptives the answer in this community is probably: “No, charity shouldn’t do those things.”
That’s however a very different discussion from mainstream US discussion over the status of abortion.
Did an ‘is’ just morph into a ‘should’ there somehow?
Or “There is not an existing charity which does those things well enough to donate towards.”
“Givewell hasn’t found any good charities that do X” does not imply “Charity should not do X”
We are talking about the mainstream US here.
Qiaochu_Yuan’s argument was that debates over abortion are privileged questions (discussed disproportionately to the value of answering them).
I added that while this is true in regard to the specific nature of the questions, the underlying moral uncertainty that the questions represent (faced by the US population—lesswrong is pretty settled here) is one that is valuable to discuss for the population at large because it affects how they behave.
Givewell isn’t worrying about moral uncertainty—they’ve already settled approximately on utilitarianism. Not so for the rest of the population.
Important to whom? To the people who choose low-hanging research topics that promise easy tenure? Could easily be the most important questions for them.
The apocryphal Hamming-story only shows that people would like to conflate their own “most important” which takes into account their career trajectory with “most important in the field”, and don’t appreciate being told they compromised in favor of paying the bills.
I would say that the economy has a theoretical potential for higher utility payoffs, but I have much less confidence in the government’s ability to implement one of the better responses out of the vast possible response space.
It would be much better to develop a new, abundant clean energy source than to come up with some slightly more efficient recycling scheme, but if you had to assign one or the other as a project for a bunch of high schoolers, you’d be better off choosing the second, because they’d stand a chance of actually accomplishing it.
The correct response to Hamming’s question is “Because I have a comparative advantage in working on the problem I am working on”. There are many, many important problems in the world of greater and lesser degrees of importance. There are many, many people working on them, even within one field. It does not make sense for everyone to attack the same most important problem, if indeed such a single problem could even be identified. There is a point of diminishing returns. 100 chemists working on the most important problem in chemistry are not going to advance chemistry as much as 10 chemists working on the most important problem, and 90 working on a variety of other, lesser problems.
The notion of a “better question” seems inherently relative, even using the suggested counter-question, “What do I plan on doing with an answer to this question?” If the answer “nothing” flags privileged questions, then what answer(s) would indicate more important ones? You imply that the answers to these better questions impel us to do something, but what does “doing” mean? Perhaps more specifically, what does action include (and exclude) in this context?
It’s relative to your utility function (feel free to take this literally or metaphorically depending on how you feel about modeling people as having utility functions). An action is something you do to increase your expected utility.
For example, take the question “should I give to charity now or invest my money and give more to charity later?” If I had an answer to this question (which I don’t; GiveWell recommends the former but Robin Hanson recommends the latter), I would change my charitable donation patterns (they don’t happen very often yet, partially because of my uncertainty about the answer to this question) and maybe also write a blog post convincing others to do the same.
For a less direct example, suppose you are interested in physics and you ask yourself some interesting physics question one day. You probably aren’t building a rocket or anything else that would require an answer to such a question; more likely you’re just intellectually curious and think the process of finding out an answer will enrich you in some way. Intellectual curiosity is probably part of your utility function.
Journalists are not paid to print the truth. They are paid to sell newspapers. (This correlates to your “most views” idea.)
However, people buy newspapers (and consume other forms of media). People choose to read celebrity gossip and trivia rather than constructive solutions for world peace (and other things you might think ‘important’). I think it’s intellectually lazy to blame the media. They produce for their audience.
Also, there are diverse media with diverse views of what is ‘important’. And a lot of people don’t want answers to questions. They don’t want solutions to problems. They want to be entertained. They want to be amused.
Is this so terrible?
Who’s blaming the media for anything? All I said is that this is one reason to stop paying attention to the media.
Another form of this is asking questions that are motivated by obtaining material goods for the self. More altruistic questions that seek first the profit of others seem less privileged in general.
I don’t know exactly how popular he is around these parts, but I have been watching quite a bit of John Oliver recently. From what I understand, he is relatively free to pick his own content and HBO has supported him through and through. He isn’t dependent on sponsorship, so I doubt HBO will place too much pressure on views every month, though I expect they will want him to at the very least not drive viewers away.
Nevertheless, there are a number of shows where John Oliver is actively critical of the popular mainstream media for not paying enough attention to the most important stories. Very often these are politically sensitive topics (including drone strikes, international politics, et al.). But he does try hard, and I would argue successfully, for the inclusion of stories that aren’t covered nearly often enough in the media.
It seems like a good idea for someone to study the model his show, Last Week Tonight, is built on and try to come up with a better one. P.S. He did claim that no one had been able to explain it successfully to him.
It’s a slightly unfair challenge to make, since for day-to-day purposes what you think the most important questions are matters less than what the majority, or people of high status, think is important. The obvious answer is “because I am less powerful than people who think different things are important, and they determine if I have a job/funding/social status.”
Once you get to this point you’re at least being honest with yourself about what you’re optimizing for. If you waste an hour having a political argument with someone on Facebook, you’re not even being goal-oriented enough to notice that nobody paid you to do that.
As a tool for combating privileged questions, what about consciously prioritizing which issues you spend time thinking about?
I think of it the other way around: combating privileged questions is a tool for consciously prioritizing which issues you spend time thinking about.
Gay marriage and gun control are privileged questions? I disagree. They’re not important if you’re thinking about them in purely utilitarian terms, as in how many people get killed per year by illegal firearms. But they are important if you are concerned about the role of government.
I think the more relevant question here is why do such questions get more views in the first place. I’d say the reason is they divide people along party lines. So it’s more fun to ask those questions than a question like what to do in order to make charity more effective. It’s entertainment, and who’s to say entertainment is not important? There’s no privileged value system.
I think most people who watch talk shows know that they are watching them for entertainment, not news.
If I apply this principle to this author and this post, I’d wonder why take these three issues to make his point, instead of something clear and simple like the Casey Anthony brouhaha, which was clearly and indisputably a privileged question. Is he trying to signal something?
This is a good article.
To get the most views, a question must divide people along some party lines and be simple enough that even people with zero knowledge can jump into the discussion and express their opinions.
In other words, stupid people are customers too, and they are probably the largest and most easily manipulated segment of customers, therefore most important. Most of the media content is optimized to be accessible for stupid people. So even if the privileged question is important, the question and the proposed solutions are probably expressed in a way that is not helpful to solving them. Optimizing for a flamewar is more profitable.
But … you just admitted they’re unimportant “in purely utilitarian terms”!
What made utilitarianism the privileged value system? All I said was that if you try to make a utilitarian argument for gun control being an important issue, you’d probably fail. Someone would make a better argument for controlling diabetes being more important by comparing the number of people getting killed by illegal firearms and the number of people who die because of diabetes. (Note that the point here isn’t whether controlling guns is a good thing to do, but whether it’s more important than controlling diabetes).
I never said that utilitarianism is the privileged value system. What makes the Casey Anthony brouhaha a privileged question is not the fact that it’s entertainment and not news, but the fact that from all possible gruesome murders that could be equally as entertaining, they picked this one and followed it day and night. That’s a clear case of privileging the question. There are better questions to ask even among sensational issues.
Utilitarianism/consequentialism is a metaethic, so it’s a way of deciding what to do with a value system rather than a value system in itself—the paperclipper is a utilitarian even though it values paperclips rather than people.
You’re correct that the original post makes assumptions about what the reader values. I think that’s often worth it for efficient communication, though—the only alternatives I can think of are speaking in general or abstract terms (“a really bad thing happens”, without being able to give an example like “a person dies”), or stating the assumptions.
I think gun control probably is privileging the hypothesis, according to most peoples’ stated goals—they think gun control matters because it’s related to safety, and they value safety, even though there are dangers more common and easier to control than guns. (I don’t know off the top of my head what the low hanging fruit is for safety in first world countries, but transportation and preventative healthcare seem like possible candidates.) How close their stated goals are to their actual goals is a different question.
Most people around here (myself included) believe that utilitarianism is the correct value system and regard it as a settled question. There are debates about the correct type of utilitarianism, of course, but still.
The 2012 survey had 62% support for consequentialism, of which utilitarianisms form a subset. Some importantly non-utilitarian brands of consequentialism include egoism, egalitarianism, perfectionism, and mixed value functions that include elements of the above.
Ah, sorry. I read “value system” as referring to the utility assigned to various things, because that’s the default around here. Sorry for any confusion.
As for whether utilitarianism is, in fact, the correct value system, by human ethical standards, most LWers seem to ascribe to one form or another; this probably isn’t the comment section for that discussion, though.
How big of a concern should this be relative to other possible concerns? (I think “what should the role of government be?” is another privileged question. What do you intend to do with an answer to this question? (I am not convinced of the value of voting.))
I picked the first three things that came to my head.
Governments are very, very, very big agents. They are very strong, and succumb to some predictable biases. I’ll have to hear a pretty compelling argument to believe that caring about the crazy shenanigans they get up to is a trivial distraction. Or did you mean something else by “the role of government”?
Even if the actions of the government have a significant impact it could still be considered a ‘trivial distraction’ to discuss the actions of the government if one’s ability to influence them is negligible. They are, after all, very, very, very big agents.
In those circumstances it may be useful to discuss how to most effectively live in the world given what the government is doing but next to useless to discuss the politics of what the government ‘should’ do.
Agreed. (I would be happy to update on evidence to the contrary, but I basically asked this question in the politics thread and didn’t get any responses.)
Decide for whom to vote, for one thing. Of course my one vote isn’t important. But the vote of ten million people who watch news is significant.
I think the role of government is an important question because governments of nation states are some of the most powerful entities there are. No other entity can coerce people virtually without consequence.
Why do you think voting is valuable relative to the other things you could be doing? (Not a rhetorical question.)
Selfish reasons. When viewed purely as entertainment and ritual, voting is a stupendous use of my time. It makes me feel like part of something bigger than myself, similar to Raemon’s solstice celebration.
(I also think that voting is plausibly justifiable on altruistic grounds, but if I’m honest with myself, that’s not the real reason I do it.)
Upvoted for honesty. I don’t get such feelings from voting myself.
The behavior of political candidates has direct, measurable, and large effects on the welfare of a lot of people I care about.
Political candidates are often very different from each other.
Even small differences between political candidates can have large effects if their constituency is large.
I want people who are like me to vote (because I have fantastic political views). You can motivate this acausally using TDT, but that’s not really necessary, because if I signal that I’m a voter to members of my social circles (who, by a remarkable coincidence, have nearly-as-fantastic political views) I can causally impact their tendency to vote. Signaling authentically is less taxing than pretending to be a voter, because it eliminates the risk of being found out, and doesn’t feel damaging to my image of myself as a good person.
Voting makes me more willing to speak up regarding important political issues, to maintain a consistent self-image; not voting would make me feel (at least a tiny bit) like a hypocrite or outsider when I think getting involved will have important benefits. More generally, it gives me practice cultivating habits I find otherwise useful.
I enjoy voting as a ritual. It improves my self-image, and the people in line at polls are fun to talk to.
In elections that are small and/or close, my individual vote often has a nontrivial probability of swinging the election. You can think of it as, in effect, an altruistic lottery with amazingly good odds. See Voting for Charity’s Sake.
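The “altruistic lottery” framing can be made concrete with a back-of-the-envelope expected-value calculation. All the numbers below (probability of a decisive vote, per-person benefit, constituency size, cost of voting) are hypothetical illustrations I’ve chosen for the sketch, not figures from the thread or from Voting for Charity’s Sake:

```python
# Rough expected-value sketch for voting as an "altruistic lottery".
# Every number here is a hypothetical illustration, not an empirical estimate.

def expected_value_of_voting(p_decisive, benefit_per_person, population, cost_of_voting):
    """Expected altruistic value of casting one vote, net of its cost.

    p_decisive         -- probability your single vote swings the election
    benefit_per_person -- value (in dollars/year) of the better candidate winning,
                          per member of the constituency
    population         -- number of people affected by the outcome
    cost_of_voting     -- your cost of voting (e.g. an hour of time, valued in dollars)
    """
    return p_decisive * benefit_per_person * population - cost_of_voting

# Suppose a close local election: a 1-in-10,000 chance your vote is decisive,
# the better candidate is worth $100/person/year to 50,000 constituents,
# and voting costs you an hour valued at $30.
ev = expected_value_of_voting(
    p_decisive=1e-4,
    benefit_per_person=100,
    population=50_000,
    cost_of_voting=30,
)
print(ev)  # 470.0 under these assumed numbers
```

The point of the sketch is only that the expected value scales with the size of the constituency while the cost does not, which is why small, close elections can make a single vote look like a lottery ticket with unusually good odds.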
Are those the kinds of reasons you were looking for?
Yes, I think that’s a nice selection of reasons. But I also think that when most people discuss political questions they aren’t doing it to become better-informed voters. A strategy optimized for better voting wouldn’t look like constantly discussing political questions, it would look like maybe setting aside a few weeks before election day to do a lot of research. A strategy optimized for influencing the votes of others would look like a grassroots campaign or something.
Yes. They’re also trying to influence other people’s votes.
A grassroots campaign sounds like a significant expenditure of effort compared to voting and casual conversation about the issues. Perhaps maximizing our influence on the votes of others is not the only consideration, and voting hits a sweet spot which returns acceptable values for “(potentially) having an effect”, “not too time consuming”, and “improves my self-image”.
You’re right about setting aside some time for research, though; it’d be nice if we maximized potential effect in the correct direction :P
That sounds like a nearly fully general counterargument—I could be asking the same question about watching a movie, playing darts, studying geology, or whatever else the hell the person I’m speaking to is doing (short of working on efficient charity and the like).
It’s not a counterargument. It’s a request for an explanation.
This interpretation requires directly contradicting the explicit and intentional claim in the grandparent.
(It is not always inappropriate to call ‘bullshit’ on a claim that a question is not a rhetorical question when it actually is but it seems more appropriate to do so directly rather than just casually ignoring the claim and assuming it is an argument anyway. As such I assume hasty reading is involved.)
A question can have presuppositions even if it’s not rhetorical. If I ask you whether you have stopped beating your wife, I’m implicitly claiming you have a wife and were beating her at some point, even if I’m genuinely curious as to whether or not you’re still doing so.
QY is implicitly saying that brainoil must think voting is valuable relative to the other things ey could be doing, with which I either agree or ADBOC depending on what exactly is meant by “valuable”.
Am I justified in asking why you bought an iPhone when you could have saved a starving child with that money, and whether you think getting an iPhone for yourself is more valuable than saving a dying kid? If not, you’re a hypocrite. If yes, that too tells something about you.
I accept utilitarianism. But I also think we’re not born with a utility function. When I vote, I value it being an informed decision. If you ask me whether I couldn’t think of anything more valuable than that, I’d ask whether you couldn’t think of anything more valuable to do with your money than buying a smartphone.
To be honest, I don’t vote. But many do and value their right to vote.
http://xkcd.com/871/
Yes. This is a question I thought about before buying an iPhone and I think it deserves a serious answer, which in short is that iPhones can be used as incredible productivity tools, that I intended to use my iPhone that way (and have, by and large), and that I would pass on those productivity gains eventually (e.g. in increased earning potential which eventually finds its way to effective charities, or in being more effectively able to do work for MIRI or some other organization). Remember that consequentialism need not be nearsighted.
Really. (This sounds acerbic, but your comment is incredibly hard to take at face value.)
Do you also only watch a TV show (or a movie, or a fantasy novel) just enough so that you maximize your future contributions to effective charity (or MIRI etc.)?
What about going out with a girl, or keeping up with old friends outside the field? All that diverted effort consciously calculated to maximize your effective charity contributions, or do you treat time invested differently from other resources such as money?
(In the last few years, I’ve deliberately excluded some goals/activities that would sink resources/time but won’t serve a long-term global purpose (i.e. goals that are not about myself) and whose avoidance doesn’t seem to damage my motivation/productivity (reading fiction, playing games, studying music and (natural) languages, buying things beyond necessities, creating a family, aggressively advancing career). It might be that this mode is (psychologically) enabled by the fact that so far I’m investing in my own time/training, not donations.)
Just curious: What things, if any, do you do that sink resources but don’t serve a long-term global purpose?
When tired or low on motivation, I watch TV shows (US, UK series), recently for about 1.5 hours a day on average, in 10-20 minute sections throughout the day.
Yeah, I noticed that after writing it. Look, I have limited time and a complicated utility function. I can try to optimize where I think it would be particularly valuable to optimize (the decision to buy an iPhone is not small in terms of the accumulated costs or in terms of time investment so it seemed particularly worth paying attention to), but if I tried to consciously optimize everything I’d quickly run out of time to actually do any of the things I’m trying to optimize.
I recognize I’m opening myself up to further accusations of hypocrisy here, but I’d rather be hypocritical in the sense that I optimize some part of my life and not the other parts (and ask others to do the same) than be consistent in the sense that I don’t optimize any of it. The perfect is the enemy of the good and all that.
Qiaochu participated in the last MIRI math workshop. Calculation done.
That seems like an awfully contrived reason to buy an iPhone, especially when you could do all the work you do with an iPhone using a cheaper android phone too. But suppose there is a certain unique feature that the iPhone has that others don’t that makes you more productive (I’m not asking what that feature is). You are still deliberately dodging the spirit of the question. It wasn’t about an iPhone. I didn’t even know you had one.
So, am I justified in asking why you spend four hours per month watching Game of Thrones when you could have used that time to earn more money and use that to save a child in Africa? Do you think spending time on a couch, watching Game of Thrones, eating potatoes, is more valuable than saving a dying child in Africa?
You already have guessed what these questions would lead to. But my intention is not to accuse you of hypocrisy. What I say is that even if you watch Game of Thrones instead of saving a dying child in Africa, I wouldn’t think any less of you.
Your article was Privileging the Question, not Promoting Utilitarianism. You could make your point without trying to change what people value.
Yes, and it depends. Whatever your values are, you need to be in a position to satisfy those values. That means you need to take care of yourself so you won’t go crazy or otherwise become incapable of satisfying your values in the future, and one aspect of that is giving yourself leisure time. Game of Thrones may or may not be a good way to do this.
I think you perceive this huge chasm between selfishness and selflessness that doesn’t really exist. Making your life better makes other people’s lives better to the extent that you put time and effort into making other people’s lives better and can do that better if your life is better. Making other people’s lives better makes your life better to the extent that you care about other people.
Is this really how you think it works? Do you honestly watch Game of Thrones because it helps to better other people’s lives? I’d be surprised. More likely, you start with “I like Game of Thrones” and end up with “it helps me to save the world.” I can’t read your mind. But that’d be my guess.
The problem is, you can justify too many things with this excuse. You already justified your iPhone when you could have bought a cheap android phone that has pretty much the same features. Paying the Apple tax is perhaps not the most effective way to save the world.
P.S. Is there any research done that suggests smartphones make people more productive?
That’s a reasonable guess, and it’s certainly something I have to watch out for. (I don’t watch Game of Thrones, but I’m mentally substituting with a show I do watch.) If I genuinely didn’t think that watching Game of Thrones was better as measured by my utility function than the alternative upon reflection, I hope I would be able to stop. I’ve stopped doing various other things this way recently (most recently browsing Tumblr).
This wasn’t clear to me at the time of my purchase. My impression from several people I talked to (that I trusted to be reasonably knowledgeable) was that Android is ultimately more powerful but requires more effort and tinkering to be put to use whereas an iPhone can be used out of the box. I’m not much of a power user and I wanted something that just worked. I also had the sense that there were more apps available for the latter than the former.
And, again, the perfect is the enemy of the good. It takes too much time to make optimal decisions, but I can at least try to make better decisions.
Who needs research? It’s pretty clear to me that my smartphone has made me more productive, and that’s the question that actually needed answering. (I expect most people get distracted by games, but I adopted a general policy of not downloading games which I have only rarely broken, and the games I do download I don’t play very much.)
I’m still not sure I understand what you’re getting at with this line of questioning. You seem to think there’s something wrong with the way I try to make decisions, which is to attempt to maximize expected utility while recognizing that I have limited time to search the space of possible things to do. What would you suggest as a superior alternative? (The alternative you’ve presented so far is justifying that you should think about political questions because of voting even though you don’t vote.)
The intent was to show that asking whether I don’t have anything more valuable to do than voting was an unfair question because even those who profess utilitarianism don’t always do the things that are most valuable in utilitarian terms. But it seems this strategy won’t work with you.
While you are required to judge others using all of the information available to you, you are not required to inform them of that fact prior to gathering some information.
For example, one can privilege oneself without being evil, even when that means that one or more babies starve that didn’t strictly need to, or that a section of space that could contain an asteroid that will destroy humanity remains unchecked.
I think maybe an easier way to think about this is to avoid comparing selfish and altruistic things you do (because that comparison is hard) but at least try to be effective in each category separately. Then it’s fair to ask why one would buy an iPhone over an Android or why one would vote as opposed to donate a malaria net (assuming time is roughly equivalent to money).
It’s not that comparing across the two categories is invalid; it’s just that the honest answer may be “I don’t care enough about other people to go without my iPhone,” and that’s not an answer anyone wants to give. More generally, the cross-category comparison depends on the details of your utility function far more than the within-category one does.
Seriously, how much effort goes into voting? Perhaps an hour at the most?
Compared to how much tax gets taken off you every day it seems that having some minor influence in guiding the assembly that sets the budget for the spending of said tax is worth your while. If only to sack a representative assembly that displeases you.
What virtues are displayed by not voting? Sloth? Indifference?
If no one voted how would democratic government work?
Does voting increase utility? In a single case not by much but in the aggregate the people can remove a government that displeases them. This is surely better than the alternative (shoot them out as in Syria today).
The fact that Super PACs pay money to persuade people to vote speaks to the value of your vote not its worthlessness.
I think there are reasonable grounds for making the modest effort required to vote.
If you only spend an hour on gathering information for voting, you probably shouldn’t be voting: given that you probably don’t have magical powers of common sense pointing inerrantly to the optimal choices, voting without research or some kind of insider information is pretty much equivalent to expressing a vote in favor of whatever random environmental biases you’ve been exposed to. That’s a set that normally includes a lot of PAC influence, if you care about such things.
On the other hand, I’ll admit that in some situations proposals do make their way to the ballot without being cleared of flaws or biases that are obvious to the average LW reader but not to the average voter. When I do choose to vote, my usual way of dealing with California ballot propositions (a form of referendum) that I’ve never heard of is to read the voter information pamphlet while I’m waiting in line and then vote against whatever option sounds frothy, knee-jerky, or economically insane. There are surprisingly few that don’t have such an option.
A lot of effort can go into informed voting. I experimented with voting for the first time last fall and I spent several hours looking up relevant information, and I could’ve spent a lot more if I wanted to get a strong grasp of the issues, which I didn’t feel like I had.
Depends on how much influence.
Why do I care about displaying virtue?
You’re confusing the average value of voting with the marginal value of voting. Would you apply the same argument to homosexuality (“if no one was heterosexual how would making babies work”)?
If I believed that a social norm encouraging homosexuality stood a significant chance of reducing the rate of heterosexual relationships to the point where the birthrate became low enough to cause collective harm, I would be concerned about public acceptance of homosexuality.
If I believed that a social norm encouraging non-voting stood a significant chance of reducing the voting rate to the point where it became low enough to cause harm, I would be concerned about public acceptance of non-voting.
I find the second claim significantly more plausible than the first, though given how implausible I find the first claim that isn’t saying much.
Okay, but me saying “I don’t think voting is valuable” on LW seems pretty unlikely to actually encourage such a social norm.
I would agree that it doesn’t apply very much pressure, but what pressure it applies does seem pretty clearly to push in the direction of non-voting.
Yes, which frees up people’s time to do and think about other things, and I think for LW people in particular that the benefits of this outweigh the costs of not voting (although I am amenable to a Fermi estimate suggesting otherwise).
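Since a Fermi estimate was invited: here is one possible sketch of the expected value of a single vote versus its time cost. Every number below is an illustrative assumption, not a researched figure, and the conclusion flips easily as the assumptions change.

```python
# Hypothetical Fermi estimate: expected value of casting one vote
# versus the time cost of doing so. All numbers are illustrative
# assumptions, not researched figures.

p_decisive = 1e-7          # assumed chance one vote flips the outcome
value_difference = 1e10    # assumed total $ difference between outcomes
hours_to_vote = 3          # assumed travel + queue + research time
hourly_value = 50          # assumed $ value of an hour of the voter's time

expected_benefit = p_decisive * value_difference
cost = hours_to_vote * hourly_value

print(f"expected benefit: ${expected_benefit:.0f}")
print(f"cost of voting:   ${cost:.0f}")
print("worth voting" if expected_benefit > cost else "not worth voting")
```

Note that the result is dominated by the two least certain inputs, `p_decisive` and `value_difference`; an estimate like this mostly tells you which assumptions the disagreement actually turns on.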
I’m assuming your reasoning is that LW people are, or at least are capable of, spending their time/effort doing more valuable things, so time spent voting (including time spent becoming an informed voter) is a net loss.
If that assumption gets widely implemented, the end result seems to be that only people who don’t do anything particularly valuable with their time vote.
Am I following your reasoning correctly? Or is there some other aspect of LW people (like being more likely to work on x-risk, or being more likely to be mathematicians, or something else) driving your reasoning?
Yes.
This is nearly identical to the current situation as far as I can tell anyway.
I don’t understand this dichotomy. What do you mean by ‘purely utilitarian’? Doesn’t the role of government also affect, e.g., death rates?
Perhaps your point is that they’re still important, but for more complicated and indirect reasons? E.g., as schelling points or points of precedent. (You could also give, I think, a compelling argument that they’re important precisely because people think they’re important.)
There are certainly privileged value systems: the value systems people actually have. Short-term entertainment may be important, but virtuous (or at least non-destructive) conduct can be made entertaining as well.
It’s more likely that they’re watching them for entertaining news, or for news-enriched entertainment.
I think he’s suggesting some sort of deontological system. If “big government” is inherently bad, because it infringes on the Rights of the People, you might care about it even if the people don’t utilitarianly need those rights.
Man, reality is so much less interesting than my magical kingdom of loyal steelmen.
I would push the fat man in front of the trolley too in the thought experiment, and so would many rights based libertarians. They just don’t do it in real life. I don’t think they think rights are any more real than utilities are. They think it is a better form of government, to hold that people have inviolable rights even when there are compelling arguments in favor of violating those rights.
But more importantly, why are you being smug about this? Some people value being able to own firearms even at a steep cost to others. Some people, like communists, value economic equality across the population. It is not privileging the question if they value something more than you do. In fact, if that is the case, we should frown upon customized news feeds in general.
Brainoil, I’m pleased to hear your argument is more nuanced than MugaSofer suggested. It helps redeem the practice of steelmanning, which is not just about making discussions more civil and nuanced but also about becoming more accurate at predicting others’ views. Utilitarians can accept ‘rights’ views, if they either reify rights and assign high value to consequences in which they are satisfied, or treat ‘rights’ as a heuristic that usefully approximates the true moral theory. So perhaps instead of talking about which abstract moral theory is the Right One, we should focus on more object-level questions like ‘Which human preferences are more satisfied by debating gay marriage than by ignoring it, and how strong are those preferences relative to their competitor-values?’
Sure. (Though many values people have are probably a causal product of which questions they privilege.)
One way to think of rights is as ethical injunctions for governments (and, depending on the right, for others) against violating them.
Of course, but the reason that rights based libertarians oppose gun control is not utilitarianism. A rights based libertarian would oppose gun control even if the utilitarian argument for it was obviously true. Such a person would not consider this question a privileged question.
You can quote by putting a “>” in front of the paragraph you’re quoting, it’ll make your comments more readable.
This strikes me as a bit strong. What kind of answer could Kepler or Newton have given to this question regarding the theory of planetary motion?
Better tidal tables and navigation at sea are two extremely important uses which come to mind as being lucrative products of a better understanding of celestial mechanics.
I’m not sure these uses would have been clear to Kepler. Also what about the scientists doing research on the LHC?
Celestial navigation long predated Kepler, and he was far from ignorant, so it’s pretty unlikely he was unaware. Though it’s true he probably would’ve argued that his astronomical learning was more useful for casting horoscopes and pursuing his Platonist theology.
You’re changing the topic. Just because you picked an awful example—one of the very few areas in astronomy which really does have immediate cash payoffs—doesn’t oblige me to defend every physics project or paper ever. I’m not sure the LHC is a good use of money either, since it didn’t find an anomaly which could trigger new theoretical insight and discoveries, but just what was predicted.
I suspect the LHC was a mistake too, but that’s not clear just from the fact that it hasn’t revolutionized physics. We’d also have to correct for hindsight bias, and show that a Higgs-only outcome was too likely in advance to make the high-value alternative possibilities worth pursuing.
ETA: For instance, I believe I recall one physicist assigning probability ~.5 to ‘we only discover the higgs’ and ~.5 to ‘we discover the higgs plus new physics’ in advance. If the probability were anywhere near that high, it would likely be very easy to justify the LHC.
One could also have meta-inductive reasons to research something. E.g. We know that certain fields, physics in particular, have yielded huge technological advancements as a result of their blind theoretical advancement. That conceivably justifies researching fundamental issues in physics even without “what would you do with this” knowledge.
Both the development of scientific hypotheses and testing them fall under the category of expanding the general knowledge base. Also, both research areas identified are at the fundamental level. Expanding the general knowledge base about the fundamental facts of nature is an inherently valuable activity.
If I can solve planetary motion, I will be famous and professionally respected. I will feel great about myself, rich noblemen will want to become my patrons, and I will be appointed to prestigious and lucrative posts like President of the Royal Society or Master of the Mint. I will be forever remembered as one of the greatest scientists in all of history!
Using my theory of planetary motion, I will be able to STOP THE EARTH FROM CRASHING INTO THE SUN.
But you killed the gnomes and the fairies!
Because they did not obey my scientific theories, they went on to CRASH INTO THE SUN!
Even without a coherent theory of planetary motion, we can assign a very low probability to the earth crashing into the sun simply on the basis that it hasn’t yet.
...that is exactly the sort of judgment which requires some sort of theory. Every day, trillions of things happen which have never happened before. Never in the history of the universe has this comment been posted to LW!
Well, a couple days ago, we could reasonably have assigned a pretty low probability to that exact post being made today.
New things happen all the time, but without a model, we can’t assign much likelihood to any specific new thing happening at any particular time.
Without some kind of model, you can’t assign any probabilities, period.
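For what it’s worth, a minimal model that licenses the “it hasn’t happened yet” reasoning is Laplace’s rule of succession: after n trials with s occurrences, estimate the probability of an occurrence on the next trial as (s + 1) / (n + 2). A sketch with illustrative numbers:

```python
# Laplace's rule of succession as a minimal model:
# after n trials with s occurrences, P(occurrence next) = (s + 1) / (n + 2).

def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Illustrative: roughly 4.5 billion years of days, zero observed crashes.
# (This ignores anthropic effects: observers only exist where no crash happened.)
days_observed = int(4.5e9 * 365)
p_crash_tomorrow = rule_of_succession(0, days_observed)
print(p_crash_tomorrow)  # a very small number
```

With no observations at all it gives 1/2, which is the sense in which some model, however thin, is doing the work.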
And yet it was made, regardless. People get hit with Black Swans all the time.
That’s not what a Black Swan is.
Huh. Looks like I’ve been misusing it all this time. Thanks!
‘Unexpected things happen all the time’ isn’t necessarily a reason to be less surprised by specific especially unexpected things. The reason Eliezer’s post isn’t crazy super surprising isn’t that surprising things are (surprisingly?) common; it’s that it’s relatively ordinary (in-character, etc.) for its reference class.
Except surprising things are surprisingly common. Most people overestimate the likelihood that their model is correct.
But this doesn’t seem like a great example of that, yeah. I was sort of pattern-matching this into the wider discussion (is it worth figuring out if the earth will crash into the sun?)
That might count as “signaling”.
What exactly counts as “signalling”? I started to write down a definition, but I think it’s better you give yours.
The colloquial definition is “Useless but impressive and flatters my vanity”.
The probabilistic definition is “Observable thing X signals quality A means P(A|X) > P(A)”.
The economic definition is “Alice signals A to Bob by X if the net cost of X to Alice is outweighed by the benefits of Bob ‘believing’ A, and X causes Bob to ‘believe’ A even when Bob takes into account that Alice wants him to ‘believe’ A.” (Note that ‘believe’ A means ‘act as if A were true’.)
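The probabilistic definition can be made concrete with a toy example. The population counts below are invented purely for illustration:

```python
# Toy illustration of the probabilistic definition of signalling:
# observable X "signals" quality A when P(A | X) > P(A).
#
# Hypothetical counts over a population of 100:
#                  has quality A   lacks A
#  shows X               40           10
#  doesn't show X        20           30

p_a = (40 + 20) / 100          # P(A): base rate of the quality
p_a_given_x = 40 / (40 + 10)   # P(A|X): rate among those showing X

assert p_a_given_x > p_a       # so X signals A under this definition
print(p_a, p_a_given_x)
```

Note this definition only requires that X carry evidence about A; the economic definition adds the incentive structure on top.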
Useless to whom?
Newton was respected for coming up with useful theories and natural science, not just pure philosophy or non-applied math. You could maybe argue that his work was rarely useful to him personally, so he only did it as “signalling” to get respect from others to whom it was useful. But under that theory, any division of labor where people are paid money for their work which is only useful to others would be called “signalling”.
That’s true: that Newton came up with good theories in the past is evidence he’ll come up with more good theories in the future. It signals his quality as a scientist.
But this is a good thing (as opposed to the usual negative implied connotations of “mere signalling”). And the reason it’s a good thing is that his scientific work was actually useful, so it’s a good thing others could identify this and reward him to make him do more useful work.
That’s just saying “people will choose to signal if benefits exceed costs”. It’s true, but it doesn’t explain to me the original statement:
Which says “signalling” in this instance is something that motivates people in the absence of things being useful in their own right.
I don’t see this in other comment responses, but it seemed obvious to me: A better grasp of and getting closer to understanding fundamental physics?
Possibly also a better ability to read messages sent from the Heavens? Comparisons between the motions of celestial versus earthly bodies? Perhaps even insights as to how to imbue earthly objects with some celestial motion properties, so as to gain better control on the motion of objects in various other domains (e.g. ballistics, architecture, navigation)? If what moves the celestial objects can be harnessed, perhaps a new type of vessel that could travel through the land, air or aethers?
All of these are things that, if I put myself in the frame of mind of a 17th-century philosopher or scholar, would be very pertinent and seem like intuitively obvious possibilities as to what might come from studying the properties and regularities of Celestial Things. And I didn’t even have to think about it for more than five minutes. Scholars fiddled with these questions, with no actual answers and no high-school astrophysics, for lifetimes or large parts thereof.
It goes deeper than analyzing planetary motion. There was no reason I can think for people to keep close track of anything but the sun and the moon, but if there hadn’t been good records for the planets, the laws of planetary motion couldn’t have been discovered.
“When did you stop beating your wife?”
This is basically framing effect, no?
The framing effect affects how you might answer a given question. What I’m talking about is figuring out why you’re answering a given question at all instead of some completely different question (rather than a reframing of the given question). A privileged question isn’t necessarily “wrong” in the sense that it might rely on an untrue premise (as in your example), it’s just suboptimal.
The perspective is that the question “What is a good way for me to expend my brain clock-cycles?” doesn’t automatically come to mind and stop people from wasting think-time. The framing effect is about judgement bias introduced by the particular way a question is asked, rather than the possible bias in think-time allocation caused by asking and talking about the wrong questions.
“Hmelo-Silver, Duncan, & Chinn cite several studies supporting the success of the constructivist problem-based and inquiry learning methods. For example, they describe a project called GenScope, an inquiry-based science software application. Students using the GenScope software showed significant gains over the control groups, with the largest gains shown in students from basic courses.[24]
In contrast, Hmelo-Silver et al. also cite a large study by Geier on the effectiveness of inquiry-based science for middle school students, as demonstrated by their performance on high-stakes standardized tests. The improvement was 14% for the first cohort of students and 13% for the second cohort. This study also found that inquiry-based teaching methods greatly reduced the achievement gap for African-American students.[24]”