A Rational Argument
You are, by occupation, a campaign manager, and you’ve just been hired by Mortimer Q. Snodgrass, the Green candidate for Mayor of Hadleyburg. As a campaign manager reading a book on rationality, one question lies foremost on your mind: “How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?”
Sorry. It can’t be done.
“What?” you cry. “But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes’s Rule?”1
Sorry. It still can’t be done. You defeated yourself the instant you specified your argument’s conclusion in advance.
This year, the Hadleyburg Trumpet sent out a 16-item questionnaire to all mayoral candidates, with questions like “Can you paint with all the colors of the wind?” and “Did you inhale?” Alas, the Trumpet’s offices were destroyed by a meteorite before publication. It’s a pity, since your own candidate, Mortimer Q. Snodgrass, compares well to his opponents on 15 out of 16 questions. The only sticking point was Question 11, “Are you now, or have you ever been, a supervillain?”
So you are tempted to publish the questionnaire as part of your own campaign literature . . . with the 11th question omitted, of course.
Which crosses the line between rationality and rationalization. It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence.
Indeed, you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it. “What!” you cry. “A campaign should publish facts unfavorable to their candidate?” But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn’t, if you were genuinely curious. If you were flowing forward from the evidence to an unknown choice of candidate, rather than flowing backward from a fixed candidate to determine the arguments.
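The voters’ inference can be made concrete with a toy Bayesian model. In this sketch (all numbers, including the likelihood parameters, are illustrative assumptions, not anything from the essay), a naive voter takes the 15 published favorable answers at face value, while a savvy voter knows the campaign suppresses unfavorable answers and therefore reads the missing Question 11 as an unfavorable answer:

```python
def posterior_good(n_favorable, n_unfavorable, prior=0.5,
                   p_fav_given_good=0.8, p_fav_given_bad=0.4):
    """Posterior probability the candidate is good, given answer counts.

    Assumes each question independently comes out favorable with
    probability p_fav_given_good for a good candidate and
    p_fav_given_bad for a bad one (illustrative numbers).
    """
    like_good = p_fav_given_good**n_favorable * (1 - p_fav_given_good)**n_unfavorable
    like_bad = p_fav_given_bad**n_favorable * (1 - p_fav_given_bad)**n_unfavorable
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

# Naive voter: treats the 15 published answers as the whole story.
naive = posterior_good(15, 0)

# Savvy voter: infers that the omitted Question 11 was unfavorable,
# so conditions on 15 favorable answers plus 1 hidden unfavorable one.
savvy = posterior_good(15, 1)
```

The point of the sketch is only that `savvy < naive`: once voters know the presentation process censors unfavorable facts, the omission itself becomes evidence, and the selectively published questionnaire can no longer produce the update its author hoped for.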
A “logical” argument is one that follows from its premises. Thus the following argument is illogical:
All rectangles are quadrilaterals.
All squares are quadrilaterals.
Therefore, all squares are rectangles.
This syllogism is not rescued from illogic by the truth of its premises or even the truth of its conclusion. It is worth distinguishing logical deductions from illogical ones, and refusing to excuse them even if their conclusions happen to be true. For one thing, the distinction may affect how we revise our beliefs in light of future evidence. For another, sloppiness is habit-forming.
Above all, the syllogism fails to state the real explanation. Maybe all squares are rectangles, but, if so, it’s not because they are both quadrilaterals. You might call it a hypocritical syllogism—one with a disconnect between its stated reasons and real reasons.
If you really want to present an honest, rational argument for your candidate, in a political campaign, there is only one way to do it:
1. Before anyone hires you, gather up all the evidence you can about the different candidates.
2. Make a checklist which you, yourself, will use to decide which candidate seems best.
3. Process the checklist.
4. Go to the winning candidate.
5. Offer to become their campaign manager.
6. When they ask for campaign literature, print out your checklist.
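The steps above can be sketched in a few lines. The candidate names and questionnaire scores below are invented for illustration; the point is that the bottom line is computed from all of the evidence, rather than chosen in advance:

```python
# Hypothetical questionnaire data: candidate -> 16 answers, scored
# 1 for a favorable answer, 0 for an unfavorable one.
answers = {
    "Snodgrass": [1] * 10 + [0] + [1] * 5,  # unfavorable on Question 11
    "Opponent": [1] * 8 + [0] * 8,
}

def best_candidate(answers):
    # Process the checklist: every answer, favorable or not, goes into
    # the score. Nothing is omitted, so the conclusion flows forward
    # from the evidence.
    return max(answers, key=lambda name: sum(answers[name]))

winner = best_candidate(answers)
```

Only after `best_candidate` returns do you go to the winner and offer your services; printing the full `answers` table, unfavorable entries included, is then honest campaign literature.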
Only in this way can you offer a rational chain of argument, one whose bottom line was written flowing forward from the lines above it. Whatever actually decides your bottom line is the only thing you can honestly write on the lines above.
1See “What Is Evidence?” in Map and Territory.
So are you suggesting that it is impossible for someone else to construct an unbiased argument for you?
After all, it’s only a small step to observe that it’s impossible to ever know whether someone else has the motives of the campaign manager in this case.
You can never construct an unbiased argument for anything, except by an improbable coincidence that any wise person will refuse to believe in.
“After all, it’s only a small step to observe that it’s impossible to ever know whether someone else has the motives of the campaign manager in this case.”
Valid evidence is valid, whatever the motives of the one who cites it; the world’s stupidest person may say the sun is shining, but that doesn’t make it dark out. But you’d be wise to take responsibility for adding up the evidence yourself, and try to check one or more sides to see if any arguments were omitted. (Just don’t expect the evidence to balance. It shouldn’t.)
I like the spirit of what you’re saying, but I’m not convinced that you’ve made a rational argument for it. Also, I’m concerned that you might have started with the conclusion that a rational argument must flow forward and constructed an account to justify it. If so, in your terms, though not in mine, that would make your conclusion irrational.
I think it can be perfectly rational to think backwards from any conclusion you want to any explanation that fits. Rationality is among other things about being bound by the requirement of consistency in reasoning. It’s about creating an account from the evidence. But it’s also about evaluating evidence, and that part is where it gets problematic.
In an open and complex world like the one we live in every day, weighing evidence is largely a non-rational (para-rational? quasi-rational?) process. We are operating only with bounded rationality and collections of murky impressions. So your idea of making a checklist and somehow discovering who the best candidate is was doomed from the start. There is no truly evidence-driven way of doing that, because evidence does not drive reasoning—it’s our BELIEFS about evidence that drive reasoning. And our beliefs are mostly not a product of a rational process.
A logical explanation is one that follows from premises to conclusions without violating any rule of logic. Additionally, all logical explanations of real-world situations involve a claim that the logical model we put forward corresponds usefully to the state of the real world. What we called a “cat” in our reasoning corresponded to that furry thing we understand as a cat, etc. If I can think backwards from a conclusion without finding an absurd premise, then I have a logical explanation. (It may be wrong, of course.)
To attack my self-consistent, logical account of a situation that suggests that X is TRUE, based solely on the fact that I was looking for evidence that X is true, is equivalent to an ad hominem fallacy. I think you can certainly suspect that my argument is weak, and it probably is, but you can’t credibly attack my sound argument simply because you don’t like me, or you don’t like my method of arriving at my sound argument. A lot of science would have to be thrown out if a scientist wasn’t allowed to search for evidence to support something he hoped would be true. Also, as you know, many theorems have been proven using backward reasoning.
If you want to attack the argument, you can attack it rationally by offering counter-evidence, or an alternative reasoning that is more consistent with more reliable facts. Furthermore, our entire legal system is built on the idea that two opposing sides in a dispute, marshaling the best stories they can marshal, will provide judges and juries with a good basis on which to decide the dispute.
Instead of calling it irrational, I would say that it’s a generally self-deceptive practice to start from a conclusion and work backward. I don’t trust that process, but I couldn’t disqualify an argument solely on those grounds.
Instead of prescribing forward reasoning only, I would prescribe self-critical thinking and de-biasing strategies.
(BTW, one of the reasons I don’t vote is that I am confident that I cannot, under any circumstances, EVER, have sufficient and reliable information about the candidates to allow me to make a good decision. So, I believe all voting decisions people actually make are irrational.)
What you need to remember is that all of this applies to probabilistic arguments with probabilistic results—of course deductive reasoning can be done backward. However, when evidence is presented as a contribution to a belief, omitting some (as you will, inevitably, when reasoning backward) disentangles the ultimate belief from its object. If some evidence doesn’t contribute, the (probabilistic) belief can’t reflect reality. You seem to conceptualize arguments as ones whose conclusion must follow whenever they are valid and their premises are true, which doesn’t describe the vast majority.
You only need to have better information than the average voter for your vote to improve the result of the election. Then again, the effect of one vote is usually so small that the rational choice would be to vote for whatever gives you more social status.
The argument could turn out valid, by coincidence; but the process of making it isn’t valid, so given the vast space of all possible arguments… it’s probably not valid. Indeed, as nearly all advertising, propaganda, political campaigns, etc. are not.
James, in regard to your last paragraph: I very much doubt whether your decision not to vote is itself a good one, by the standards you’ve just espoused. After all, if you don’t have enough information to decide between voting for X and voting for Y, how can you have enough information to decide between voting for X and voting for no one? Seems to me that you have to make a decision (which might end up being the decision to cast no vote, of course) and the fact that you don’t have enough evidence to be strongly convinced that your decision is best doesn’t relieve you of the responsibility for making it.
“(BTW, one of the reasons I don’t vote is that I am confident that I cannot, under any circumstances, EVER, have sufficient and reliable information about the candidates to allow me to make a good decision. So, I believe all voting decisions people actually make are irrational.)”
See http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/.
Hmmm. If I understand you correctly, then two people could produce an identical argument but one would be incorrect because he did it backwards? Do you suppose that there is an implied arrow of time in every syllogism?
The argument could turn out valid, by coincidence; but the process of making it isn’t valid, so given the vast space of all possible arguments… it’s probably not valid. Indeed, as nearly all advertising, propaganda, political campaigns, etc. are not.
Many of you seem to think there is an axiom of reasoning that says the persuasiveness of an argument must be independent of what you know about the process that produced that argument. There is no such axiom, nor should there be.
Voting is irrational because the probability that your vote will have any effect on the outcome is about zero. I discuss that more and have a back-and-forth in the comment section here.
But it isn’t zero… and we know that if people systematically obeyed that advice, the world would be much worse off.
Voting may be a Tragedy of the Commons, but it’s not just simpliciter irrational.
At least, it is in insane countries where it isn’t compulsory.
Voting is analogous to taxes and should be legally enforced as such. (Or, rather, the public service of attending a voting booth and scribbling something arbitrary that may or may not be a vote on a piece of paper should be compulsory.)
I agree that voting is a Tragedy of the Commons—but in the exact opposite way to how you frame it. Because people don’t fully internalise the costs and benefits of their votes, but value self-expression, (1) it is very cheap for the ill-informed to use their votes to signal expressively, and (2) there is little incentive to become a well-informed voter. For a given level of political ignorance, we get far too much voting.
To my mind, voting is analogous to pollution and should be taxed as such.
Why? Assuming I vote randomly, all I’m doing is increasing the noise-to-signal ratio. If everyone you force to vote votes randomly, it’ll average out.
It’s worse than that. The randomness is biased in ways that can be systematically manipulated.
Why? I fail to see any gains from that. Neither do I see any major empirical differences between countries with compulsory voting and countries without.
In general the correct response to most “I fail to see” or “I can’t imagine” claims is to observe that this could be either a fact about the problem or a fact about the speaker’s imagination.
The current solution to the tragedy of the commons is brainwashing with patriotism and relying on poorly calibrated tribal-political instincts to get by. This works well enough and I honestly don’t think this is a problem that particularly needs addressing, compared to all the other things that can be done. It is merely a minor systemic insanity.
Well, informed voting is, but how do you reliably check if somebody was well-informed as they voted, to legally enforce it?
Only requiring informed voters to vote would be a potentially useful optimisation. As you point out that distinction does not seem to be practical.
So what problem is mandatory voting supposedly solving again?
In countries without mandatory voting, if voting is more inconvenient for certain groups than for others, the latter will be over-weighted in the election. With mandatory voting, casting a valid vote is no more and no less inconvenient than spoiling the ballot, so that’s not an issue—all eligible people who, ceteris paribus, would prefer to vote, no matter how slightly, will do so.
(Unlike wedrifid I’m not in a country with mandatory voting, BTW.)
I’m tapping out of this conversation. It’s predisposing me towards racism. I’m sure anybody actually interested will have no problem finding a book on game theory.
Taboo “racism”. From context it seems to mean [having beliefs that while more accurate make me uncomfortable].
Many of you seem to think there is an axiom of reasoning that says the persuasiveness of an argument must be independent of what you know about the process that produced that argument. There is no such axiom, nor should there be.
In particular, depending on the process that produces an argument, you may have to infer the existence of evidence not seen.
“Hmmm. If I understand you correctly, then two people could produce an identical argument but one would be incorrect because he did it backwards? Do you suppose that there is an implied arrow of time in every syllogism?”
More like… Hamlet might be just as good if it had been written by monkeys on typewriters instead of Shakespeare, but there’s a reason why it wasn’t.
Even if things come out equally by luck in one world, it would have different entanglements in possible worlds. The entanglements wouldn’t follow. It’s like the lottery ticket that happens to win in your Everett branch or Tegmark duplicate—buying it still wasn’t a rational act. Only a forward-flowing algorithm will make the entanglements match up.
“Only a forward-flowing algorithm will make the entanglements match up.”
To try and rephrase this in simpler language: You do not know the truth. You want to discover the truth. The only thing you get scored on is how close you are to the truth. If you decide “XYZ is a great guy” because XYZ is writing your paycheck, writing down lots of elaborate arguments will not improve your score, because the only thing you get scored on was already written, before you started writing the arguments. If you start writing arguments and then conclude that XYZ is a great guy, you may improve on your score, because you get a chance to change your mind. Changing your mind becomes mandatory as the truth becomes more complex; if you decide on the truth for personal convenience out of 2^100 or 2^200 possible options, you’re never going to hit it if you don’t work to improve your accuracy.
This has approximately zero relationship to the way political campaigns (or anything else) happens in the real world, where campaign managers are part of an ideologically biased social network. In fact, their job is essentially to strengthen the connections between voters and a candidate, by whatever means necessary, mostly through propaganda (aka advertising) that combines emotional appeal with the occasional smidgen of rational argument.
Maybe it would be a better world if people didn’t work this way, but they do, and I don’t see any prospect of changing this. I’m not even sure how rationality can be applied to most electoral issues. Take the issue of abortion. Either you believe abortion is immoral, or not. You can apply rationality to figure out which candidate supports your moral point of view, but it’s not much help in setting your root moral values. So how can you make an unbiased choice?
Elections are all about trying to get people who share your biases into power. I know the self-proclaimed rationalists here think the whole process is icky, but part of being rational is dealing with the real world, not the world as you would like it.
That being said, there’s room in the electoral process for a bias in favor of rationality, science, humanism, and enlightenment. I think it’s pretty clear which of the two major political parties in the US favor those values.
Rationality has plenty to say about whether abortion is morally permissible.
Are fetuses sentient, for example? Do they feel pain? What would happen socially, economically, if we outlawed abortion? Who would benefit? Who would be harmed? How much?
If you’re a strict utilitarian, moral problems reduce to factual problems. But even if you’re not, facts often have a great deal to say about morality. This is especially true in issues like economics and foreign policy, where the goals are largely undisputed and it’s the facts and methods that are in question. I challenge you to find an American politician who says he wants to increase poverty or undermine American national security. “We need 10% of Americans to starve! And by the way, I hope China invades!” (I guess I should hedge my bets and say that such bizarre people may exist—after all, Creationists do—but they aren’t likely to get a lot of votes from any party.)
Also, rationality can assess the arguments used for and against political positions. If one side is using a lot of hard data and the other one is making a lot of logical fallacies… that should give you a pretty good idea of which side to be on. (It’s no guarantee, but what is?)
First you need to decide what gives you utility points, which is a moral problem. I consider most computer programs to be sentient, with their working memory being their sentience; I also see pain as just a bit of programming that makes creatures avoid the things that cause it, no different from some regulators I have programmed. Therefore I don’t care whether fetuses are sentient or feel pain, so for me that does not affect the utility calculation. But most people do not agree.
Actually this would work nicely if the body that makes this survey doesn’t work for any of the candidates, but either has independent votes or is funded by the voters. It would then be in their best interest to show the voters all the evidence, rather than “all the true evidence that serves my candidate”.
In other words, if you want to intervene in politics as a rational agent, you shouldn’t work for any party: you should work for the public at large! Which brings us to the following question: what is the necessity, nay, the justification for parties existing in this day and age? Aren’t there better alternatives in making governments be the faithful servants of popular will, rather than, say, of their own existence or of the interests of a particular group of people?
There are such organizations, and in general the information they put out is a lot more reliable, for exactly these reasons.
Name three.
Politico, PolitiFact, FactCheck.org
Thank you very much for sharing these. I am very glad to find out that such organizations exist.
It’s a good question. The answer is “none, because people are crazy and the world is mad”.
That’s a bit of a non-explanation: it predicts anything, and nothing. How about, instead, you name three specific patterns of craziness (you know, fallacies, errors in judgment, bad heuristics, and so on) that are decisive factors in this state of affairs.
No. The whole point of that phrase is to not get overly complicated in explaining other people’s failures.
Explaining and rationalizing/justifying are two different things. Pleading that “humanity is insane” is, to put it bluntly, unproductive and lazy. If you want to say “don’t think about it too hard, it’s not worth the effort,” then say that, and spare us the theatrics.
This is why I think an adversarial court system is fundamentally defective.
Granted, inquisitorial court systems have flaws as well… but in principle it seems like an inquisition is actually what we want. We want to know what happened, not find out who is better at arguing.
A Bayesian-rational inquisition judge is in principle the ideal court system. The problem is to ensure that this judge continues to conform to requirements (a problem very akin to the unresolved problem of reflectively self-consistent proofs of friendly self-modification in the Friendly AI field), and that it always has enough power to enforce its decisions.
The ideal system is one where a superintelligence not only knows what happened, but can causally prove that it will not happen again, and thus safely proceed to letting everyone off (including the proven-guilty party) to go about their business.