The Power of Positivist Thinking
Related to: No Logical Positivist I, Making Beliefs Pay Rent, How An Algorithm Feels From Inside, Disguised Queries
Call me non-conformist, call me one man against the world, but...I kinda like logical positivism.
The logical positivists were a dour, no-nonsense group of early 20th-century European philosophers. Indeed, the phrase “no-nonsense” seems almost invented to describe the Positivists. They liked nothing better than to reject the pet topics of other philosophers as being untestable and therefore meaningless. Is the true also the beautiful? Meaningless! Is there a destiny to the affairs of humankind? Meaningless! What is justice? Meaningless! Are rights inalienable? Meaningless!
Positivism became stricter and stricter, defining more and more things as meaningless, until someone finally pointed out that positivism itself was meaningless by the positivists’ definitions, at which point the entire system vanished in a puff of logic. Okay, it wasn’t that simple. It took several decades and Popper’s falsificationism to seal its coffin. But vanish it did. It remains one of the least lamented theories in the history of philosophy, because if there is one thing philosophers hate it’s people telling them they can’t argue about meaningless stuff.
But if we’ve learned anything from fantasy books, it is that any cabal of ancient wise men destroyed by their own hubris at the height of their glory must leave behind a single ridiculously powerful artifact, which in the right hands gains the power to dispel darkness and annihilate the forces of evil.
The positivists left us the idea of verifiability, and it’s time we started using it more.
Eliezer, in No Logical Positivist I, condemns the positivist notion of verifiability for excluding some perfectly meaningful propositions. For example, he says, it may be that a chocolate cake formed in the center of the sun on 8/1/2008, then disappeared after one second. This statement seems to be meaningful; that is, there seems to be a difference between it being true or false. But there’s no way to test it (at least without time machines and sundiver ships, which we can’t prove are possible) so the logical positivists would dismiss it as nonsense.
I am not an expert in logical positivism; I have two weeks studying positivism in an undergrad philosophy class under my belt, and little more. If Eliezer says that is how the positivists interpreted their verifiability criterion, I believe him. But it’s not the way I would have done things, if I’d been in 1930s Vienna. I would have said that any statement corresponding to a state of the material universe, reducible in theory to things like quarks and photons, testable by a being who has access to the machine running the universe[1] and who can check the logs at will—such a statement is meaningful[2]. In this case the chocolate cake example passes: it corresponds to a state of the material world, and is clearly visible on the universe’s logs. “Rights are inalienable” remains meaningless, however. At the risk of reinventing the wheel[3], I will call this interpretation “soft positivism”.
My positivism gets even softer, though. Consider the statement “Google is a successful company.” Though my knowledge of positivism is shaky, I believe that most positivists would reject this as meaningless; “success” is too fuzzy to be reduced to anything objective. But if positivism is true, it should add up to normality: we shouldn’t find that an obviously useful statement like “Google is a successful company” is total nonsense. I interpret the statement to mean certain objectively true propositions like “The average yearly growth rate for Google has been greater than the average yearly growth rate for the average company”, which itself reduces down to a question of how much money Google made each year, which is something that can be easily and objectively determined by anyone with the universe’s logs.
I’m not claiming that “Google is a successful company” has an absolute one-to-one identity with a statement about average growth rates. But the “successful company” statement is clearly allied with many testable statements. Average growth rate, average profits per year, change in the net worth of its founders, numbers of employees, et cetera. Two people arguing about whether Google was a successful company could in theory agree to create a formula that captures as much as possible of their own meaning of the word “successful”, apply that formula to Google, and see whether it passed. To say “Google is a successful company” reduces to “I’ll bet if we established a test for success, which we are not going to do, Google would pass it.”
(Compare this to Eliezer’s meta-ethics, where he says “X is good” reduces to “I’ll bet if we calculated out this gigantic human morality computation, which we are not going to do, X would satisfy it.”)
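To make the move concrete, here is a minimal sketch in Python of the reduce-to-a-formula idea. The metric names and thresholds are hypothetical placeholders invented for illustration, not real Google figures; the point is only that once both parties commit to a formula, the argument reduces to looking up numbers.

```python
# A minimal sketch of the "agree on a formula" move. All numbers and
# thresholds below are invented placeholders, not real statistics.

def is_successful(growth_rate, profit, baseline_growth, baseline_profit):
    """One possible reduction of 'successful company': beats the
    average company on both growth and profit."""
    return growth_rate > baseline_growth and profit > baseline_profit

# Illustration only: hypothetical figures.
print(is_successful(growth_rate=0.30, profit=4e9,
                    baseline_growth=0.05, baseline_profit=1e8))  # True
```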
This can be a very powerful method for resolving debates. I remember getting into an argument with my uncle, who believed that Obama’s election would hurt America because having a Democratic president is bad for the economy. We went through the normal back and forth: he said Democrats raised taxes, which discouraged growth; I said Democrats tended to be more economically responsible and less ideologically driven. We both gave lots of examples, and we never would have gotten anywhere if I hadn’t said “You know what? Can we both agree that this whole thing is basically asking whether average GDP is lower under Democratic than Republican presidents?” And he said “Yes, that’s pretty much what we’re arguing about.” So I went and got the GDP statistics, sure enough they were higher under Democrats, and he admitted I had a point[4].
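The test we settled on can be written down just as mechanically. The growth rates below are invented placeholders, not actual statistics; substituting real historical data is the whole point of the exercise.

```python
# A toy version of the GDP test. The growth rates are hypothetical
# placeholders; plug in real data to run the actual test.

from statistics import mean

annual_growth = [
    ("Democrat", 3.2), ("Democrat", 2.8),      # invented numbers
    ("Republican", 2.1), ("Republican", 2.5),  # invented numbers
]

for party in ("Democrat", "Republican"):
    rates = [g for p, g in annual_growth if p == party]
    print(party, mean(rates))
```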
But people aren’t always as responsible as my uncle, and debates aren’t always reducible to anything as simple as GDP. Consider: Zahra approaches Aaron and says: “Islam is a religion of peace.”[5]
Perhaps Aaron disagrees with this statement. Perhaps he begins debating. There are many things he could say. He could recall all the instances of Islamic terrorism, he could recite seemingly violent verses from the Quran, he could appeal to wars throughout history that have involved Muslims. I’ve heard people try all of these.
And Zahra will respond to Aaron in the same vein. She will recite Quranic verses praising peace, and talk about all the peaceful Muslims who never engage in terrorism at all, and all of the wars started by Christians in which Muslims were innocent victims. I have heard all these too.
Then Paula the Positivist comes by. “Hey,” she says, “We should reduce this statement to testable propositions, and then there will be no room for disagreement.”
But maybe, if asked to estimate the percentage of Muslims who are active in terrorist groups, Aaron and Zahra will give the exact same number. Perhaps they are both equally aware of all the wars in history in which Muslims were either aggressors or peacemakers. They may both have the entire Quran memorized and be fully aware of all appropriate verses. But even after Paula has checked to make sure they agree on every actual real world fact, there is no guarantee that they will agree on whether Islam is a religion of peace or not.
What if we ask Aaron and Zahra to reduce “Islam is a religion of peace” to an empirical proposition? In the best case, they will agree on something easy, like “Muslims on average don’t commit any more violent crimes than non-Muslims.” Then you just go find some crime statistics and the problem is solved. In the second-best case, the two of them reduce it to completely different statements, like “No Muslim has ever committed a violent act” versus “Not all Muslims are violent people.” This is still a resolution to the argument; both Aaron and Zahra may agree that the first proposition is false and the second proposition is true, and they both agree the original statement was too vague to go around professing.
In the worst-case scenario, they refuse to reduce the statement at all, or they deliberately reduce it to something untestable, or they reduce it to two different propositions but are outraged that their opponent is using a different proposition than they are and think their opponent’s proposition is clearly not equivalent to the original statement.
How are they continuing to disagree, when they agree on all of the relevant empirical facts and they fully understand the concept of reducing a proposition?
In How an Algorithm Feels From the Inside, Eliezer writes about disagreement on definitions. “We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass—but is it a planet?” The question, he says, is meaningless. It’s a spandrel from our cognitive algorithm, which works more efficiently if it assigns a separate central variable is_a_planet apart from all the actual tests that determine whether something is a planet or not.
Aaron and Zahra seem to be making the same sort of mistake. They have a separate variable is_a_religion_of_peace that’s sitting there completely separate from all of the things you might normally use to decide whether one group of people is generally more violent than another.
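As a caricature, here is what that buggy cognitive algorithm might look like if you wrote it down. All the figures are hypothetical; the point is the extra flag stored apart from the facts it is supposed to summarize.

```python
# A caricature of the bug: a cached boolean kept separately from the
# observable facts it should be computed from. Figures are hypothetical.

class BeliefsAboutIslam:
    def __init__(self):
        # Empirical variables both debaters may agree on.
        self.fraction_in_terrorist_groups = 0.0001  # hypothetical shared figure
        self.relative_crime_rate = 1.0              # hypothetical shared figure
        # The extra central variable, set by emotion and group identity
        # rather than computed from the variables above.
        self.is_a_religion_of_peace = None

aaron, zahra = BeliefsAboutIslam(), BeliefsAboutIslam()
aaron.is_a_religion_of_peace = False  # identical facts...
zahra.is_a_religion_of_peace = True   # ...opposite flags
```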
But things get much worse than they do in the Pluto problem. Whether or not Pluto is a planet feels like a factual issue, but turns out to be underdetermined by the facts. Whether or not Islam is a religion of peace feels like a factual issue, but is really a false front for a whole horde of beliefs that have no relationship to the facts at all.
When Zahra says “Islam is a religion of peace,” she is very likely saying something along the lines of “I like Islam!” or “I like tolerance!” or “I identify with an in-group who say things like ‘Islam is a religion of peace’” or “People who hate Islam are mean!” or even “I don’t like Republicans.” She may be covertly pushing policy decisions like “End the war on terror” or “Raise awareness of unfair discrimination against Muslims.”
When Aaron says “Islam is not a religion of peace,” he is probably saying something like “I don’t like Islam,” or “I think excessive tolerance is harmful”, or “I identify with an in-group who would never say things like ‘Islam is a religion of peace’” or even “I don’t like Democrats.” He may be covertly pushing policy decisions like “Continue the war on terror” or “Expel radical Muslims from society.”
Eliezer’s solution to the Pluto problem is to uncover the disguised query that made you care in the first place. If you want to know whether Pluto is spherical under its own gravity, then without worrying about the planet issue you can simply answer yes. And if you’re wondering whether to worry about your co-worker Abdullah bombing your office, you can simply answer no. Islam is peaceful enough for your purposes.
But although uncovering the disguised query is a complete answer to the Pluto problem, it’s only a partial answer to the religion of peace problem. It’s unlikely that someone is going to misuse the definition of Pluto as a planet or an asteroid to completely misunderstand what Pluto is or what it’s likely to do (although it can happen). But the entire point of caring about the “Islam is a religion of peace” issue is so you can misuse it as much as possible.
Israel is evil, because it opposes Muslims, and Islam is a religion of peace. The Democrats are tolerating Islam, and Islam is not a religion of peace, so the Democrats must have sold out the country. The War on Terror is racist, because Islam is a religion of peace. We need to ban headscarves in our schools, because Islam is not a religion of peace.
I’m not sure how the chain of causation goes here. It could be (emotional attitude to Islam) → (Islam [is/isn’t] a religion of peace) → (poorly supported beliefs about Islam). Or it could just be (emotional attitude to Islam) → (poorly supported beliefs about Islam). But even in the second case, the statement “Islam [is/isn’t] a religion of peace” gives the poorly supported beliefs a dignity that they would not otherwise have, and allows the person who holds them to justify themselves in an argument. Basically, that one phrase holes itself up in your brain and takes pot shots at any train of thought that passes by.
The presence of that extra is_a_religion_of_peace variable is not a benign feature of your cognitive process anymore. It’s a malevolent mental smuggler transporting prejudices and strong emotions into seemingly reasonable thought processes.
Which brings us back to soft positivism. If we find ourselves debating statements that we refuse to reduce to empirical data[6], or using statements in ways their reductions don’t justify, we need to be extremely careful. I am not positivist enough to say we should never be doing it. But I think it raises one heck of a red flag.
Agree with me? If so, which of the following statements do you think are reducible, and how would you begin reducing them? Which are completely meaningless and need to be scrapped? Which ones raise a red flag but you’d keep them anyway?
1. All men are created equal.
2. The lottery is a waste of hope.
3. Religious people are intolerant.
4. Government is not the solution; government is the problem.
5. George Washington was a better president than James Buchanan.
6. The economy is doing worse today than it was ten years ago.
7. God exists.
8. One impulse from a vernal wood can teach you more of man, of moral evil, and of good than all the sages can.
9. Imagination is more important than knowledge.
10. Rationalists should win.
Footnotes:
[1]: More properly the machine running the multiverse, since this would allow counterfactuals to be meaningful. It would also simplify making a statement like “The patient survived because of the medicine”, since it would allow quick comparison of worlds where the patient did and didn’t receive it. But if the machine is running the multiverse, where’s the machine?
[2]: One thing I learned from the comments on Eliezer’s post is that this criterion is often very hard to apply in theory. However, it’s usually not nearly as hard in practice.
[3]: This sounds like the sort of thing there should already be a name for, but I don’t know what it is. Verificationism is too broad, and empiricism is something else. I should point out that I am probably misrepresenting the positivist position here quite badly, and that several dead Austrians are either spinning in their graves or (more likely) thinking that this whole essay is meaningless. I am using “positivist” only as a pointer to a certain style of thinking.
[4]: Before this issue dominates the comments thread: yes, I realize that the president having any impact on the economy is highly debatable, that there’s not nearly enough data here to make a generalization, et cetera. But my uncle’s statement—that Democratic presidents hurt the economy—is clearly not supported.
[5]: If your interpretation of anything in the following example offends you, please don’t interpret it that way.
[6]: Where morality fits into this deserves a separate post.
So… first of all, I’d like someone to look up the logical positivists and say what it is they actually believed. My impression is that so far as their verbal description of their philosophy went, if not its actual use, they claimed that the meaning of any phrase consisted entirely in its impact on experience, and that no other aspect of it is meaningful. This implies that a theory of photons which had photons vanishing as soon as they crossed the horizon of the expanding universe, and a theory which had the photons continuing undetectably onward, had the same meaning.
If this is not logical positivism, then let me be corrected.
The position you’re describing sounds to me like what I would call reductionism, and I would agree with the caveat that certain meaningful entities can have logical elements—for example, I am willing to consider “the sum of 2 + 2” apart from any particular calculator that calculates it; its meaning is distinct from the meaning of “the result of calculator X” where calculator X is any physical thing I can point to including my own brain. I have no idea if this reflects reality, but I am unable to make my map work without logical as well as physical elements. I am, however, entirely willing to reduce every meaning to some mixture of physical stuffs and abstract computations.
Is there any point in arguing over whether we are “logical positivists” apart from the particulars of the stance? :)
Logical positivists never reached complete agreement about just what the verificationist criterion entailed. (Their inability to meet their own high standards in this regard was their downfall.) For example, I’ve read that some of them considered it meaningless to ask whether there’s life after death. Whether it is meaningless was apparently a matter of debate among them.
From what I’ve read, though, the “mainstream” view among them would be that your two theories have different meanings. As I tried to explain in this comment to your OvercomingBias post “No Logical Positivist I”, they held that meaningful statements had to be logically reducible to descriptions of possible experience. To quote my earlier comment, “They held that if A is a meaningful (because verifiable) assertion that something happened, and B is likewise, then A & B is meaningful by virtue of being logically analyzable in terms of the meaning of A and B. They would maintain this even if the events asserted in A and B had disjoint light cones, so that you could never experimentally verify them both.”
But why take my word for it :)? I’m replying to this comment because I recently came across an article that seems to answer your question. Published in 1931, it was one of the very first articles to present logical positivism to the English-language audience. Here’s the reference:
Blumberg and Feigl, “Logical Positivism: A New Movement in European Philosophy”, The Journal of Philosophy, Vol. 28, No. 11 (May 21, 1931), pp. 281-296
It’s available through JSTOR at the following URL:
http://www.jstor.org/stable/2015437
Here is the relevant excerpt:
It looks to me like observing events beyond the edge of the observable universe is impossible in the “type (b)” sense. But assertions about such events still have meaning, so it would seem to follow that two theories that make different claims about such events still have different meanings.
I’m hesitant to use “reductionism” because I already interpret that to be a belief about the material world (747s made of quarks and so on), not about propositions. I know people who accept material reductionism, but not propositional reductionism.
The real positivists were willing to accept that 2+2=4 was irreducible, since they considered it a tautology/definition and so exempt from testing. I am split: I think in one sense it’s tautological, but that we pay attention to that particular tautology for reasons involving a testable generalization over all cases where two objects have been added to two objects and the result has been four objects.
A.J. Ayer’s Language, Truth, and Logic is brief, to-the-point, bold, and fun to read. All of this to the extent that you may forget why you dislike reading philosophy. I’m pretty sure that Eliezer and Scott would enjoy their time reading it and would get something out of it.
Hah! I would have voted you up just for this. And also for:
Ooh, this is good too:
The “is a religion of peace” example shows one reason to avoid the word “is”.
Speaking in E-prime does not help clear the brain of those cognitive errors, really. I tried it for a while, and it soon became clear that the Blind Idiot God ingrained them far too deeply into our thinking patterns.
Even if they could not explicitly use the word “is”, they would still use the same thinking patterns to equate things with their arbitrary definitions.
Agreed on all counts (I also tried holding myself to writing and thinking in E-prime, several years ago). You can use E-prime without deeply understanding the motivation behind it, in which case you’ll find other clever grammatical structures with which to make the same cognitive mistakes, or you can actually understand the nature of those mistakes and stop making them whether or not you avoid the word “is”.
This is a great post that illustrates an important point. Whenever you make a statement, you’re bringing a large number of beliefs along with you.
Stating that “the car is red” brings along your beliefs about “red”, “car”, and possibly “is”. It’s a good bet that most people have very similar beliefs about the meaning of ‘red’ and ‘car’, so it’s immediately clear what your meaning is.
But with a statement like “God exists” or “Islam is a religion of peace”, you’re dragging along with you a huge number of beliefs, many of which you might not be aware of. And so an argument like that will go round and round until it’s made explicit which beliefs you’re talking about.
One of the main strengths of rationality, in my mind, is that it forces you to consider that hierarchy of belief that sits below every thought and statement.
“All men are created equal” is false insofar as atom configurations go: every human is unique (by quantum indistinguishability, any non-unique humans are the same human). On the other hand it is probably true insofar as the CEV Morality Computation says.
“The lottery is a waste of hope” is true by an expected capital gain calculation (net loss) and putting mental energy into something that is a net loss is worse by utilitarianism than something that is a net gain (working hard and hoping you get rich) because at the very least, having money allows you to donate more to charity.
“Religious people are intolerant,” largely depends on the religion and on how much of its scripture is “boo for unbelievers,” but religion seems to almost always incite ingroup-outgroup dichotomies in the psychology of the believers, and that I think is pretty much what intolerance springs from.
“Government is not the solution; government is the problem” is false, because humans generally need more than ‘fairness’ to avoid defection on the iterated prisoner’s dilemma. Many don’t realise that bureaucracy is fair, if often slow and bloated. A real chance of punishment decided by fair trial is very effective at deterring defectors.
“George Washington was a better president than James Buchanan,” depends solely on the criterion. People’s opinions? Historical attitude? GDP growth rates? Percentage of votes won at election? Your own boo/yay rating? Mix and match as you like.
“The economy is doing worse today than it was ten years ago,” depends on whether you look at GDP growth or GDP: the global economic crisis means lower growth, but we are still richer than ten years ago.
“God exists,” very unlikely, courtesy of Solomonoff Induction.
“One impulse from a vernal wood can teach you more of man, of moral evil, and of good than all the sages can,” bluh? Is this some sort of pop culture reference? Depends on what vernal wood is: if it is something akin to the Akashic record granting omniscience, then it is true. If it is anything else, probably not.
“Imagination is more important than knowledge,” false: the more knowledge you have, the more your lawful creativity can search, the better your imagination, the better you can use what you have gained with the eleventh virtue. Both are roughly equally important.
“Rationalists should win,” mathematical tautology. Perfectly rational bayesian expected utility maximizers do just that. As humans, it is a good heuristic to avoid privileged rituals of thought.
There can be value in tautology for the purpose of drawing attention to an important point: “oh, I’m not winning, I am not a rationalist, then.”
Exactly. When you are currently not holding one million dollars from one-boxing, then you are being irrational (assuming monotonically increasing utility from money), and should self-modify accordingly.
Just to clarify: the way you use the phrase, would you say a perfectly rational Bayesian expected utility maximizer takes one box or two in Newcomb’s problem? Plenty of people would claim that that particular combination of terms refers to a particular kind of agent (and meaning of ‘rational’) which two-boxes. The phrase “Rationalists should win” comes built in with the unambiguous “one box” prescription. Those people would therefore either say that the phrase “rationalists should win” is tautologically false or perhaps insist on different language.
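For concreteness, here is a toy expected-value calculation, assuming (purely for illustration) a predictor that is right 99% of the time. This is the naive calculation that favors one-boxing; whether a “rational” agent should accept it is exactly what is in dispute here.

```python
# Toy expected-value table for Newcomb's problem. The 99% predictor
# accuracy is a hypothetical illustration, not part of the problem spec.

ACCURACY = 0.99  # hypothetical predictor accuracy

ev_one_box = ACCURACY * 1_000_000            # box B is full iff one-boxing was predicted
ev_two_box = ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

print(ev_one_box)  # 990000.0
print(ev_two_box)  # 11000.0
```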
The BayRatUtilMax agent I am talking about is of course running the One True Decision Theory which one boxes, is immune to acausal blackmail, and all sorts of other nice features.
It’s a Wordsworth quote. Also “vernal wood” is like “vernal pool”, where vernal means ‘of or pertaining to Spring’. It is reducible and false.
Uniqueness does not imply inequality without the additional assumption of a measure of ‘value’ which it seems the other speaker clearly doesn’t share.
‘Equal’ meaning ‘Copy’ is false, ‘Equal’ meaning ‘Value in Ethical Utilons’ is true.
Privileged Rituals of Thought?
“If you are so smart why aren’t you rich.”
Your rituals of thought should pay rent too. If you follow the way of ‘rationality’ and bad things happen, then you should find out what you are doing wrong, and act accordingly. There is no such excuse as ‘but I did everything I am supposed to’.
Huh?
Imagine a lottery with a $500 prize, 100 tickets sold for a dollar each. The rational thing to do is buy every ticket you can. But you get to the sales office too late, and one ticket has already been sold. You buy the remainder, but don’t win the lottery. You ended up losing money, but you did everything right, didn’t you?
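A quick sketch of the arithmetic, using the numbers from the example above:

```python
# 100 tickets at $1 each, $500 prize, and you buy the 99 remaining tickets.

tickets = 99
cost = tickets * 1            # $99
p_win = tickets / 100         # 0.99
expected_value = p_win * 500 - cost

print(p_win, expected_value)  # 0.99 396.0 -- positive expected value, so
                              # buying was "right" even in the world where you lose
```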
Well, rationalists should end up “winning” insofar as winning means “doing better than non-rationalists ON AVERAGE”.
Then again, it doesn’t mean all rationalists end up living to 120 years old and extremely rich. If you are a non-rationalist born with a billion dollars in your bank account you’ll probably end up richer than a rationalist born in North Korea to a poor family, with no legs and no arms.
But on the other hand, if you cannot identify the causes of your defeats as completely independent of yourself, it probably means you are doing something wrong or at least not optimally.
In the lottery example above, there are 99 other worlds where the rationalist who bought the tickets is better off than the man who did not (unless the lottery is rigged, in which case the rationalist is the one who realised that this smells funny and doesn’t buy tickets). Or more intuitively, if there are a lot of such lotteries, the rationalist buying the tickets every time will end up richer than the man who doesn’t.
IN YOUR LIFE, there is probably enough such “lotteries” for you to end up better off if you are rationalist than if you are not, and reliably so.
(and “you did everything right” but maybe the right thing to do would have been to arrive at the sales office earlier).
On the other hand, you should expect such a thing to only happen 1% of the time on average, so if you’re consistently unlucky for a long period of time, odds are you’re doing something wrong.
It is not in principle possible to do better in that scenario. It is in principle possible to do better than, say, two-boxing on Newcomb’s problem, even though a CDT agent always does that.
If I randomly get hit by a meteor, there isn’t a lot I could have done to avoid it. If I willingly drive faster than the speed limit and get myself killed in an accident, there isn’t a lot of excuses for why not to abide by the speed limit and survive.
Btw, an excellent book on this topic (which I have just read) is Robert F. Mager’s “Goal Analysis”—it gives a rather detailed procedure for reducing “fuzzies” to testable statements, and even goes a bit more into some of the ramifications you mention here, though the emphasis is on reducing business and education concepts like “we want people to take pride in their work” and “we want students to be serious about their education”.
I would like to give an example that I think fits here. (Do you think it fits?) It has to do with the common practice of summarizing the whole abortion debate with whether or not “life begins at conception”. Putting the debate entirely aside (of course!) this is my favorite example of words that have no meaning outside of identifying yourself with this group or that group, and thus is the beginning and the end of terribly sloppy thinking.
Does life begin at conception? We need to define “life” first. OK, anything that is composed of cells and metabolizes is some sort of common definition. For simplicity, let’s call the unit of life an “organism”. So we are asking if an organism first begins living at conception. Sort of obviously “yes” and totally irrelevant when you think about what the words actually mean, yet somehow this frames the debate for many people.
I agree. This is part of the problem someone (maybe Phil Goetz?) was mentioning before about how our society has this very binary mode of thinking where taking life is always automatically bad. The solution here is to taboo “life” and find the disguised query you’re really wondering about. In my case, the question is whether the abortion causes suffering to an entity capable of feeling suffering. In the case of a fetus, arguably yes; in the case of an egg, very likely no.
I’m also sympathetic to logical positivism. I mentioned in the OB thread on behaviorism that I inclined toward the theory being scoffed at, but the experiments regarding mental rotation that Pinker has written about show it to be an inferior theory of cognitive science to a computational one. Still better than psychoanalysis though!
Eliezer’s posts on entangled lies and the pebble make me skeptical of whether the cake-in-sun example qualifies. The photons-outside-light-cone example is better. Quine’s Two Dogmas of Empiricism convinced a number of people that pragmatism was better than positivism, and on a pragmatic level we could say we just don’t care about those photons.
I’m not sure that epistemic ‘verifiability’ is really a helpful notion here, so I wouldn’t call this any kind of ‘positivism’. Better, I think, to define your thesis directly in terms of metaphysical reduction. For example, it seems a bit of a stretch when you write:
“any statement corresponding to a state of the material universe, reducible in theory to things like quarks and photons, testable by a being who has access to the machine running the universe and who can check the logs at will—such a statement is meaningful”
It’s a vivid heuristic, I guess, but it looks like the underlying idea you’re really getting at here is simply the conjunctive claim that (i) there is a privileged class of fundamental “base facts” that specify the contingent state of the universe, and (ii) any meaningful statement must supervene on (or be reducible to) said base facts.
I discuss this more in my old post, ‘Verification and Base Facts’: http://www.philosophyetc.net/2007/05/verification-and-base-facts.html
One point worth noting is that, although most folks here happen to be physicalists, there’s no principled reason why a “soft positivist” couldn’t be a Chalmers-style property dualist, i.e. including phenomenal properties next to physical properties in the “base facts” to which all else reduces. After all, we can imagine our hypothetical observer “checking the logs” of the universe, and seeing—not only that chocolate cake briefly appeared in the center of the sun (being instantly consumed in a way that nobody inside the universe had any way to detect), but also that Eliezer became a phenomenal zombie for a day (in a way that nobody inside the universe had any way to detect).
Of course, you might have other reasons to reject property dualism—I don’t want to get into the zombie debate here. My point is simply that it seems compatible with the core reductionist idea behind Yvain’s so-called “soft positivism”. This demonstrates just how far this view is from old-fashioned positivism and its concerns about (intra-world) verifiability.
P.S. Html doesn’t work. What’s the comment markup code (blockquotes, hyperlinks, etc.) for this site?
While editing a comment, click “Help”.
Thanks! (I didn’t see that, somehow.)
As I recall, this interpretation was disputed in the comments on that post. In addition to referring readers there, let me also direct those interested in learning more about logical positivism to this interview with A.J. Ayer. If memory serves, he mentions espousing a view of ethical and aesthetic statements similar to Yvain’s interpretation of statements like “Islam is a religion of peace” (often called emotivism, or the “boo-hurray theory”).
I was one of the ones disputing in those comments, but I have to admit that g’s “The absolute is an uncle” example stopped me short. I developed the “logs of the universe” theory in response to that, but I’m not sure how good a response it was—hence footnote 2, about how it was easy in practice but kind of hard in theory.
Emotivism is very relevant here and I was thinking of making a followup post on it. Good catch.
This is a terrific post! It begins to arrive at what is really crucially wrong with the majority of thinking. As someone who only feels comfortable with statements when terms are well-defined and verifiable, I must be a natural positivist. Granted, it’s a bit bizarre that the question, “How are you?” throws me for a loop unless the context is very clear (the doctor’s office is OK), but I wish that such concepts would catch on with the media—I mean those who really mean to report the news, so I could tell them apart from those who babble lots of impressions and other nonsense.
My instinct is that basically everything real humans talk about in practice is reducible, with the exception of ought-statements, and even ought-statements are reducible if there is some clear goal in the context that all parties will agree with. When people talk, they are usually talking about something.
So, 1, 3, and 7 are clearly just claims about reality. Reducing them should pose no difficulty. 10 is also a claim about reality, using “should” not in the sense of “ought”, but rather in the sense of “would be expected to”. 2 and 6 are technically speaking value claims, but it should be pretty easy to agree on a shared basis for determining value that would let us evaluate them. 4, 5, and 9 are value claims where the basis for determining value is likely to be more controversial. Eliezer believes in a fundamental human value system, but I’m more cynical. 4, 5, and 9 strike me as unresolvable, even though they are not in principle meaningless. 8 is poetry, which violates the condition that it should be something “real humans talk about in practice”. Much poetry is not meaningful in an ordinary sense, any more than a melody is meaningful.
But there’s no way to test it (at least without time machines and sundiver ships, which we can’t prove are possible) so the logical positivists would dismiss it as nonsense.
Objection! We don’t know that there’s no way to test it. And if we do presume that to be the case, then there are no differences between cake-formed-in-sun and cake-didn’t-form-in-sun—no differences whatsoever, since even a single one would permit a test to distinguish between the two.
If “X happened” and “X didn’t happen” have exactly the same consequences, they describe exactly the same thing, and thus “X” is meaningless. If we imagine that “X happened” really does have consequences if something else also happened, and that thing didn’t occur, talking about X is now pointless.
It should be noted that you’re taking a pragmatist stance here, which has a close family resemblance to positivism but in some lights they’re considered rivals.
Going off on a tangent here about the ‘religion of peace.’
When discussing migration to Europe the relative crime rates of Muslim immigrants often pop up, so once I decided to sit down and look them over. In Denmark, which keeps pretty meticulous statistics, migrants from Muslim countries have a roughly 2.5x higher violent crime rate than the mean. This might sound like a lot, but then I started comparing it with other countries:
If we just look at homicides (the crime statistic that most people care about and which is hardest to fake), Denmark has a very low level (1 per 100,000 inhabitants per year), and most Middle Eastern countries have one of 2 to 3 if we exclude the ones currently engaged in civil war. So the rate among migrants was actually exactly what you should expect if people kept to their cultural norms.
Now compare this to places like the US (6 per 100,000), or Mexico (almost 30). Even if we look at the state of Minnesota, which was mainly settled by Scandinavians, we find a rate of 3, higher than most peaceful Middle Eastern countries.
If you look further east to Malaysia and Indonesia you will find even lower homicide rates. Malaysia has around 1/5th the rate of neighboring Thailand.
Though there might be more reasons for these statistics, it does strike me that Islam might be a religion of peace insofar as it has found a simple ‘peace heuristic’ that makes people less likely to kill each other than in other comparatively wealthy or well-functioning places.
2. The lottery is a waste of hope. [I’d say that statement reduces to: the odds of winning the lottery are lower than the odds of the average person getting millions of dollars by some (any) other process. Is there any other concrete action that the average human can perform; i.e., any other concrete scenario that the average human can focus on, and feel hope about actually occurring, that is more likely than winning the lottery after buying a lottery ticket? Maybe starting their own company? The odds of getting rich doing that are small too, though no doubt much larger than the odds of getting rich winning the lottery. But the effort required is much larger too.
But the statement is about ‘hope’, not about increasing actual average wealth.
So, I’d have to say the average lottery ticket buyer is perhaps not irrational if he is buying hope. (i.e., if what he wants to buy is hope, something to get him through the depression of knowing he’s never going to be able to quit his shit job, or live in a great house, or do all the fun stuff that rich people do, or have lots of girls chasing him cause he’s rich…
On that basis, I’d say the statement is demonstrably untrue… hope is a valuable commodity, since it makes you feel good, even if it doesn’t pan out, and thus, perhaps very much worth buying.
The statement says that the lottery is a waste of hope. I say that hope is very often not a waste even if what you are hoping for never pans out. ]
I don’t think you have positivism right. It sounds like you’re talking about something related to positivism (or its neighbor, pragmatism) but I doubt most positivists would agree with you.
For example, there is no reason “Google is a successful company” can’t be empirically verified, so long as you fix the meaning of the proposition in a way that it can. If you have a clear definition of ‘successful’ and ‘company’ (and the other words, of course) then it will be clear whether Google qualifies.
Great post! Here is how I reduce your examples to statements about the territory.
(“Too ambiguous” means “Different humans who say this refer to different statements about the territory, and the different reductions immediately split off in contradictory directions.”)
1. All men are created equal. “It is only right to craft the law such that one’s actions screen off the parameters of one’s birth.”
2. The lottery is a waste of hope. “Presumably you think it is right and proper for there to exist a correlation between your emotion of hope, your actions, and your rational expectation of future wealth. Feeling hopeful after buying a lottery ticket seems to weaken the desired correlation.”
3. Religious people are intolerant. Too ambiguous.
4. Government is not the solution; government is the problem. Too ambiguous.
5. George Washington was a better president than James Buchanan. “If James Buchanan had been the first president instead of George Washington, commonly agreed upon metrics of national quality would be lower.”
6. The economy is doing worse today than it was ten years ago. “The economy is doing worse today according to agreed-upon metrics.”
7. God exists. Too ambiguous.
8. One impulse from a vernal wood can teach you more of man, of moral evil, and of good than all the sages can. Not sure the degree to which humans will share a dereferencing of this pointer. I don’t understand the “one impulse from a vernal wood” part myself.
9. Imagination is more important than knowledge. Too ambiguous.
10. Rationalists should win. This statement actually captures a unique thought about what “rationality” means, and it’s meaningful because it proposes this fact about the universe: We in this community are motivated when we think the thought that you thought when you read that statement.
Oddly, when I tried to reduce #10 to show its relationship to the territory, I had to stop at the level of human minds processing the statement. I couldn’t follow through and describe what referent the readers’ minds should point to.
It is written:
So #10 reduces to a pretty unambiguous thought, but exactly how that thought relates to the territory is what the LW community is trying to work out.
1 and 2: Taboo the word “right”. You haven’t come anywhere close to the territory.
Eliezer can use “right” casually in this discussion, thanks to his 10,000-word reduction of it on OB; I may or may not agree with his reduction, but at least I know what sort of evidence would verify or falsify a claim he makes about “right” and “wrong”.
The rest of us should probably be more cautious with those words when verifiability is in the air.
Yes, we know “right”’s relationship to the territory has to do with the complexities of the brain. But the entire “right” module can still be part of a reduction.
Tabooing is useful for:
Making sure your reduction attempt isn’t circular
Figuring out what a speaker actually means when they use a word whose referent is ambiguous (like “make a sound”) or just points to their own confusion (like Searle’s “semantics”)
No need for it here.
In that case, I don’t disagree on substance. But as a matter of clarity,
seems too casual for good communication; it’s what a person would say who takes “right” and “wrong” to be simple ontologically basic properties of the universe.
On the other hand, the less misleading locutions I can think of would be pretty unwieldy in everyday conversation. I wonder if there’s better language we can use or invent to talk about morality from this perspective...
“All men are created equal” has multiple interpretations. The commonly intended one is an axiological statement about what rights we ought to give to people, so is not something you can argue about.
“The lottery is a waste of hope” is a complex mixture of empirical statements and axiological statements. The axiological statements are things like “it is better for people to improve their lives than delude themselves into thinking that their lives are good”.
“Religious people are intolerant” is empirically testable and true.
“Government is not the solution; government is the problem” is mostly testable and mostly false, though it does include some axiological component.
“God exists” is either testable and false, or complete nonsense; religious apologists tend to switch interpretations.
“Imagination is more important than knowledge” is not well defined enough to be testable, though there are strict interpretations involving the balance between creativity and rigor and knowledge in science that could be tested.
“Rationalists should win” is a definition, not a claim.
Does “axiological” = “axiomatic”?
Dammit, no. I’ve wasted lots of time arguing against this on OB. You can’t define “rational” as “winning”. “Rational” is an adjective applied to a manner of thinking. Otherwise, you would use the word “winning”. If you say that it’s a definition, what you’re really doing is saying that we can’t criticize people who say “rationalists always win”. But when someone says that rationalists always win, they are making claims about the world. You can derive from that statement expectations about their beliefs about the Prisoner’s Dilemma and the Newcomb Paradox. If it were definitional, you couldn’t make any predictions about their beliefs from their statement.
Based on the original Newcomb Problem post, I would say this statement has a definitional, an empirical, and a normative component, which is what makes it so difficult to unpack. The normative is simple enough: the tools of rationality should be used to steer the future toward regions of higher preference, rather than for their own sake. The definitional component widens the definition of rationality from specific modes of thinking to something more general, like holding true beliefs and updating them in the face of evidence. The empirical claim is that true beliefs and updating, properly applied, will always yield equal or better results in all cases (except when faced with a rationality-punishing deity).
And even there, arguably, the true beliefs of “this deity punish rationality” and “this deity uses this algorithm to do so” could lead to applying the right kind of behaviour to avoid said punishment.
“Religious people are intolerant” is testable and true.
The way this sentence is constructed (“X is a subset of Y”), you know that it is false if there is just a single counter-example. To falsify this statement you just need to find a single religious person who is tolerant. So it’s probably (!) false even if it’s generally true.
You probably shouldn’t be muddling the issue by declaring the statements true/false. That’s not what the exercise is about, after all, and it tempts people like me to dispute that religious people are actually intolerant, and point to the recent posting about “tolerating tolerance”.
Yeah, ok, feel free to ignore the epistemic judgements of the form “X is true/false”
No, no; one can never forget that ‘can teach’ is a two-place word. It would be sad to be able to learn more from one impulse of a vernal wood, but I, for one, can imagine a situation where it is strictly, if trivially, true. It would involve lack of time and a desperate need for introspection.
I don’t know about easily, given their accounting practices...
The point about democratic presidents correlating with a good economy could have inverse causality to what you were assuming: it is known that people are more inclined to vote conservatively when times are hard.
When it refers to the party, “Democratic” is capitalized.
I believe this post would benefit from some trimming.
I usually choose to avoid debates about whether something is meaningful or meaningless. The point is, thinking and communication are usually more productive when people strive to focus on claims which are verifiable.
I would first argue about the term “created”. If we take it out, some possible reductions include:
Each man is identical.
Two quantum particles of the same type taken from two different human beings are indistinguishable. The same holds for Pluto.
Our society should aim at giving each of its members an equal chance in life.
All men are dumb.
If you keep playing the lottery (with a finite initial investment), it is certain that you will lose in the long run.
People that claim to be religious are on average more intolerant than the rest of the population (intolerance level is measured by a hidden jury during a role playing situation).
The current implementation of government is imperfect.
In March 2009, a majority of Americans claim that GW was a better president than JB.
Specialists of fields A, B and C judge that GW was a better president than JB while those of fields D, E and F think the converse.
The God described by religion X exists and most of the claims of that religion are true.
There is an entity that created the universe and that can still interact with it. This is experimentally verified using X.
There is an entity that created the universe and that no longer has any interactions with it (not testable).
Of the participants showing the minimal knowledge requirement X (evaluated by test A), those that are more imaginative (evaluated by test B) were significantly better at task Y. No such correlation is observed for test A (as long as requirement X is met).
When given the same amount of information, rational agents fare better on average at any specified task than all other kind of agents. (Add some restrictions for physical strength etc.)