Rationality Quotes March 2012
Here’s the new thread for posting quotes, with the usual rules:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself
Do not quote comments/posts on LW/OB
No more than 5 quotes per person per monthly thread, please.
-- Benjamin Franklin, Letter to Joseph Priestley, 8 Feb 1780
One of the first transhumanists?
The hard core of transhumanism goes back to at least the Middle Ages, possibly earlier.
Interesting. Which particular philosophers do you have in mind?
Primarily, I had the Arabic-speaking philosophical alchemists in mind, but there are others. If there is significant interest, then I will elaborate further.
Okay, 2 comments and 3 upvotes is good enough for a quick comment but not a discussion post.
By the “hard core of transhumanism” I mean the belief that humans could use reason to obtain knowledge of the natural world that we can use to develop technologies that will allow us to cure sickness, eliminate the need to labor, and extend our lifespans to greater-than-human levels, and that we should do these things.
During the Islamic Golden Age, many thinkers combined Aristotelianism and Neoplatonism with knowledge from indigenous craft traditions into a form of alchemy that was refined using logic and laboratory experimentation (Jābir ibn Hayyān is probably the most famous of these thinkers). These philosophers and technologists believed that their theoretical system would allow them to perform transmutation of matter (turn one element into another), unlocking the ability to create almost any “machine” or medicine imaginable. This was thought to allow them to create al-iksir (the elixir) of Al Khidr fame which, in principle, could extend human life indefinitely and cure any kind of disease. Also of great interest was the attainment of takwin, which is artificial, laboratory-created “life” (even including the intelligent kind). It was hoped (by some) that these artificial creations (called a homunculus by Latin speakers and analogous to the Jewish golem) could do the work of humans the way angels do Allah’s work. Not only could these AIs do our work for us, they could continue our scientific enterprise. According to William Newman, such an artificial being “...of the pseudo-Plato and Jabir traditions could not only talk—it could reveal the secrets of nature.” Sound familiar?
Was there any speculation about the Friendly takwin problem?
Not that I know of, but you would think they would have, since they were familiar with how badly you could end up screwing yourself dealing with Jinn, even though the Jinn would do exactly what you told them to (literally). There are a great many Arabic texts that historians of science have yet to take a look at. Who knows, maybe we’ll luck out and find the solution to the FAI problem in some library in Turkey.
Might also have been an attitude like a lot of people have today, along the lines of:
Interested.
I’m interested as well.
Put me down as “interested”.
Does Imitation of Christ count as transhumanism, or is it too ideologically distinct?
I would say no, because there isn’t enough emphasis on technology as the means of achieving post-humanity.
“Be perfect, like an FAI is perfect.”—Jesus
We’ve made really decent progress in only two hundred and thirty-odd years. We’re ahead of schedule.
Benjamin Franklin sure knew how to use the caps. I miss the old days.
The Germans of his day put him to shame.
In fact they still do.
--Joseph de Maistre, Les soirées de Saint-Pétersbourg, Ch. I
I think this quote implies that most false opinions were deliberately invented to further someone’s agenda, and I don’t think that’s true. People’s brains just aren’t optimised for forming true opinions.
(This is something of a sore point with me, as I’ve met too many religious people who challenge atheism with “What? You think [famously good guy X] was lying?”)
And if you say that “guilty” here means not bothering to properly investigate before forming an opinion, then those who continue circulating it are equally guilty for not bothering to investigate before accepting an opinion.
Which exemplifies why “faith” isn’t about belief in propositions so much as it is about trust in individuals (including imagined or possible individuals). Many religionists will even tell you so out front: that while the creed is important, having a trust relationship with God (or Jesus, or the Church, or a guru, etc.) is what their faith is all about.
Some guilt also falls onto those who are not eager enough to verify those opinions or the money they circulate.
The man on the top (at the beginning) is NOT guilty for everything.
To my way of thinking, it’s quite possible for me to be fully responsible for a chain of events (for example, if they would not have occurred if not for my action, and I was aware of the likelihood of them occurring given my action, and no external forces constrained my choice so as to preclude acting differently) and for other people upstream and downstream of me to also be fully responsible for that chain of events. This is no more contradictory than my belief that object A is to the left of object B from one perspective and simultaneously to the right of object B from another. Responsibility is not some mysterious fluid out there in the world that gets portioned out to individuals; it’s an attribute that we assign to entities in a mental and/or social model.
You seem to be claiming that models wherein total responsibility for an event is conserved across the entire known causal chain are superior to mental models where it isn’t, but I don’t quite see why I ought to believe that.
My instinct tells me that dividing one unit of responsibility per outcome among the responsible actors is doomed to reduce to “the full responsibility is equally divided across all the states of the Universe leading up to this point, since any small difference could have led to a different outcome.” This would make it awfully similar to the argument that no human can be responsible for any crime in a deterministic universe, since they did not have control over their actions.
To me, it feels anti-bayesian, but I lack the expertise to verify this.
I don’t endorse the model of “1 responsibility per outcome” that can be divided.
Neither do I endorse the idea that responsibility is incompatible with a deterministic universe.
Also, I have no idea what you mean by “anti-bayesian” here.
It took me a while, but his post made much more sense to me once I realized he was agreeing with you.
Oh!
Huh.
Yeah, I see what you mean.
Heh, sorry, kind of skipped the preamble there.
Yes, the post was in agreement with you, and attempting to visualize / illustrate / imagine a potential way the model could be shown to be flawed.
As for feeling “anti-bayesian”, the idea that a set amount of responsibility exists to be distributed over actors for any event seems completely uncorrelated with reality and independent of any evidence. It feels just like an arbitrary system of categorization, like using “golborf” as a new term for “LessWrong users that own a house, don’t brush their teeth daily, drink milk daily, enjoy classical music and don’t work in IT-related fields”.
That little feeling somewhere that “This thing doesn’t belong here in my model”, that there are freeloading nodes that need to be purged.
I’m very surprised that this is so upvoted, other than the fact that some of the LW crowd really loves 19th century right-wing writers. The statement is patently untrue.
Even in regard to hard-line reactionaries themselves and their political circumstances; did de Maistre think that Voltaire or Rousseau or even Robespierre ever consciously produced “false opinions” to befuddle the masses?
No way; even later conservatives, like Burke and Chesterton, have admitted that if the French Revolution went wrong somewhere (and Chesterton thought it was off to a good start), it must have been a mistake, not a crime.
I know ~nothing about the historical events which you allude to, but I upvoted the quote because experience tells me it’s very true in real life. E.g. a journalist writes a news article that contains lies about its subject matter, and the link to the article gets widely shared by honest people who presume that it’s telling the truth. Or a dishonest scientist makes up his data, and then gets cited by honest scientists.
Oh. In that case, well, it’s true about local “opinions” but false about views on global things. Like the so-called free market (which is mostly not free) or the so-called democracy (which is mostly not ruled by the People): I believe that most nominally educated people today have a pretty reasonable assessment of their value: they kinda work, and even bring some standard of living, but do so very ineffectively. So the only “false opinions” on this scale are just ritual statements semi-consciously produced out of fear of empowering the enemies of the present structure. I might make a great and benevolent dictator, but I can’t trust my heir; so I’d rather endorse “democracy” steered by experts. Both the “democracy” and the “free market” are part of what we are, therefore we must defend them vigilantly.
Fortunately, we’re leaving such close-mindedness behind. Unfortunately, we might have the illusion of not needing any other abstract concepts to use for our social identity. Humans always do! If we don’t believe in Democracy, then we must believe in the Catholic Church, or Fascism, or Moldbuggery, or Communism, or Direct Theocracy (like in Banks’ Culture). But believe we will.
This sounds somewhat like the assertion, usually made by religious critics of science, that “everyone believes in something; your faith is in Science” (or Darwin, or the like). Would you care to distinguish these assertions?
I don’t think it’s a very good quote but I’d guess that the majority of readers didn’t know/notice/remember he was a 19th century right-wing writer. As such few people would associate this quote with opposition to the French Revolution, or even politics—people would first think of such things as religions.
And I’d put money on Mohammed, Joseph Smith, and the Apostle Paul having been deliberate conmen. (I’m leaving out Jesus, because I’d put odds on him being just delusional.)
-Michael “Kayin” O’Reilly
Or, as the Language Log puts it:
It’s Language Log, without the, goddammit!
Without the what? That isn’t grammatical.
Without the fnord, of course.
What “of course”?
Upvoted under the presumption that you’re being ironic.
Why, do you say “Less Wrong”, or “the Less Wrong”?
Swap out “grammar” and “style” for “morality” and “ethics”?
Disagree strongly. What the heck is “evidence” for morality? Unless “emulate X” is one of your values, your ethical system needn’t aspire to approximate anything.
But if you are settling a question of morality, I take it as being a question between multiple people (that’s not explicit, but seems to be implicitly part of the above). One’s personal ethical system needn’t aspire, but when settling a question of group ethics or morality, how do you proceed?
Or for that matter, how do I analyze my own ethics? How do I know if I’m achieving ataraxia without looking at the evidence: do my actions reduce displeasure, etc? The result of my (or other people’s) actions are relevant evidence, providing necessary feedback to my personal system of ethics, no?
Just so we’re clear, I’m using “ethics” and “morality” as synonyms for each other and for “terminal values”.
If you’re settling a dispute, there’s no objectively true meta-morality to appeal to in the way that people’s actual speech is the objectively-there state of a language. One party wants some things, the other party wants other things, and depending on what the arbitrator wants, and how much power everyone involved has, the dispute will be settled in a certain way.
As for how you analyze your own ethics: You can’t, as far as I know. The question of e.g. “do my actions reduce displeasure?” is only relevant once you’ve decided you want to reduce displeasure. We make decisions by measuring our actions’ impact on reality and then measuring that against our values, but we’ve got nothing to measure our values against.
One of my favorite things about many constructed languages is that they get rid of this distinction entirely. You don’t have to worry about whether or not “Xify” is a so-called real word for any given value X; you only have to check whether X’s type fits the pattern. This happens merely because it’s a lot easier, when you’re working from scratch anyways, to design the language that way than to have to come up with a big artificial list of -ify words.
-- Neil DeGrasse Tyson
Fits this one, two out of three.
I think he’d do better if he just made up his mind. I’d go with the second one.
watch out folks, we got a badass over here
I’d go with the first one. But then again I’m a selfish bastard.
-- Dinosaur Comics
--Alain de Botton
How poignant for me, since every last bit applies.
--Morris Raphael Cohen, quoted by Jacob Cohen in “The Earth Is Round (p < .05)”
--Diane Duane, High Wizardry
I can’t remember anything about those books, other than that I liked them...
--Alain de Botton
(Perhaps this individual quote is insightful (I can’t tell), but this sort of causal analysis leads to basic confusions of levels of organization more often than it leads to insight.)
Can you give an example of how that leads to confusions about levels of organization?
-Seth Godin
A. I’m not entirely sure that things that used to be human nature no longer are. We deal with them, suppress them, sublimate them, etc. Anger responses, fear, lust, possessiveness, nesting: the animal instincts of the human animal. How those manifest does indeed change, but not the “nature” of them.
B. We live (in the USA) in a long-term culture of anti-intellectualism. Obviously this doesn’t mean it can’t change… Sometimes it seems like it will (remember the days before nerd-chic?), but in a nominally democratic society there will always be a minority of people who are relatively “intellectual” by definition. We should recognize that you don’t have to overcome anti-intellectualism, you just have to raise the bar. While still anti-intellectual, in many ways even the intentionally uninformed know more than the average person did back in the day. (Just as there will always be a minority of people who are “relatively tall”, even as average height has tended to increase over the generations.)
Which type of anti-intellectualism are you referring to?
Interesting. If my experience is representative, then a sizable subset of Less Wrongers are what the author calls epistemic-skeptical anti-intellectuals.
It seems slightly odd that there are many on LessWrong whose justification for not looking deeply into the philosophy literature is that philosophers “are too prone to overestimate their own cleverness” and end up shooting their own philosophical feet off, but that subset of LessWrong doesn’t seem to overlap much with those who are epistemic-skeptical anti-intellectuals in the more political sense. Admittedly my own view is that the former subset is basically wrong whereas the latter is basically right, but naively viewed the two positions would seem to go together much as they do with neoconservatives. …I feel like I’m not carving up reality correctly.
I’m probably referring to all of the above. That’s an interesting speciation of anti-intellectualism, but I am meaning it in the broad sense, because I’ve seen all of them.
If someone calls me a “liberal elitist”, is it version 1, 3, or 5? Does the class issue also result in a gut reaction? Is the traditionalism directly related to the totalizing? I understand the differences as described in the article, but I’m not sure they are easily separable. Sometimes yes, but not always. So: A. I think the differences are interesting, and useful, but not always clearly delineated; and B. when generalizing about a group, I’m not sure it’s necessary. If I say “New Yorkers really like dogs”, it’s probably not critical which breed I mean. If I say “that person really likes his/her dog”, then it matters more.
(and we all know that when you generalize about things it’s like when you assume things: it makes a general out of I and, um, ze)
As relates to the original quote: which type was Godin referring to? He talks about being ashamed at being uninformed, which touches on 1 and 5, possibly 2, and interacts with 3. (Poor type 4.) One of the things we’ve slowly seen is the other side: being unashamed at being informed… or politically unpunished, for that matter. Politicians want to be “regular people” because they are berated for using subclauses in sentences (John Kerry), for being a know-it-all (Gore), elitist (everyone, per Palin), destroying the fabric (Obama), utopianism (the ’90s Clintons), etc...
What really entertained me about this clause is that I spent a noticeable period of time trying to remember which of the many competing novel pronoun schemes “ze” was in, before realizing from context that it had to be a second-person pronoun and wondering why would we create a new second-person pronoun given that the English “you” is already ambiguous about gender and number and basically everything else, and only then did my parsing of the rest of the sentence catch up and make me realize it was a joke.
Wait, Google says nobody’s posted this joke on LessWrong before?
...
A philosopher, a scientist, and a mathematician are travelling through Scotland, gazing out the window of the train, when they see a sheep.
“Ah,” says the philosopher, “I see that Scottish sheep are black.”
“Well,” says the scientist, “at least we see that some Scottish sheep are black.”
“No,” says the mathematician, “we merely know that there exists at least one sheep in Scotland which is black on at least one side.”
“Actually,” says the stage magician, “we merely know that there exists something in Scotland which appears to be a sheep which is black on at least one side when viewed from this spot.”
Ayn Rand
Making the (flawed) assumption that in a disagreement, they cannot both be wrong.
Also, they could be wrong about whether they actually disagree.
IME that’s the case in a sizeable fraction of disagreements between humans; but if they “let reality be [their] final arbiter” they ought to realize that in the process.
I have also heard it quoted like this.
Perhaps, but it is rather unlikely that they are equally wrong. It is far more likely that one will be less wrong than the other. Indeed, improving on our knowledge by the comparison between such fractions of correctness would seem to be the whole point of Bayesian rationality.
I think that if the other person convinces you that they are right and they are right, then it should count as “winning the argument”. It’s the idea that has lost, not you.
--SMBC Theater—Death
Karkat from Homestuck by Andrew Hussie
I’m also fond of:
Karkat’s just full of these gems of almost-wisdom.
5-Second Films looks at past self and present self (NSFW written language).
Me: “The BOFH stories are just stories and certainly not role models. Ha! Ha! Baseball bat, please.”
Boss: “The DNS stuff is driving me batty, but I’m not sure who needs taking into a small room and battering.”
Me: “Your past self.”
Boss: “Yeah, he was a right twat.”
(I was thinking of Karkat, too.)
Mencius Moldbug, A gentle introduction to Unqualified Reservations (part 2) (yay reflection!)
— Aleksandr Solzhenitsyn, The Gulag Archipelago
-Daniel Kahneman, Thinking, Fast and Slow
Yeah, a good compression algorithm—a dictionary that has short words for the important stuff—is vital to learning just about anything. I’ve noticed that in the martial arts; there’s no way to learn a parry, entry, and takedown without a somatic vocabulary for the subparts of that, and the definitions of your “words” affect both the ease of learning and the effectiveness of its execution.
Also, wouldn’t it be better to call it a hash table or a lookup table rather than a compression algorithm? The key is swift and appropriate recall. Example: compare a long-time practicing theoretical physicist with a physics grad student. Both know most of basic quantum mechanics. But the experienced physicist would know when to whip out which equation in which situation. So the knowledge content is not necessarily compressed (I’m sure there is some compression) as much as the usability of the knowledge is much greater.
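To make the contrast concrete, here’s a toy Python sketch (my own illustration; the “physics notes” are invented stand-ins): compression shrinks the representation of what you know, while a lookup table speeds up recall of the right tool for a situation.

```python
import zlib

# Compression: same knowledge, shorter representation.
notes = b"time-independent Schrodinger equation: H psi = E psi; " * 20
compressed = zlib.compress(notes)
print(len(notes), "->", len(compressed))  # far fewer bytes, content unchanged

# Lookup table: the expert's edge modeled as situation -> tool, O(1) recall.
recall = {
    "hydrogen energy levels": "solve H psi = E psi in a Coulomb potential",
    "time evolution of a state": "apply U(t) = exp(-i H t / hbar)",
}
print(recall["time evolution of a state"])
```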
Interesting. So by somatic vocabulary, you basically mean composing long complicated moves from short, repeatable sub-moves?
Basically, yes. Much of the vocabulary has very long descriptions in English, but shorter ones in different arts’ parlance; some of it doesn’t really have short descriptions anywhere but in the movements of people who’ve mastered it. The Epistemic Viciousness problem makes it difficult, in general, to find and cleave at the joints.
thatguythere47, enunciating an important general principle.
Naturally not. Harry would only do something that reckless if it was to save a general of the Dark Lord on the whim of his mentor. ;)
I of course agree with thatguy, with substitution of ‘the most viable immediate’ in there somewhere. It is a solution to all sorts of things.
If Eliezer Yudkowsky, the author, is lauding this statement, I think we can rule this out as Harry’s solution.
As previously stated, Harry is not a perfect rationalist.
Neither is Eliezer Yudkowsky.
My philosophy is that it’s okay to be imperfect, but not so imperfect that other people notice.
I propose that it’s okay to be imperfect, but not so imperfect that reality notices.
Reality* notices everything.
*and Chuck Norris
No way! Chuck Norris and didn’t notice!
This is a cool-sounding slogan that doesn’t actually say anything beyond “Winning is good.”
No, it says that practical degrees of excellence are just fine and you don’t actually have to achieve philosophically perfect excellence to be sufficiently effective.
It’s the difference between not being able to solve an NP-complete problem perfectly, and being able to come up with pretty darn close numerical approximations that do the practical job just fine. (I think evolution achieves a lot of the latter, for example.)
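For instance (my example, not the commenter’s, and combinatorial rather than numerical, but the same spirit): exact minimum vertex cover is NP-complete, yet the classic greedy-matching heuristic gives a cover at most twice the optimal size, which is often plenty.

```python
# Greedy 2-approximation for minimum vertex cover: whenever an edge is
# uncovered, take both of its endpoints. The result is always a valid cover
# of at most twice the minimum size.

def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(approx_vertex_cover(edges))  # {1, 2, 3, 4}; the optimum {1, 4} has size 2
```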
I agree with your version, but “not getting caught” as a proxy for “good enough” is, at least to humans, not just wrong but actively misleading.
This variant of “when all you have is a hammer” is seen often enough to merit a name.
“When all you have is a powered-up Patronus, every problem looks like storming Azkaban is the answer”?
I meant something along the lines of “When your hammer is too darn impressive, everything begins to look like a nail.”
--Daniel Kahneman in Thinking, Fast and Slow
--Matt Yglesias
I found that very poignant, but I’m not sure I agree with his final claim. I think he’s committing the usual mistake of claiming impossible what seems hard.
Is it even hard? JFDI, or as we might say here, shut up and do the impossible. Is “efficient” a tendentious word? Taboo it. Is discussion being confused by mixing normative and positive concepts? DDTT.
The quote smells like rationalising to me.
Yeah, agreed. It’s entirely possible to describe a system of economic agents without using such value-laden terms (though in some cases we may have to make up new terms). We don’t do it, mostly because we don’t want to. Which IMHO is fine; there’s no particular reason why we should.
Yglesias seems to be committing an error here by confusing technical jargon with common English. Efficient has a very specific meaning in economics (well, two specific meanings, depending on what kind of market you’re talking about). The word efficient is not meant to refer to universal goodness, and it’s a mistake to treat it as if it were.
I know of three, although it is a matter of parametrization (weak, strong, semi-strong). What two meanings do you have in mind?
The three you mention are all subtypes of the same efficiency—informational efficiency. Informational efficiency is used in finance and refers to how well a financial market incorporates information into prices. Basically, a market is informationally efficient if you can’t out-predict it without using information it doesn’t have. The weak / semi-strong / strong distinction merely indicates how much information it is incorporating into prices: weak means it’s incorporating its own past prices, semi-strong includes all public information, and strong includes all information held in private as well.
The other type of efficiency is allocative efficiency, a concept used in microeconomics. An allocatively efficient market is one that assigns goods to the people who place the highest value on them (subject to the constraints of each person’s endowments). It is effectively a utility-maximising condition. The whole concept of market failure in economics is built around situations where markets are failing to be allocatively efficient.
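For concreteness, a minimal toy model of allocative efficiency in Python (my own construction; the names, goods, and valuations are made up, and I’m ignoring prices, endowments, and divisibility): each good goes to whoever values it most, which maximizes total realized value.

```python
# Toy allocative efficiency: assign each good to the person who places the
# highest value on it, maximizing the total realized value.

valuations = {  # person -> {good: value placed on it}
    "alice": {"wheat": 5, "iron": 2},
    "bob":   {"wheat": 3, "iron": 7},
}

def efficient_allocation(valuations):
    goods = {g for v in valuations.values() for g in v}
    return {g: max(valuations, key=lambda p: valuations[p].get(g, 0))
            for g in goods}

alloc = efficient_allocation(valuations)
print(alloc)                                            # wheat->alice, iron->bob
print(sum(valuations[p][g] for g, p in alloc.items()))  # total value: 12
```

Any other assignment here realizes strictly less total value, which is the sense in which a market failure is a failure to reach this allocation.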
The first thought that I have when considering how to describe the economy without using normative language is that all of the values that are commonly measured (i.e. GDP, unemployment, etc.) are chosen to be measured because they are proxies for things that people value.
In fact, the whole study of economics seems to me like the study of things people value and how they are distributed. If you choose proxies for value you’re having a profound effect on what gets measured (consider the recent discussions of statistical significance as a proxy for evidence) and if you try to list everything that everyone values you end up butting up against unsolved problems.
-- W. V. O. Quine
(1978). I expected this to be older.
Did anyone ever track down the catalogue in question?
(Did the university in question later offer degrees in alternative medicine?)
-Carl Sagan, The Demon Haunted World
Plutarch, found here
Steven Kaas
I think the original is instrumentally more useful. On hearing “the road to hell is paved with good intentions”, one of my reactions is “I have good intentions, I’d better make sure I’m not on the road to hell”. On hearing your version my first reaction is “whew, this doesn’t apply to me, only to those people with bad epistemology”.
Interesting, my immediate reaction is “oh, I guess I need to seriously work on my epistemology rather than work on having better intentions as such”.
I suspect that’s a standard reaction to hearing of any cognitive bias.
“Hah, this article nails those assholes perfectly!”
—some asshole
Or different values to the damner’s, which may or may not count as “bad intentions” depending on your semantic preferences.
Diana Wynne Jones, Dark Lord of Derkholm
Adolfo Bioy Casares (my translation)
-- Douglas Adams
~ SCA-自, Subarashiki Hibi ~Discontinuous Existence~ (personal translation)
Why don’t they just play tag with each other? Sounds like it would be fun.
Because they’re jerks.
Indeed. The kind of people who would go “Whee! Let’s play tag!” in this situation do not find themselves in Hell (at least in this particular one) in the first place.
Related to Schelling fences on slippery slopes:
— Thomas De Quincey
I don’t get this quote; it strikes me as wit with no substance.
Presumably the quote is from De Quincey’s essay “On Murder Considered as one of the Fine Arts”, and with that context & perspective in mind it has a tad more substance.
I have always read it as intentionally ironic commentary on the ‘slippery slope’ more than anything else.
I read it more specifically as a parody of moral slipperyslopism, in which slight moral infractions lead to the worst sort of behavior.
Arguably, we live in an era strongly shaped by revulsion at moral slipperyslopism.
Me too, honestly.
HULK EXPLAINS WHY WE SHOULD STOP IT WITH THE HERO JOURNEY SHIT
-David Deutsch, The Beginning of Infinity.
Of course there is. A proof of a mathematical proposition is just as much itself a mathematical object as the proposition being proved; it exists just as independently of physics. The proof as written down is a physical object standing in the same relation to the real proof as the digit 2 before your eyes here bears to the real number 2.
But perhaps in the context Deutsch isn’t making that confusion. What scope and limitations on mathematical knowledge, conditioned by the laws of nature, does he draw out from these considerations?
The Pythagorean theorem isn’t proved or even checked by measuring right triangles and noticing that a^2 + b^2 = c^2. Is the Pythagorean theorem not knowledge?
I don’t think Deutsch means that mathematical proofs are all inductive. I think he means that proofs are constructed and checked on physical computing devices like brains or GPGPUs, and that, because of that, mathematical knowledge is not in a different ontological category than empirical knowledge.
I feel quite confident saying that mathematics will never undergo paradigm shifts, to use the terminology of Kuhn.
The same is not true for empirical sciences. Paradigm shifts have happened, and I expect them to happen in the future.
I believe it already has. Consider the Weierstrass revolution. Before Weierstrass, it was commonly accepted that while a continuous function might lack a derivative at a set of discrete points, it still had to have a derivative somewhere. Then Weierstrass developed a counterexample, which I think satisfies the Kuhnian “anomaly that cannot be explained within the current paradigm.”
Another quick example: during the pre-War period, most differential geometry was concerned with embedded submanifolds in Euclidean space. However, this formulation made it difficult to describe or classify surfaces—I seem to believe, but don’t have time to verify, that even deciding whether two sets of algebraic equations determine isomorphic varieties is NP-hard. Hence the post-War shift to intrinsic properties and descriptions.
EDIT: I was wrong, or at least imprecise. Isomorphism of varieties can be decided with Gröbner bases, the reduction of which is still doubly exponential in time, as far as I can tell. Complexity classes aren’t in my domain; I shouldn’t have said anything about them without looking it up. :(
Reading the wiki page, it looks like Weierstrass corrected an error in the definition or understanding of limits. But mathematicians did not abandon the concept of limit the way physicists abandoned the concept of epicycle, so I’m not sure that qualifies as a paradigm shift. But I’m not mathematician, so my understanding may be seriously incomplete.
I can’t even address your other example due to my failure of mathematical understanding.
Hindsight bias. The old limit definition was not widely considered either incorrect or incomplete.
They abandoned reasoning about limits informally, which was de rigueur beforehand. For examples of this, see Weierstrass’ counterexample to the Dirichlet principle. Prior to Weierstrass, some people believed that the Dirichlet principle was true because approximate solutions exist in all natural examples, and therefore the limit of the approximate solutions would be a true solution.
Not true. The “old limit definition” was non-existent beyond the intuitive notion of limit, and people were fully aware that this was not a satisfactory situation.
We need to clarify what time period we’re talking about. I’m not aware of anyone in the generation of Newton/Leibniz and the second generation (e.g., Daniel Bernoulli and Euler) who felt that way, but it’s not as if I’ve read everything these people ever wrote.
The earliest criticism I’m aware of is Berkeley in 1734, but he wasn’t a mathematician. As for mathematicians, the earliest I’m aware of is Lagrange in 1797.
I’m also curious about this history.
That’s pretty clear, thanks. Obviously, experts aren’t likely to think there is a basic error before it has been identified, but I’m not in a position to have a reliable opinion on whether I’m suffering from hindsight bias.
Still, what fundamental object did mathematics abandon after Weierstrass’ counter-example? How is this different from the changes to the definition of set provoked by Russell’s paradox?
I don’t recall where it is said that such an object is necessary for a Kuhnian revolution to have occurred. There was a crisis, in the Kuhnian sense, when the old understanding of limit (perhaps labeling it as limit1 will be clearer) could not explain the existence of, e.g., continuous functions without derivatives anywhere, or counterexamples to the Dirichlet principle. Then Weierstrass developed limit2 with deltas and epsilons. Limit1 was then abandoned in favor of limit2.
Wikipedia gives the acceptance of non-Euclidean geometry as a “classical case” of a paradigm shift. I suspect that there were several other paradigm shifts involved from Euclid’s math to our math: for instance, coordinate geometry, or the use of number theory applied to abstract quantities as opposed to lengths of line segments.
Would the whole Russell’s paradox incident count as a mathematical paradigm shift?
Reading Wikipedia, it looks like a naive definition of a set turns out to be internally inconsistent. Does that mean the concept of set was abandoned by mathematicians the way epicycles have been abandoned by physicists? That’s not my sense, so I hesitate to say redefining set in a more coherent way is a paradigm shift. But I’m no mathematician.
It’s a matter of degree rather than an absolute line. However, I would say a time when even the very highest experts in a field believed something of great importance to their field with quite high confidence, and then turned out to be wrong, probably counts.
I don’t think “everyone in field X made an error” is that same thing as saying “Field X underwent a paradigm shift.”
Why not? That sounds like a massive shift in the core beliefs of the field in question. If that’s not a paradigm shift, then what is?
The “non-expressible in the new concept-space” thing that you think never actually happens.
This looks very like trying to define away something that sure felt like a paradigm shift to the people in the field. Remember that “paradigm” is a belief held by people, not a property inherent in the universe.
Perhaps this is a limitation of my understanding of Kuhn, in that I’m misusing his terminology. I am unaware of mathematics abandoning fundamental objects as inherently misguided the way physics abandoned epicycles or impetus. I expect physics will have similar abandonments in the future, but I expect mathematics never will. The difference is a property of the difference between mathematics and empirical facts. This comment makes the argument I’m trying to assert in slightly different form.
Isn’t that exactly what happened? The phrase “set of all sets that do not contain themselves” isn’t really expressible in Zermelo-Fraenkel set theory, since that has a more limited selection of ways to construct new sets and “the set of everything that satisfies property X” is not one of them.
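To spell out the contrast (standard set-theory material, nothing novel): naive comprehension says every property $\varphi$ determines a set,

\[ \exists y\,\forall x\,\bigl(x \in y \leftrightarrow \varphi(x)\bigr), \]

and taking $\varphi(x)$ to be $x \notin x$ gives Russell’s paradox. ZF’s Separation schema only carves subsets out of a set $z$ you already have,

\[ \exists y\,\forall x\,\bigl(x \in y \leftrightarrow (x \in z \wedge \varphi(x))\bigr), \]

so “the set of all sets that do not contain themselves” simply never gets built.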
I don’t think it’s terribly useful to frame the discussion in terms of concepts that never actually happen :-)
What would count as one?
As I understand it, a paradigm shift would include the abandonment of a concept. That is, the concept cannot be coherently expressed using the new terminology. For example, there’s no way to coherently express concepts like Ptolemy’s epicycles or Aristotle’s impetus. I think Kuhn would say that these examples are evidence that empirical science is socially mediated.
I’m not aware of any formerly prominent mathematical concepts that can’t even be articulated with modern concepts. Because mathematics is non-empirical and therefore non-social, I would be surprised if they existed.
A totally trivial nit pick, I admit, but there’s no such thing as the Aristotelian theory of impetus. The theory of impetus was an anti-Aristotelian theory developed in the middle ages. Aristotle has no real dynamical theory.
Thanks. Did not know that.
Thanks, I did not actually know that. But I should have known.
There are perfectly fine ways to express those things. Epicycles might even be useful in some cases, since they can be used as a simple approximation of what’s going on.
The reason people don’t use epicycles any more isn’t because they’re unthinkable, in the really strong “science is totally culture-dependent” sense. It’s because using them was dependent on whether we thought they reflected the structure of the universe, and now we don’t. Ptolemy’s claim behind using epicycles was that circles were awesome, so it was likely that the universe ran on circles. This is a fact that could be tested by looking at the complexity of describing the universe with circles vs. ellipses.
So this paradigm shift stuff doesn’t look very unique to me. It just looks like the refutation of an idea that happened to be central to using a model. Then you might say that math can have no paradigm shifts because it constructs no models of the world. But this isn’t quite true—there are models of the mathematical world that mathematicians construct that occasionally get shaken up.
My point was that trying to express epicycles in the new terminology is not possible. That is, modern physicists say, “Epicycles don’t exist.”
Obviously, it is possible to use sociological terminology to describe epicycles. You yourself said that they were useful at times. But that’s not the language of physics.
Since you mentioned it, I would endorse “Science is substantially culturally dependent”, NOT “Science is totally culturally dependent.” So culturally dependent that there is no reason to expect correspondence between any model and reality. Better science makes better predictions, but it’s not clear what a “better” model would be if there’s no correspondence with reality.
I brought all this up not to advocate for the cultural dependence of science. Rather, I think it would be surprising for a discipline independent of empirical facts to have paradigm shifts. Thus, the absence of paradigm shifts is a reason to think that mathematics is independent of empirical facts.
If you don’t think science is substantially culturally dependent, then there’s no reason my argument should persuade you that mathematics is independent of empirical facts.
This is false in an amusing way: expressing motion in terms of epicycles is mathematically equivalent to decomposing functions into Fourier series—a central concept in both physics and mathematics since the nineteenth century.
To be perfectly fair, AFAIK Ptolemy thought in terms of a finite (and small) number of epicycles, not an infinite series.
And so for the curves in question, the Fourier expansion would have only a finite number of terms.
The point being that, in contrast to what was being asserted, Ptolemy’s concept is subsumed within the modern one; the modern language is more general, capable of expressing not only Ptolemy’s thoughts, but also a heck of a lot more. In effect, modern mathematical physics uses epicycles even more than Ptolemy ever dreamed.
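You can even check the equivalence numerically. A minimal sketch (my own toy example using NumPy, not anything from the thread): the FFT of a closed orbit hands you the radius and rotation rate of each “epicycle”, and an ellipse turns out to be exactly two of them.

```python
import numpy as np

# Decompose a closed orbit z(t) into "epicycles": uniformly rotating circles
# c_k * exp(2*pi*i*k*t), i.e. the terms of its Fourier series.

N = 256
t = np.arange(N) / N                  # one period, normalized
a, b = 1.0, 0.6                       # semi-axes of an elliptical path
z = a * np.cos(2 * np.pi * t) + 1j * b * np.sin(2 * np.pi * t)

c = np.fft.fft(z) / N                 # c[k]: radius and phase of epicycle k
k = np.fft.fftfreq(N, d=1 / N)        # signed integer frequencies

for K in (1, 2, 4):                   # rebuild from the K largest circles
    idx = np.argsort(-np.abs(c))[:K]
    z_hat = sum(c[j] * np.exp(2j * np.pi * k[j] * t) for j in idx)
    print(K, "epicycles -> max error", float(np.abs(z - z_hat).max()))

# One circle leaves an error of 0.2; two reproduce the ellipse to machine
# precision, since a*cos(s) + i*b*sin(s) = ((a+b)/2)e^{is} + ((a-b)/2)e^{-is}.
```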
That’s a good point; I haven’t thought about that. Go epicycles! Epicycles to the limit!
ducks and runs away
But it is! You simply specify the position as a function of time and you’ve done it! The reason why that seems so strange isn’t because modern physics has erased our ability to add circles together, it’s because we no longer have epicycles as a fundamental object in our model of the world.
So if you want the Copernican revolution to be a paradigm shift, the idea needs to be extended a bit. I think the best way is to redefine paradigm shift as a change in the language that we describe the world in. If we used to model planets in terms of epicycles, and now we model them in terms of ellipses, that’s a change of language, even though ellipses can be expressed as sums of epicycles, and vice versa.
In fact, in every case of inexpressibility that we know of, it’s been because one of the ways of thinking about the world didn’t give correct predictions. We have yet to find two ways of thinking about the world that let you get different experimental results if you plan the experiment two different ways. In these cases, the paradigm shift included the falsification of a key claim.
I don’t think it’s necessarily true (for example, you can imagine an abstract game having a revolution in how people thought about what it was doing), but it seems reasonable for math, depending on how you define “math.” I think people are just giving you a hard time because you’re trying to make this general definitional argument (generally not worth the effort) on pretty shaky ground.
Thanks, that’s quite clear. Should I reference abandonment of fundamental objects as the major feature of a paradigm shift?
Yes, every successful paradigm shift. Proponents of failed paradigm shifts are usually called cranks. :)
My position is that the repeated pattern of false fundamental objects suggest that we should give up on the idea of fundamental objects, and simply try to make more accurate predictions without asserting anything else about the “accuracy” of our models.
How can you make accurate predictions while at the same time discarding the notion of accuracy?
I have no reason to expect that our models correspond to reality in any meaningful way, but I still think that useful predictions are possible.
Predictions about the world are only possible to the extent the world controls the predictions, to the extent considerations you use to come up with the predictions correspond to the state of the world. So it’s not possible to make useful predictions based on considerations that don’t correspond to reality, or conversely if you manage to make useful predictions, there must be something in your considerations that corresponds to the world. See Searching for Bayes-Structure.
Isn’t “makes accurate predictions” synonymous with “corresponds to reality in some way”? If there was absolutely no correspondence between your model and reality, you wouldn’t be able to judge how accurate your predictions were. In order to make such a judgement, you need to compare your predictions to the actual outcome. By doing so, you are establishing a correspondence between your model and reality.
I’m not seeing how the second sentence is an example of the criterion in your first sentence. That criterion seems too strict, too: in general the new paradigm subsumes the old (as in the canonical example of Newtonian vs relativistic physics).
I’m also not seeing what the attributes “empirical” and “non-social” have to do (causally) with the ability to form coherent concepts.
Maybe you should also unpack what you mean by “coherent”?
I’m not a mathematician, but from my outside perspective I would cheerfully qualify something like Wilf-Zeilberger theory as the math equivalent to a paradigm shift in the empirical sciences.
WP lists “non-euclidean geometry” as a paradigm shift, BTW.
Using modern physics, there is no way to express the concept that Ptolemy intended when he said epicycles. More casually, modern physicists would say “Epicycles don’t exist.” By contrast, the concept of set is still used in Cantor’s sense, even though his formulation contained a paradox. So I think the move from geocentric theory to heliocentric theory is a paradigm shift, but adjusting the definition of set is not.
I’m using the word science as synonymous with “empirical studies” (as opposed to making stuff up without looking). That’s not intended to be controversial in this community. What is controversial is the assertion that studying the history of science shows examples of paradigm shifts.
One possible explanation of this phenomenon is that science is socially mediated (i.e., affected by social factors when the effect is not justified by empirical facts).
I’m asserting that mathematics is not based on empirical facts. Therefore, one would expect that it could avoid being socially mediated by avoiding interacting with reality (that is, I think a sufficiently intelligent Cartesian skeptic could generate all of mathematics). IF I am correct that paradigm shifts are caused by the socially mediated aspects of the scientific discipline and IF mathematics can avoid being socially mediated by virtue of its non-empirical nature, then I would expect that no paradigm shifts would occur.
This whole reference to paradigm shifts is an attempt to show a justification for my belief that mathematics is non-empirical, contrary to the original quote. If you don’t believe in paradigm shifts (as Kuhn meant them, not as used by management gurus), then this is not a particularly persuasive argument.
If Wikipedia says that, I don’t think it is using the word the way Kuhn did.
For Kuhn, the word was, if anything, a sociological term—not something referring to the structure of reality itself. (Kuhn was not himself a postmodernist; he still believed in physical reality, as distinct from human constructs.) So it seems to me that it would be entirely consistent with his usage to talk about paradigm shifts in mathematics, since the same kind of sociological phenomena occur in the latter discipline (even if you believe that the nature of mathematical reality itself is different from that of physical reality).
As I’d mentioned elsewhere, there’s actually a pretty easy way to express that, IMO: “Ptolemy thought that planets move in epicycles, and he was wrong for the following reasons, but if we had poor instruments like he did, we might have made the same mistake”.
The abovementioned non-euclidean geometry is one such shift, as far as I understand (though I’m not a mathematician). I’m not sure what the difference is between the history of this concept, and what Kuhn meant.
But there were other, more powerful paradigm shifts in math, IMO. For example, the invention of (or discovery of, depending on your philosophy) zero (or, more specifically, a positional system for representing numbers). Irrational numbers. Imaginary numbers. Infinite sets. Calculus (contrast with Zeno’s Paradox). The list goes on.
I should also point out that many, if not all, of these discoveries (or “inventions”) either arose as a solution to a scientific problem (f.ex. Calculus), or were found to have a useful scientific application after the fact (f.ex. imaginary numbers). How can this be, if mathematics is entirely “non-empirical”?
Hmm, I’ll have to think about the derivation of zero, the irrational numbers, etc.
The motivation for derivation of mathematical facts is different from the ability to derive them. I don’t know why the Cartesian skeptic would want to invent calculus. I’m only saying it would be possible. It wouldn’t be possible if mathematics were not independent of empirical facts (because the Cartesian skeptic is isolated from all empirical facts except the skeptic’s own existence).
My point is that we humans are not ideal Cartesian skeptics. We live in a universe which, at the very least, appears to be largely independent of our minds (though of course our minds are parts of it). And in this universe, a vast majority of mathematical concepts have practical applications. Some were invented with applications in mind, while others were found to have such applications after their discovery. How could this be, if math is entirely non-empirical? That is, how do you explain the fact that math is so useful to science and engineering?
Hmm, “justified” generally has a social component, so I doubt that this definition is useful.
So this WP page doesn’t exist? ;)
My position, FWIW, is that all of science is socially mediated (as a consequence of being a human activity), mathematics no less than any other science. Whether a mathematical proposition will be assessed as true by mathematicians is a property ultimately based on physics—currently the physics of our brains.
I disagree, as, I suspect, you already know :-)
But I have a further disagreement with your last sentence:
What do you mean, “and therefore”? As I see it, “empirical” is the opposite of “social”. Gravity exists regardless of whether I like it or not, and regardless of how many passionate essays I write about Man’s inherent freedom to fly by will alone.
Yes, non-empirical is the wrong word. I mean to assert that mathematics is independent of empirical fact (and therefore non-social; a sufficiently intelligent Cartesian skeptic could derive all of mathematics in solitude).
Didn’t Gödel show that nobody can derive all of mathematics in solitude, because you can’t have a complete and consistent mathematical framework?
Goedel showed that no one can derive all of mathematics at all, whether in solitude or in a group, because any consistent system of axioms can’t lead to all the true statements from their domain.
Anyone know whether it’s proven that there are guaranteed to be non-self-referential truths which can’t be derived from a given axiom system? (I’m not sure whether “self-referential” can be well-defined.)
It is. At least, it’s possible to express Goedel statements in the form “there exist integers that satisfy this equation”.
It can’t.
I don’t know whether this is true or not; arguments could be (and have been) made that such a skeptic could not exist in a non-empirical void. But that’s a bit offtopic, as I still have a problem with your previous sentence:
Are you asserting that all things which are “dependent on empirical fact” are “social”? In this case, you must be using the word “social” in a different way than I am.
If we lived in a culture where belief in will-powered flight was the norm, and where everyone agreed that willing yourself to fly was really awesome and practically a moral imperative… then people would still plunge to their deaths upon stepping off of skyscraper roofs.
:) It is the case that the coherence of the idea of the Cartesian skeptic is basically what we are debating.
I’m specifically asserting that things that are independent of empirical facts are non-social.
I think that things that are subject to empirical fact are actually subject to social mediation, but that isn’t a consequence of my previous statement.
What does rejection of the assertion “If you think you can fly, then you can” have to do with the definition of socially mediated? I don’t think post-modern thinking is committed to the anti-physical realism position, even if it probably should endorse the anti-physical models position. The ability to make accurate predictions doesn’t require a model that corresponds with reality.
That might be a bit orthogonal to the discussion; I’m certainly willing to grant you the Cartesian skeptic for the duration of this thread :-)
If you are talking about pure reason, don’t the conclusions depend on your axioms? If so, the results may not be social, per se, but they’re certainly arbitrary. If you pick different axioms, you get different conclusions.
To me, these two sentences sound diametrically opposed to each other. If your model does not correspond to reality, how is it different from any other arbitrary social construct (such as the color of Harry Potter’s favorite scarf or whatever)? On the other hand, if your model makes specific predictions about reality, which are found to be true time and time again (f.ex., “if you step off this ledge, you’ll plummet to your splattery doom”), then how can you say that your model does not correspond to reality in any meaningful way?
The frequentist vs. Bayesian debate is a debate between competing mathematical paradigms. True mathematicians, however, shun statistics. They don’t like the statistical paradigm ;)
Gödel’s discovery ended a certain mathematical paradigm of wanting to construct a complete mathematics from the ground up.
I could imagine a future paradigm shift away from the ideal of mathematical proofs to more experimental math. Neural nets or quantum computers could give you answers to the mathematical questions you ask that might be better than the answers that axiom-and-proof-based math provides.
Except, in practice mathematics still works this way.
It had damn well better be checked that way, because it rests on the assumption of flat space, which may or may not be true. The derivation from the axioms is not checked by empirical data; the axioms themselves are. If you don’t check the axioms, you don’t have knowledge, you have pretty equations on paper, unconnected to any fact. Pythagoras is just as much empirical knowledge as Einstein; it’s just that the axioms are closer to being built-in to the human brain, so you get an illusion of Eternal Obviousness. Try explaining the flat-space axioms to squid beings from the planet Rigel, which as it happens has a gravity field twenty times that of Earth, and see how far you get. “There’s only one parallel line through a given point”, you say, and the squid explodes in scorn. “Of course there’s more than one! Here, I’ll draw them for you and you can see for yourself!”
I agree. Isn’t deriving propositions from axioms what mathematics is?
A mathematician might say so, yes. I’m a physicist; I’m not really interested in what can be derived from axioms unconnected to reality.
I am having trouble with this as a statement of historical fact. Isn’t that how they did it?
You could call it a paradigm shift that we today don’t like how they did it ;)
I’m not sure that’s how it was motivated historically. Note that Euclid’s proof (Edit: not Euler) doesn’t require measuring anything at all.
To use a different example, how would one go about measuring whether there are more real numbers than integers? The proof is pretty easy, but it doesn’t require any empirical facts as far as I can tell.
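For reference, the standard diagonal sketch (textbook material, not my invention): given any purported enumeration $f(1), f(2), f(3), \ldots$ of the reals in $(0,1)$, define

\[ d = 0.d_1 d_2 d_3 \ldots, \qquad d_n = \begin{cases} 5 & \text{if the } n\text{th digit of } f(n) \text{ is not } 5,\\ 4 & \text{otherwise.} \end{cases} \]

Then $d$ is a real in $(0,1)$ that differs from every $f(n)$ in its $n$th digit, so the enumeration misses it. No measurement of anything is involved.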
I think you mean Euclid’s proof, and he was working centuries after Pythagoras, who was himself working over a thousand years after the Babylonians, who discovered Pythagorean triples (the ones you notice by measuring).
To restate, I’m fine with saying that a proof of the Pythagorean Theorem exists that does not require measuring physical triangles, but I’m not comfortable with the statement that it cannot be proved by measuring physical triangles, which is what your original comment implied to me.
As discussed in the other subthread, I think that Deutsch’s intention was to argue that any instance of a proof, as an object, has to exist in reality somewhere, which is a very different claim.
It depends on what you mean by “proved”. The Pythagorean Theorem applies to all possible triangles (on a flat Euclidean plane), and the answer it gives you is infinitely precise. If you are measuring real triangles on Earth, however, the best you could do is get close to the answer, due to the uncertainty inherent in your instruments (among other factors). Still, you could very easily disprove a theorem that way, and you could also use your experimental results to zero in on the analytical solution much faster than if you were operating from pure reason alone.
It’s just the problem of induction.
Other subthread? Don’t see where anyone made that point. Moreover, I don’t think it is a good reading of the original quote.
That’s not fairly represented by saying “All actual proofs are on physical paper (or equivalent).”
I was thinking of this comment. If by “knowledge” he means “a piece of memory in reality,” then by definition there is no abstract knowledge, and no abstract proofs, because he limited himself to concrete knowledge.
That knowledge can describe concepts that we don’t think of as concrete (the Pythagorean Theorem doesn’t have a physical manifestation somewhere), but my knowledge of it does have a physical manifestation.
There are all kinds of quantitative ways in which there are more real numbers than integers. On the other hand a tiny minority of us regard Cantor’s argument (that I think you’re alluding to) as misleading and maybe false.
No, that’s not how you prove it, but you can check it pretty easily with right triangles. Similarly, if you believe that Pi == 3, you only need a large wheel and a piece of string to discover that you’re wrong. This won’t tell you the actual value of Pi, nor would it constitute a mathematical proof, but at least the experience would point you in the right direction.
If you find a right triangle with sides (2.9, 4, 5.15) rather than (3, 4, 5), are you ever entitled to reject the Pythagorean theorem? Don’t measurement error and the non-Euclidean nature of the actual universe completely explain your experience?
In short, it seems like you can’t empirically check the Pythagorean theorem.
That is not what I said. I said, regarding Pi == 3, “this won’t tell you the actual value of Pi, nor would it constitute a mathematical proof, but at least the experience would point you in the right direction”. If you believe that a^2 + b^2 = c^5, instead of c^2; and if your instruments are accurate down to 0.2 units, then you can discover very quickly that your formula is most probably wrong. You won’t know which answer is right (though you could make a very good guess, by taking more measurements), but you will have enough evidence to doubt your theorem.
The words “most probably” in the above sentence are very important. No amount of empirical measurements will constitute a 100% logically consistent mathematical proof. But if your goal is to figure out how the length of the hypotenuse relates to the lengths of the two sides, then you are not limited to total ignorance or total knowledge, with nothing in between. You can make educated guesses. Yes, you could also get there by pure reason alone, and sometimes that approach works best; but that doesn’t mean that you cannot, in principle, use empirical evidence to find the right path.
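To make that concrete, here is a minimal simulation sketch (my own illustration; the noise level and side lengths are made-up numbers): generate right triangles, perturb each measurement, and compare how badly the c^2 and c^5 formulas miss.

    import math
    import random

    random.seed(0)

    def avg_residual(exponent, noise=0.2, trials=1000):
        """Average |a^2 + b^2 - c^exponent| over noisy measurements of right triangles."""
        total = 0.0
        for _ in range(trials):
            a, b = random.uniform(1, 10), random.uniform(1, 10)
            c = math.hypot(a, b)  # the true hypotenuse
            # each measured length is off by up to `noise` units
            ma = a + random.uniform(-noise, noise)
            mb = b + random.uniform(-noise, noise)
            mc = c + random.uniform(-noise, noise)
            total += abs(ma**2 + mb**2 - mc**exponent)
        return total / trials

    print(avg_residual(2))  # small: residuals on the order of the noise
    print(avg_residual(5))  # enormous: the c^5 hypothesis is quickly doubted

The residuals don’t prove the theorem, but they make c^5 untenable, which is all the comment claims.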
Peer review. If the next two hundred scientists who measure your triangle get the same measurements from other rulers by different manufacturers, you’d be completely justified in rejecting the Pythagorean theorem.
My challenge to you: go out and see if you can find a right triangle with those measurements.
Sure, how about a triangle just outside a black hole.
That was a quick trip. Which black hole was it?
You’re completely justified in rejecting Euclid’s axioms. You’re not at all justified in rejecting the Pythagorean theorem.
Upvoted for your excellent demonstration of peer review ;) I stand corrected.
Natalie Wolchover
I saw on TV some kid lose convincingly against an RPS champion when the kid had been given a prepared (random) list of moves to make ahead of time. That can’t be explained by strategy—it was either coincidence, or it’s possible to cheat by seeing which way your opponent’s hand is unfolding and changing your move at the last moment.
The latter is definitely possible. Back when I was still playing RPS as a kid, I was fairly good at it; enough for somewhere upwards of 70% of my plays to be wins.
You don’t want to change your move at the last moment though so much as you want to keep your hand in a plausibly formless configuration you can turn into a move at the last moment. Less likely to be called out for cheating.
Or the losers were unintentionally signaling their moves beforehand.
Sam Hughes, talking about the first season finale of Doctor Who, differentiating between the subjective feeling of certainty and the actual probability estimate.
-Posted outside the mathematics reading room, Tromsø University
From the homepage of Kim C. Border
-Charlie Munger
I’m surprised by how consistently misinterpreted the EMH is, even by people with the widest possible perspective on markets and economics. The EMH practically requires that some people make money by trading, because that’s the mechanism which causes the market to become efficient. The EMH should really be understood to mean that as more and more money is leached out of the market by speculators, prices become better and better approximations to real net present values.
I’ve always thought of the Efficient Market Hypothesis as the anti-Tinkerbell: if everybody all starts clapping and believing in it, it dies.
See, for example, every bubble ever. “We don’t need to worry about buying that thing for more than it seems to be worth, because prices are going up so we can always resell it for even more than that later!”
If they actually believed the market they were trading in was efficient, they wouldn’t believe that prices would continue to go up. They would expect them to follow the value of capital invested at that level of risk. Further—as applicable to any bubble that doesn’t represent overinvestment in the entire stock market over all industries—they wouldn’t jump on a given stock or group of stocks more than any other. They would buy random stocks from the market, probably distributed as widely as possible.
No, belief in an efficient market can only be used as a scapegoat here, not as a credible cause.
That’s pretty much the thesis of Markets are Anti-Inductive by EY.
--Steve Sailer, here
For all that it’s fun to signal our horror at the ignorance/irrationality/stupidity of those in charge, I still think real-world 2012 Britain, USA, Canada and Australia are all better than Oceania circa 1984. For one thing, people are not very often written out of existence.
Or … are they?
At a certain point, conspiracy theories become indistinguishable from skeptical hypotheses.
The quote states that the current establishment has no idea what’s going on. How would they be competent enough in this state to band together, write people out of existence, then keep it a secret indefinitely?
The response was a joke.
Mustapha Mond evil?
Of course. He keeps the brave new world running. I don’t think there are many takers here for the idea that Brave New World depicts a society we should desire and work for.
Jay-Z, Forever Young
[Taking the lyrics literally, the whole thing is a pretty sweet transhumanist anthem.]
-Douglas Adams
-- Tim Minchin, Storm
That could just mean we’re no good at solving mysteries that involve magic.
Also, I think there is a selection effect: insofar as there are solved mysteries where the solution was magic, you’d probably argue that they were not solved correctly, using no other evidence than that the solutions involved magic.
It depends what you mean by magic. Nowadays we communicate by bouncing invisible light off the sky, which would sure as hell qualify as “magic” to someone six hundred years ago.
The issue is that “magic”, in the sense that I take Minchin to be using it, isn’t a solution at all. No matter what the explanation is, once you’ve actually got it, it’s not “magic” any more; it’s “electrons” or “distortion of spacetime” or “computers” or whatever, the distinction being that we have equations for all of those things.
Take the witch trials, for example—to the best of my extremely limited knowledge, most witch trials involved very poorly-defined ideas about what a witch was capable of or what the signs of a witch were. If they had known how the accused were supposed to be screwing with reality, they wouldn’t have called them “witches”, but “scientists” or “politicians” or “guys with swords”.
Admittedly all of those can have the same blank curiosity-stopping power as “magic” to some people, but “magic” almost always does. Which is why, once you’ve solved the mystery, it turns out to be Not Magic.
Consider something like this and notice that our modern “explanations” aren’t much better.
And because of those damned atheists we can’t even start a witch hunt to figure out who’s responsible!
Sure we can.
We just need to rephrase “witch” in scientific terms.
(Also, sorry about the political link, but with a topic like this that’s inevitable.)
UPDATE: This post goes into more details.
I think Tim Minchin was using “magic” the same way most people use “magic”—meaning ontologically basic mental things.
To be fair, I’ve never asked him. But he included homoeopathy, which its practitioners claim isn’t mental.
So he was using magic in the sense of “disagrees with current scientific theory”; in that case the initial quote is circular.
It’s possible, but when I first heard it I honestly thought he meant “fundamentally mysterious stuff”.
And wrong. E.g., the perihelion precession of Mercury turned out to be caused by all matter being able to warp space and time by its very existence. We like to call that Not Magic, but it’s magic in the sense of disagreeing with established scientific theory, and in the sense of being something that, if explained to someone who believed in Newtonian physics, would sound like magic.
I wouldn’t say it would sound like magic. It would sound weird and inexplicable, but magic doesn’t just sound inexplicable, it sounds like reality working in a mentalist, top-down sort of way. It sounds like associative thinking, believing that words or thoughts can act on reality directly, or things behaving in agentlike ways without any apparent mechanism for agency.
Relativity doesn’t sound magical; in fact, I’d even say that it sounds antimagical because it runs so counter to our basic intuitions. Quantum entanglement does sound somewhat magical, but it’s still well evidenced.
Interesting. I hadn’t thought about that. Now that I think about it, you’re right; most fictional magic does act on things that are fundamental concepts in people’s minds, rather than on things that are actually fundamental.
That said, I still say it all sounds like magic. I couldn’t tell you exactly what algorithm my brain uses to come up with “sounds like magic”, though.
I didn’t just have fictional magic in mind; concepts like sympathetic magic are widespread, maybe even universal in human culture. Humans seem to have strong innate intuitions about the working of magic.
-- Reg Braithwaite (raganwald)
.
Sounds like a counter to “Never interrupt your enemy when he is making a mistake.” (Attributed but seemingly falsely to Napoleon Bonaparte)
--Gregory Cochran, in a comment here
.
Yes but I didn’t at first want to post that because it is slightly political. Though I guess the rationality core does outweigh any mind-killing.
You have a Rationality Core, too?
.
This has 6 karma points, so I’m left curious about whether people have anything in mind about what real intellectuals shouldn’t know.
I could be interpreting it entirely wrong, but I’d guess this is the list Cochran had in mind:
•
Real intellectuals shouldn’t know the details of fictional worlds. They shouldn’t know the private business of their neighbors. They shouldn’t know more about sports than is necessary for casual conversation on the matter (though no less either). They shouldn’t know how to lie, how to manipulate people, they shouldn’t know much about how to make money, they shouldn’t know much about concrete political affairs unless that is their business. They shouldn’t know too much about food or the maintenance of their health.
Real intellectuals should be able to play an instrument, but not very well. They shouldn’t know too much about crimes, mental disorders, disasters, diseases, or wars. They should know the broad strokes of history, but not the details unless that is their primary business.
Real intellectuals should enjoy music, but never study it, unless that is their primary business. Most essentially, real intellectuals shouldn’t know what they don’t have the time or inclination to know well.
Is this meant to be funny?
Seemed serious and somewhat reasonable to me.
I’ll take what I can get.
Real intellectuals shouldn’t know things that science doesn’t know.
Then science would have nothing to learn from them.
Why? They could submit their tentative results to science, wait for verification, and only then become confident. In fact I think that’s the right way.
What about philosophy? Science doesn’t know about philosophy of science, yet a real intellectual should know about philosophy of science. Do you mean “science” in a really broad sense or “intellectual” in a really narrow sense?
I don’t understand your question yet. Can you give an example statement that philosophy of science knows but science doesn’t?
“A mature science, according to Kuhn, experiences alternating phases of normal science and revolutions. In normal science the key theories, instruments, values and metaphysical assumptions that comprise the disciplinary matrix are kept fixed, permitting the cumulative generation of puzzle-solutions, whereas in a scientific revolution the disciplinary matrix undergoes revision, in order to permit the solution of the more serious anomalous puzzles that disturbed the preceding period of normal science.”—SEP on Kuhn
?
This is an instance of “X said Y”. Science isn’t forbidden from knowing that X said Y, but such knowledge is mostly useless and I’m not sure why people should bother learning it. The only interesting question is which bits of Y stay true without the “X said”.
I suspect that Will meant that “A mature science experiences alternating phases of normal science and revolutions. In normal science the key theories, instruments, values and metaphysical assumptions that comprise the disciplinary matrix are kept fixed, permitting the cumulative generation of puzzle-solutions, whereas in a scientific revolution the disciplinary matrix undergoes revision, in order to permit the solution of the more serious anomalous puzzles that disturbed the preceding period of normal science.” is a statement of philosophy of science, and consequently (according to Will) something that science doesn’t know, and that the “according to Kuhn” part is irrelevant.
I suspect that your response is that, insofar as that statement is true and meaningful, science does know it.
If I’m wrong about either of those suspicions I’ll be very surprised and inclined to update strongly accordingly, but I’m not yet sure in what directions beyond sharply reduced confidence that I understand either of you.
Science doesn’t know everything that’s true. Make it “insofar as that statement is scientifically proven” :-)
Mm, yes.
And much as I ought to distrust myself for saying this after having previously said I’d be very surprised and significantly update if I was wrong: “well, yes, that’s what I meant.”
I am chagrined.
I think the trouble here is that ‘science’ is a somewhat loosely held together institution of journals, technical practices, university departments, labs, etc. It doesn’t ‘know’ anything, any more than it speculates, opines, believes, doubts, or worries. People know things, often (perhaps entirely) by engaging with other people.
(I thought User:cousin_it was making a descriptive statement about what academia thinks intellectuals should know, ’cuz as a normative statement it’s obviously wrong.)
I interpret the quote as saying that to be a “good intellectual” one needs to not know the problems with the positions “good intellectuals” are expected to defend.
My immediate thought was that a ‘real intellectual’ shouldn’t fill their brain with random useless information (e.g., spend their time reading tvtropes).
Lynne Murray
Reminds me of a Bateson quote.
-- nostrademons on Hacker news
That’s a good quote! +1.
Unfortunately, for every rational action, there appears to be an equal and opposite irrational one: did you see bhousel’s response?
Sigh.
The Princess Bride:
Man in Black: Inhale this, but do not touch.
Vizzini: [sniffs] I smell nothing.
Man in Black: What you do not smell is called iocane powder. It is odorless, tasteless, dissolves instantly in liquid, and is among the more deadly poisons known to man.
[He puts the goblets behind his back and puts the poison into one of the goblets, then sets them down in front of him]
Man in Black: All right. Where is the poison? The battle of wits has begun. It ends when you decide and we both drink, and find out who is right… and who is dead.
[Vizzini stalls, then eventually chooses the glass in front of the man in black. They both drink, and Vizzini dies.]
Buttercup: And to think, all that time it was your cup that was poisoned.
Man in Black: They were both poisoned. I spent the last few years building up an immunity to iocane powder.
Vizzini of the Princess Bride, on the dangers of reasoning in absolutes—both logically (“this is proof it’s not in my goblet”) and propositionally (the implicit assumption Vizzini has that one and only one wine goblet is poisoned—P or ~P, as it were)
I don’t agree that Vizzini is trying to reason in logical absolutes. He talks like he is, but he doesn’t necessarily believe the things he’s saying.
Man in Black: You’re trying to trick me into giving away something. It won’t work.
Vizzini: It has worked! You’ve given everything away! I know where the poison is!
My interpretation is that he really is trying to trick the man.
Later he distracts the man and swaps the glasses around; then he pretends to choose his own glass. He makes sure the man drinks first. I think he’s reasoning/hoping that the man would not deliberately drink from the poisoned cup. So when the man does drink he believes his chosen cup is safe. If the man had been unwilling to drink, Vizzini would have assumed that he now held the poisoned glass, and perhaps resorted to treachery.
He’s overconfident, but he’s not a complete fool.
(I don’t have strong confidence in this analysis, because he’s a minor character in a movie.)
That the Man in Black describes it as a battle of wits—and not a puzzle—agrees with you.
Well, yes, he only pretends to reason in logical absolutes…
… which was why I wrote “and propositionally”—because he does actually reason in propositional absolutes. I agree with your analysis but note that it is only a good strategy if it’s true that one and only one cup contains poison (or the equivalent, that one and only one cup will kill the Man in Black).
On re-reading I may have lost that subtlety in the clumsy (parenthetical-filled) expression of the final line.
Said by a pub manager I know to someone who came into his pub selling lucky white heather:
“I’m running a business turning over half a million pounds a year, and you’re selling lucky heather door to door. Doesn’t seem to work, does it?”
Albert Jay Nock, The Theory of Education in the United States
On the mind projection fallacy:
-John Stuart Mill
Every subjective feeling IS at least one thing—a bunch of neurons firing. Whether the stored representational content activated in that firing has any connection to the events it represents outside the brain is another question.
Found here.
.
-David Wong
Why did this quote get down-voted by at least two people? I thought it was much, much better than the other quote I posted this month, which is currently sitting pretty at 32 karma despite not adding anything we didn’t already know from the Human’s Guide to Words sequence.
Although not directly contradictory, the idea expressed in the quote is somewhat at odds with libertarianism, which is popular on LW.
Is this true? I mean, isn’t it universally recognized as a mind-killer, just like most other political philosophies?
Are there any demographic studies of LW’s composition in personspace?
The closest things we have to those are probably the mid-2009 and late 2011 surveys. People could fill in their age, gender, race, profession, a few other things, and...politics!
The politics question had some default categories people could choose: libertarian, liberal, socialist, conservative & Communist. In 2009, 45% ticked the libertarian box, and in 2011, 32% (among the people who gave easy-to-categorize answers). Although those obviously aren’t majorities, libertarianism is relatively popular here.
Political philosophies are like philosophies in general, I think. However mind-killy they are, a person can’t really avoid having one; if they believe they don’t have one, they usually have one they just don’t know about.
Well, it’s true and it’s false.
It’s popular “on” LW in the sense that many of the people here identify as libertarians.
It’s not popular “on” LW, in the sense that discussions of libertarianism are mostly unwelcome.
And, yes, the same is true of many other political philosophies.
I upvoted it. The main point is sound as a point of plain logic. However I suspect it isn’t quite clear enough and so prone to pattern matching to various political ideologies.
IDK, but I suspect it has to do with including taxes as a way to “ask for help”, which is dangerously close to doublespeak. To some ears, this sounds like you are saying rape is a form of asking for sex.
Can one say “I’ve never gotten that form of help?” And does “I think that help will hurt you in the long run” fall under “I think you’re lying about needing help”?
At best, it would fall under “you are mistaken about needing help”.
.
---Tim Ingold, “Clearing the Ground”
-William Hazlitt, attacking phrenology.
This quote is itself an example of the phenomenon it describes since it stems from a desire to be able to separate true from false science without the hard and messy process of looking at the territory.
Also hindsight bias.
I don’t see that in the quote—it seems to be an attempted explanation for the existence of pseudoscience, not a heuristic for identifying such.
The problem is that it’s still false. A lot of false science was developed by people honestly trying to find true causes. I also suspect that a good deal of actual science was developed by people who accepted a cause without enough evidence out of a desire to have a cause for everything and got lucky.
-- Evan V Symon, Cracked.com http://www.cracked.com/article_19669_the-5-saddest-attempts-to-take-over-country.html
Not completely serious, but think of it in relation to the sanity waterline...
Winston Churchill
Incidentally, you need a double-newline to break the quote bar.
Thank you, I’ve rewritten it now.
Friedrich Nietzsche
I don’t think that is a good description of what people mean by “faith”.
For a better idea of the concept of faith start here.
It’s not what people intend “faith” to mean, but nevertheless it often ends up being its effective definition. (EDIT: To clarify, by “it” I am referring to Nietzsche’s definition.)
Except that faith has little to nothing to do with social obligations. Faith is believing something without proof or even reason to believe it.
Unless you mean “faith” as in being “faithful” to your spouse, in which case, that’s not even the same thing as what Nietzsche is talking about.
Except for, well, being one in most social circumstances and for certain beliefs.
Let me restate: social obligations are not at the core of what faith is. One could believe something without proof if she were alone in the universe. Faith certainly can be a social obligation, and depending upon what it is faith in, could easily necessitate social obligations, but the general idea of “believing in something without evidence” can be done by one person alone, and social obligations are by no means part of that definition.
Agree with this restatement.
The problem is that Nietzsche was confused about what religious people mean by “faith”; as a result, his argument is essentially a straw man.
What religious people mean by “faith” and what faith actually is do not have to be the same thing.
Also, Nietzsche was definitely not confused about what religious people mean by faith. You’re just confused because that quote isn’t a statement about what faith is, but rather, a statement about the psychology of the faithful.
As for the psychology of faith, to use your example of being faithful to your spouse: you want your spouse not to cheat on you. Thus this is a Prisoner’s Dilemma, or at least a stag hunt; faith amounts to the Timeless Decision Theory solution, which requires the belief that your spouse won’t cheat on you if you don’t cheat on her. Because there is no direct causal relationship between these two events, it sounds a lot like believing without proof, especially if one doesn’t know enough game theory to understand acausal relationships.
You seem to be missing the point. “Faith” in terms of religious belief is not the same thing as being “faithful” to your spouse.
You’re equivocating. Also, that’s not a Prisoner’s Dilemma. A Prisoner’s Dilemma allows no precommitments (you don’t expect to get arrested; neither does your partner), and no communication with your partner once the game starts. It’s clear that neither of those requirements is true when considering fidelity to one’s partner. Relationships are not Prisoner’s Dilemma situations. It takes an extreme stretch of the situation, and a skewed placement of values for BOTH players, for it to resemble one. If both players can gain more utility from being unfaithful, why not implement an open relationship? If the utility from being unfaithful is high enough (higher than the utility of the relationship itself), why continue the relationship?
Loyalty to one’s partner differs in many many many ways from religious faith.
No, this has been standard usage since at least as far back as the High Middle Ages.
That has to be the worst citation in support of an argument I’ve ever seen. “Standard usage”...is number 6 on a list of different models of faith in philosophical terms? Right. That’s clearly what most people mean when they talk about faith.
Also, trusting someone else is the opposite of fidelity to that person, not the same thing.
Regardless, the definition Nietzsche is using is obviously not referring to a trust-based model.
Let me be the first to welcome you, since it appears this is your first day on the Internet.
I wasn’t aware of the context in which your back-and-forth with Eugine_Nier was taking place, since I only started reading at this comment when it was in the recent comments feed. My bad. I assumed you thought he was using “faith” in an idiosyncratic way, rather than in a way that has been part of theology for almost a millennium. After reading a few comments up I can see that you were referring to a particular quote by Nietzsche (one in which he probably did not mean to refer to the concept of faith as trust).
Obviously, “trusting someone” is not the same as “fidelity to that person”. I never claimed otherwise. On the other hand, opposite is way too strong a word for this. Moreover, Eugine_Nier’s comment never made such an equivalence claim. He said that “faith amounts” to the “belief that your spouse won’t cheat on you”. This sounds very much like the concept of faith as trust (and not its opposite).
We are in full agreement on this point.
It is a usage of the same original word that has clearly diverged such that to substitute the intended meaning across contexts is most decidedly equivocation. “Faith” as in a kind of belief is not the same meaning as “faithful” as in not fucking other people. This should be obvious. The origin of the (nearly euphemistic) usage of the term is beside the point.
What evidence, if it existed, would cause you to change your mind?
“Could be” is to “is” as “ought” (or faith) is to “must”? Strikes me as a very nuanced term with diverse associations across brains; interesting analogy.
-- Niels Henrik Abel, on how he developed his mathematical ability.
--George Orwell, here
Since I have just read that “the intelligentsia” is usually now used to refer to artists etc. and doesn’t often include scientists, this isn’t as bad as I first thought; but still, it seems pretty silly to me—trying to appear deep by turning our expectations on their head. A common trick, and sometimes it can be used to make a good point… but what’s the point being made here? Ordinary people are more rational than those engaged in intellectual pursuits? I doubt that, though rationality is in short supply in either category; but in any case, we know the “ordinary man” is extremely foolish in his beliefs.
Folk wisdom and common sense are a favored refuge of those who like to mock those foolish, Godless int’lectual types, and that’s what this reminds me of; you know, the entirely too-common trope of the supposedly intelligent scientist or other educated person being shown up by the homespun wisdom and plain sense of Joe Ordinary. (Not to accuse Orwell of being anti-intellectual in general—I just don’t like this particular quote.)
This quote isn’t just about seeming deep, it refers to a frequently observed phenomenon. I think two main reasons for it are that intellectuals are better at rationalizing beliefs they arrived at for non-smart reasons (there is even a theory that some intellectuals signal their intelligence by rationalizing absurd beliefs) and the fact that they’re frequently in ivory towers where day to day reality is less available.
Depends on which type of anti-intellectualism you’re referring to.
I remember Tetlock’s Expert Political Judgment suggested a different mechanism for intelligence to be self-defeating: clever arguing. In a forecaster’s field of expertise, they have more material with which to justify unreasonable positions and refute reasonable ones, and therefore they are more able to resist the force of reality.
-Douglas Adams
--George F. Stigler, “Economics or Ethics?”
--- pseudonym
Thomas Henry Huxley—about Darwin’s theory of evolution
Meh. That’s just hindsight bias.
Galileo Galilei (translated by me)
With the great historical exception of quantum mechanics.
I suspect this is because we’re still missing major parts of quantum mechanics.
Richard Feynman’s famous quote is accurate. Before I studied physics in college I was pretty sure that I still had a lot to learn about quantum mechanics. After studying it for several years, I now have a high level of confidence that I know almost nothing about quantum mechanics.
Try reading this.
In fact, most people don’t understand relativity. Most still reject evolution. It wasn’t easy to understand the Copernican system in Galileo’s time.
When a new major breakthrough is made, it is easy to understand for only a handful, and it seems obvious to even fewer. Galileo was wrong. It may be easier, but not “easy to understand once a truth is revealed”.
I suppose people didn’t understand it because they didn’t want to, not because they couldn’t manage to. (Same with evolution—what the OP was about. I might agree about relativity, though I guess for some people at least the absolute denial macro does play some part.)
More like stuff that was true back then is no longer true now.
I suppose not. Why? People either have an inborn concept of the absolute up-down direction, or they develop it early in life. Updating to a round (let alone moving and rotating) Earth is not that easy and trivial for the naive mind of a child or for a Medieval man.
A new truth is usually hard for everybody to understand. Had it not been so, science would progress faster.
I don’t see how that contradicts my claim that it’s not that people couldn’t understand the meaning of the statement “the Earth revolves around the Sun”, but rather they disagreed with it because it was at odds with what they thought of the world. iħ∂|Ψ⟩/∂t = Ĥ|Ψ⟩, now that’s a statement most people won’t even understand enough to tell whether they think it’s true or false.
Historical? I know you count many worlds as “understanding”, but I wouldn’t until this puzzle is figured out. (Or maybe it’s that I like Feynman’s (in)famous quote so much I want to keep on using it, even if this means using a narrower meaning for understand.)
I certainly hope that EY means that the problem of the origins of the Born rule is still open, not that the MWI has somehow solved it.
IIRC he said something to the effect that it is no longer true that nobody understands QM since we have the MWI; my point is that I wouldn’t count MWI as ‘understanding’ if the very rule connecting it to (probabilities of) experimental results is still not understood.
Not sure which part of QM you’re referring to, but arguably QM hasn’t really been “found out” yet, so we shouldn’t be surprised that it’s not easy to understand. I mean seriously, what the hell are complex numbers doing in the Dirac equation?
It’s hard to engage with someone whose readiness to opine so vastly exceeds their readiness to meaningfully opine.
Eh?
QM is a solid theory that reliably predicts every known experiment dependent on it.
And it seems you need to brush up on your arithmetic theory. There is a progression in the usual number systems: Naturals (w/ or w/o 0), Integers, Rationals, Reals, Complex, Quaternions, Octonions, Sedenions, etc.
Naturals have a starting point, countability, no negatives, no inverse elements and no algebraic closure.
Integers sacrifice a starting point to gain negative elements.
Rationals sacrifice finiteness of subsets to gain inverse elements.
Reals sacrifice uniqueness of representation to gain uncountability.
Complex numbers sacrifice absolute order to gain algebraic closure.
Then it gets a bit hazy in memory, but I know Quaternions sacrifice commutativity of multiplication and Octonions aren’t associative, but I can’t remember what neat tricks you gain there. The Sedenions have zero divisors, but I can’t remember what they lose.
Now the point is that complex numbers are the most interesting because they have algebraic closure; you cannot construct an equation out of multiplication and addition (or almost any other operation) whose solution isn’t a Complex number. Not so with the Reals (sqrt(−1)). Thus it is completely logical for physics to run on Complex numbers rather than Reals.
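For the record, the closure property being invoked is the fundamental theorem of algebra; stated compactly (my addition, not from the comment):

    Every polynomial $p(z) = a_n z^n + \cdots + a_1 z + a_0$ with coefficients
    $a_i \in \mathbb{C}$, $n \ge 1$, $a_n \neq 0$, has a root $z_0 \in \mathbb{C}$
    with $p(z_0) = 0$. By contrast, $x^2 + 1 = 0$ already has no root in
    $\mathbb{R}$, which is the $\sqrt{-1}$ example above.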
Not what I would have said. Instead, I think it would be better to say that the reals sacrifice countability in order to gain completeness.
(“Uniqueness of representation” isn’t a big deal at all. In fact, it doesn’t even hold for the natural numbers, which is why there is such a thing as “arithmetic”.)
Well, the simplest way to represent a real number is with an infinite decimal expansion. Every rational number with a finite digit expansion has two infinite digit expansions. Every Natural has one corresponding string of digits in any positional number system with a natural base.
I think at the time of writing I considered ‘completeness’ to be ill defined, since the real numbers don’t have algebraic closure under the exponential operator with negative base and fractional exponent, while with ordinary arithmetic it is impossible to shoot outside of the Complex numbers.
(EDIT: I can’t arithmetic field theory today.) The best I can come up with is uniqueness of representation, since it implies infinite representations and thus loss of countability. (Insofar as I remember my ZFC sets correctly, the set of all infinite strings over a finite alphabet is uncountable and can be put in bijection with at least the interval [0,1] of the reals.)
EDITED to fix elementary error.
No. Every terminating number has two infinite decimal expansions, one ending with all zeros, the other with all nines.
1/3, for instance, is only representable as 0.333…, while 1/8 is representable as 0.124999… and 0.125.
Oh right, thanks for catching that.
No, they don’t; that’s precisely the point. There are Cauchy sequences of rational numbers which don’t converge to any rational number. For an example, simply take the sequence whose nth term is the decimal expansion of pi (or your favorite irrational number) carried out to n digits.
Noted and corrected.
As I’ve pointed out to you before, if you have a problem with physical applications of complex numbers, you should be equally offended by physical applications of matrices, because matrices of the form [[a,-b],[b,a]] are isomorphic to complex numbers. In fact, your problem isn’t just with quantum mechanics; if you can’t stand complex numbers, you should also have a problem with (for just one example) simple harmonic motion.
In detail: we model a mass attached to a spring with the equation F=-kx: the force F on the mass is proportional to a constant -k times the displacement from the equilibrium position x. But because force is mass times acceleration, and acceleration is the second time derivative of position, this is actually the differential equation x″(t) + (k/m)x(t) = 0, which has the solution x(t) = ae^(i*sqrt(k/m)t) + be^(-i*sqrt(k/m)t) where a and b are arbitrary constants.
It’s true that people tend to write this as c*cos(sqrt(k/m)t) + d*sin(sqrt(k/m)t), but the fact that we use a notation that makes the complex numbers less visible doesn’t change the underlying math. Trig functions are sums of complex exponentials.
Complex numbers are perfectly well-behaved, non-mysterious mathematical entities (consider also MagnetoHydroDynamics’s point about algebraic closure); why shouldn’t they appear in the Dirac equation?
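A quick runnable check of that isomorphism claim (my own sketch, assuming nothing beyond numpy): represent a + bi as the matrix [[a, -b], [b, a]] and confirm that matrix multiplication agrees with complex multiplication.

    import numpy as np

    def as_matrix(z: complex) -> np.ndarray:
        """Represent a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
        a, b = z.real, z.imag
        return np.array([[a, -b], [b, a]])

    z, w = 3 + 4j, 1 - 2j
    product_of_matrices = as_matrix(z) @ as_matrix(w)  # multiply the representatives
    matrix_of_product = as_matrix(z * w)               # represent the complex product
    assert np.allclose(product_of_matrices, matrix_of_product)
    # Addition is preserved too, so the map is a ring isomorphism onto its image:
    assert np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w))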
I would say instead that many truths are easy to understand once you understand them. But still hard to explain to other people.
So that I can google for it—what’s the original text? Thanks!
The version I’ve read is “Tutte le verità sono facili da capire quando sono rivelate, il difficile è scoprirle!” But that sounds like suspiciously modern Italian to me, so I wouldn’t be surprised to find out that it’s itself a paraphrase.
ETA: Apparently it was quoted in Criminal Minds, season 6, episode 11, and I suspect the Italian dubbing backtranslated the English version of the show rather than looking for the original wording by Galileo. (Which would make my version above a third-level translation.)
ETA2: In the original version of Criminal Minds, it’s “All truths are easy to understand once they are discovered; the point is to discover them”, according to Wikiquote. (How the hell did point become difficile? And why were the two instances of discover translated with different verbs? That’s why I always watch shows and films in the original language!)
ETA3: And Wikiquote attributes that as “As quoted in Angels in the workplace : stories and inspirations for creating a new world of work (1999) by Melissa Giovagnoli”.
Edited Wikiquote—thanks!
Generally, yes. But in this particular case we can trust that the man later known as Darwin’s bulldog really felt that way and that this was a justified statement. He obviously understood the matter well.
All those English animal breeders had a good insight. It was more or less a wild generalization for them. Not so wild for Huxley.
-Douglas Adams
― Cory Doctorow, For The Win
I interpret this to mean that oftentimes questions are overlooked because the possibility of them being true seems absurd. Similar to the Sherlock Holmes saying, “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”
When you’ve eliminated the impossible, if whatever’s left is sufficiently improbable, you probably haven’t considered a wide enough space of candidate possibilities.
Seems fair. The Holmes saying seems a bit funny to me now that I think about it, because the probability of an unlikely event becomes larger once you’ve shown that reality is constrained away from the alternatives. I mean, I guess that’s what he’s trying to convey in his own way. But, by the definition of probability, the likelihood of the improbable event increases as constraints appear preventing the other possibilities. You’re going from P(A) to P(A|B) to P(A|(B&C)) to… etc. You shouldn’t be simultaneously aware that an event is improbable and seeing that no other alternative is true, unless you’re being informed of the probability, given the constraints, by someone else, which means that yes, they appear to be considering more candidate possibilities (or their estimate was incorrect. Or something I haven’t thought of...).
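A toy sketch of that P(A) → P(A|B) → P(A|(B&C)) chain (hypothetical numbers, my own illustration): rule alternatives out and renormalize, and the “improbable” survivor stops being improbable.

    # Made-up prior over three mutually exclusive hypotheses.
    beliefs = {"A": 0.02, "B": 0.49, "C": 0.49}

    def eliminate(beliefs, ruled_out):
        """Condition on evidence that rules one hypothesis out, then renormalize."""
        kept = {h: p for h, p in beliefs.items() if h != ruled_out}
        total = sum(kept.values())
        return {h: p / total for h, p in kept.items()}

    after_b = eliminate(beliefs, "B")    # P(A | not-B)        ~= 0.039
    after_bc = eliminate(after_b, "C")   # P(A | not-B, not-C) == 1.0
    print(after_b, after_bc)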
Maybe he meant how a priori improbable it is?
That sounds right.
I interpret it to mean that Cory Doctorow doesn’t fully consider the implications of hindsight bias when it comes to predicting the merits of asking questions from a given class.
Usually asking stupid questions really is just stupid.
Hrm. Okay, I see your point, I think. I think there’s some benefit in devoting a small portion of your efforts to pursuing outlying hypotheses. Probably proportional to the chance of them being true, I guess, depending on how divisible the resources are. If by “stupid”, Doctorow means “basic”, he might be talking about overlooked issues everyone assumed had already been addressed. But I guess probabilistically that’s the same thing—it’s unlikely after a certain amount of effort that basic issues haven’t been addressed, so it’s an outlying hypothesis, and should again get approximately as much attention as its likelihood of being true, depending on resources and how neatly they can be divided up. And maybe let the unlikely things bubble up in importance if the previously-thought-more-likely things shrink due to apparently conflicting evidence… A glaring example to me seems to be the Abrahamic god’s nonexplanatory abilities going unquestioned for as long as they did. Like, treating god as a box to throw unexplained things in and then hiding god behind “mysteriousness” raises the question of why there’s a god clouded in mysteriousness hanging around.
But the expected return on asking a stupid question is still positive.
No, not with even the slightest semblance of opportunity cost being taken into account.
I’d say there are probably cases where people have gotten hurt by not asking “stupid” questions.
Also, I think we need to dissolve what exactly a stupid question is?
Almost certainly. I am also fairly confident that there is someone who has been hurt because he did look before crossing the road.
But does the negative utility from the situation “find out, get hurt from it” outweigh that of “don’t find out, get hurt from it”?
Isn’t the heuristic More Knowledge ⇒ Better Decisions quite powerful?
Get to the stupid questions after all the sensible questions have been exhausted if, for some reason, the expected utility of the next least stupid question is still positive.
I think we need to find out what we mean by stupid and sensible questions.
Of course one should, in any given situation, perform the experiments (ask the questions) that give the highest expected information yield (largest number of bits), i.e. ask if it is a vertebrate before you ask if it is a dog; see the toy computation below. What I think we disagree upon is the nature of a stupid question.
And now, it seems I cannot come up with a good definition of a stupid question as anything I previously would refer to as a “stupid question” can be equally reduced to humility.
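Here is a toy computation of that “bits per question” idea (made-up prior, my own sketch): the broad question splits the probability mass more evenly, so its answer carries more expected information.

    import math

    # Made-up prior over what the unknown creature is.
    prior = {"dog": 0.1, "cat": 0.1, "sparrow": 0.3, "beetle": 0.5}

    def expected_bits(question):
        """Entropy of the yes/no answer, i.e. the expected information gained."""
        p_yes = sum(p for h, p in prior.items() if question(h))
        return -sum(p * math.log2(p) for p in (p_yes, 1 - p_yes) if p > 0)

    is_vertebrate = lambda h: h in {"dog", "cat", "sparrow"}
    is_dog = lambda h: h == "dog"

    print(expected_bits(is_vertebrate))  # 1.0 bit: a 50/50 split
    print(expected_bits(is_dog))         # ~0.47 bits: nearly always answered "no"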
Asking stupid questions costs status.
From a slightly different perspective we could say that asking ‘silly’ questions (even good silly questions) costs status while asking stupid questions can potentially gain status in those cases where the people who hear you ask are themselves stupid (or otherwise incentivised to appreciate a given stupid gesture).
And this sort of thing is why some of us think all this ‘status’ talk is harmful.
It doesn’t go away if you stop talking about it.
Personally, I think Robin Hanson tends to treat status as a hammer that turns all issues into nails; it’s certainly possible to overuse a perspective for analyzing social interaction. But that doesn’t mean that there aren’t cases where you can only get a meaningful picture of social actions by taking it into consideration.
Nowadays, I can ask a question of the entire WEIRD world without losing any status. There are still some that just aren’t worth wasting my time on. For example: Is the moon actually a moose?
No, but worrying about status can keep you from getting answers to your ‘stupid’ questions.
This is partly why nerds have largely internalized the “there are no stupid questions” rule. See Obvious Answers to Simple Questions by isaacs of npm fame.
-- Hans Moravec Time Travel and Computing
Extremely cool in an armchair-physicist sort of way, but what’s the rationality?
Fair point—I actually wasn’t 100% convinced myself it fits here… Reason for posting it anyway was that (a) it somehow reminded me of the Omega/two-boxes problem (i.e., the paradoxical way in which present and past seem to influence each other), (b) Hans Moravec’s work touches on so many of the AI/transhumanist themes common on LW, and (c) I found it such a clever observation that I thought people here would appreciate it.
Not sure if that’s enough reason, but that’s how it went.
I guess ‘it all adds up to normality’, but that’s a stretch.
There might be other equilibria in which the past and future adjust to form a new symmetry. So you kill your grandfather, say, but you’re no longer related to him. Oh yeah… rationality. Haven’t a clue :/. It’s a nice quote.
Not quite, since you need time travel to establish the final timeline.
Friedrich Nietzsche, foreseeing the CEV-problem? (Just kidding, of course)
If a sufficient number of people who wanted to stop war really did gather together, they would first of all begin by making war upon those who disagreed with them. And it is still more certain that they would make war on people who also want to stop wars but in another way. -G.I. Gurdjieff
Great quote, but I think I would just go ahead and make trade embargoes on anyone who started a war… and anyone who didn’t also embargo anyone who, etc.
Not saying it would work (getting enough people to agree just wouldn’t happen) but not everyone who wants to stop war is stupid.
I don’t think the idea is that anyone who wants to stop war is stupid… it’s that anyone who thinks war is necessary clearly does not see that a diversity of viewpoints exists, and that others’ viewpoints are just as valid as theirs (as hard as that may be to understand) and deserve respect.
In most cases where unnecessary violence has occurred, the suppression of individual freedom and the loss of or harm to human life has always been justified in an effort to end the conflict between one viewpoint and its antithesis.
The blind spot of the oppressor will always be that their “oppressing” of others is justified from their subjective view of the “greater” good, and not the good of all people as they all would objectively see it.
I do not think that is what Gurdjieff meant. The idea that all viewpoints are valid could hardly be more alien to his system. From my reading of Gurdjieff, I take him to be speaking here of the mechanical nature of the ordinary man, who imagines himself to be thinking and acting, an idea contradicted as soon as one observes him in his life.
Scott Adams
Do the same with a Chiropractor and let me know if you get different results.
If you read the link, that’s exactly the author’s point
I was reading www.sciencebasedmedicine.org at the same time and my natural smart ass went for a walk. There’s probably a creme for that somewhere.
Ksawery Tartakower
Suppose White gives away a pawn, and then the next move White accidentally lets Black put him in checkmate. White made the next-to-last mistake, but lost, so the saying must be false in a mundane sense. Is there an esoteric sense in which the saying is true?
I read this as implying that the loser is the one who makes the last mistake — the mistake that allows his opponent to win.
But yeah, I think the quote is kinda sloppy — it assumes that the opponents take turns in making mistakes.
This is true if you only count as mistakes moves which turn a winning position into a losing position, as gRR said elsethread. (I think I picked up this meaning from Chessmaster 10's automatic analyses, and was implicitly assuming it when reading the Tartakower quote.)
On a purely empirical level, most amateur games, once they reach critical positions, are blunderfests punctuated by a few objectively strong moves that decide the game, and many complex positions near the end of games are similar blunderfests even among masters. If you assume that the majority of moves are blunders, then Tartakower’s point is generally true. But I don’t think that’s what he meant.
Hmm, I suppose a “mistake” in a technical sense is defined in terms of minimax position evaluation, assuming infinite computing power:
eval(position) = −1 (loss), 0 (tie), or +1 (win)
IsFatalMistake(move) = (eval(position before the move) > eval(position after the move)) AND (eval(position after the move) == −1)
With this definition, either giving away the pawn or missing the checkmate (or both) wasn’t a fatal mistake, since the game was already lost before the move :)
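A minimal runnable rendering of that definition (my own sketch; the evaluations are supplied by hand, since “infinite computing power” is the assumption here, not something the code provides):

    def is_fatal_mistake(eval_before: int, eval_after: int) -> bool:
        """A move is a fatal mistake iff it turns a non-lost position (0 or +1)
        into a lost one (-1), under perfect-play evaluation."""
        return eval_before > eval_after and eval_after == -1

    print(is_fatal_mistake(+1, -1))  # True: threw away a win
    print(is_fatal_mistake(0, -1))   # True: threw away a draw
    print(is_fatal_mistake(-1, -1))  # False: the game was already lost, as above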
Does “you shouldn’t give up after a mistake, because many chess games involve both players, even the winner, making multiple mistakes” count as esoteric?
I like this even though it violates the correct standard of “mistake”: was the choice expected-optimal, before the roll of the die?
I like that it suggests continuing to focus on the rest of the game rather than beating yourself up over a past mistake.
Tartakower was a chess player.
Somehow I’d imagined chess without really knowing.
The roll of the die is still in effect: unanticipated consequences of only-boundedly-optimal moves by each player can’t make the original move more or less of a true mistake.
Tartakower also said “No one ever won a game by resigning” indeed.
Quintopia on #lesswrong.
“The greatest lesson in life is to know that even fools are sometimes right.”
-Winston Churchill
-Morpheus, Deus Ex
Yes, I know, generalization from fictional evidence and the dangers thereof, etc. I think it a genuine insight, though. Just remember that humans are (almost) never motivated by just one thing.
Explain for me?
Certainly. The idea is that God was invented not just to explain the world (the standard answer to that question) but also as a sort of model of how a particular group of people wanted to be governed. One of the theses of the game is that governments constitute a system for (attempting to) compensate for the inability of people to rationally govern themselves, and that God is the ultimate realization of that attempt. A perfect government with a perfect understanding of human nature and access to everyone’s opinions and desires (but without any actual humans involved). Over time, of course, views of what ‘God’ should be like shift with the ambient culture.
I agree, with the caveat that humans usually (and probably in this case) do things for multiple complicated reasons rather than just one. Also the caveat that Deus Ex is a video game.
Interesting theory, and perhaps one that’s got legs, but there’s some self-reinforcement going on in the religious sphere that keeps it from being unicausal—if we’ve got a religion whose vision of God (or of a god of rulership like Odin or Jupiter, or of a divine hierarchy) is initially a simple reflection of how its members want to be governed, I’d nonetheless expect that to drift over time to variants which are more memorable or more flattering to adherents or more conducive to ingroup cohesion, not just to those which reflect changing mores of rulership. Then group identity effects will push those changes into adherents’ models of proper rulership, and a nice little feedback loop takes shape.
This probably helps explain some of the more blatantly maladaptive aspects of religious law we know about, although I imagine costly signaling plays an important role too.
Can you expand on this a little? I’m interested to see what in particular you’re thinking of.
Chesterton, found here
--Benjamin Vigoda, “Analog Logic: Continuous-Time Analog Circuits for Statistical Signal Processing” (2003 PhD thesis)
And the very next year, Intel abandoned its plans to make 4 GHz processors, and we’ve been stuck at around 3 GHz ever since.
Since then, parallel computing has indeed had the industry juggernaut behind it.
Yep, and that’s why we all have dual-core or more now rather than long ago. Parallel computers of various architectures have been around since at least the ’50s (mainframes had secondary processors for IO operations, IIRC), but were confined to niches until the frequency wall was hit and the juggernaut had to do something else with the transistors Moore’s law was producing.
(I also read this quote as an indictment of the Lisp machine and other language-optimized processor architectures, and more generally, as a Hansonesque warning against ‘not invented here’ thinking; almost all innovation and good ideas are ‘not invented here’ and those who forget that will be roadkill under the juggernaut.)
-- G.B. Shaw, “Man and Superman”
Shaw evinces a really weird, teleological view of evolution in that play, but in doing so expresses some remarkable and remarkably early (1903) transhumanist sentiments.
I love that quote, but if it carries a rationality lesson, I fail to see it. Seems more like an appeal to the tastes of the audience here.
Yeah, you’re correct. Wasn’t thinking very hard.
I have to disagree; the lesson in the quote is “Win as hard as you can”, which is very important if not very complicated.
I don’t see the connection. If bringing a superior being to myself into existence is maximum win for me, that’s not obvious. Not everyone, like Shaw’s Don Juan, values the Superman.
Okay, I think I see what’s going on. I originally interpreted “something better than myself” from the quote to include self-improvement. In context though, that’s clearly not what it’s implying.
-Charles Dodgson (Lewis Carroll), Through the Looking-Glass
Isn’t Humpty Dumpty wrong, if the goal is intelligible conversation?
Absolutely. But if the goal is to establish dominance, as Humpty Dumpty appears to suggest, the technique often works.
At first when I posted it, I think I was thinking of it as kind of endorsing a pragmatic approach to language usage. I mean, it hurts communication to change the meanings of words without telling anyone, but occasionally it might be useful to update meanings when old ones are no longer useful. It used to be that a “computer” was a professional employed to do calculations; then it became a device to do calculations with; now it’s a device to do all sorts of things with.
But I feel like that’s kind of a dodge—you’re absolutely right when you say changing the meanings arbitrarily (or possibly to achieve a weird sense of anthropomorphic dominance over it) harms communication, and should be avoided, unless the value of updating the sense of the word outweighs this.
It’s also a useful way to establish a nonweird sense of dominance over my conversational partner.
“Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it”—Abraham Lincoln’s words in his February 26, 1860, Cooper Union Address
If right makes might, is the might you see right? Since blight and spite can also make might, is it safe to sight might and think it right?
Now, an application for Bayes’ Theorem that rhymes!! Sweet Jesus!
I love it! How about in response: Since blight and spite can make might, it’s just not polite by citing might to assume that there’s right, the probabilities fight between spite, blight and right so might given blight and might given spite must be subtracted from causes for might if the order’s not right!
You have no idea how hard I’m giggling right now. Or maybe you do, because I’m telling you about it. Well met, mathpoet!
(I hope that mathpoets become enough of a real thing to warrant an unhyphenated word.)
Check out this alliteration: “When you see an infinite regress, consider a clever quining.”
--Gregory House, M.D. - S02E11 “Need to Know”
Thought it was a duplicate of this superior quote, but it wasn’t.
-- the character Sherkaner Underhill, from A Deepness in the Sky, by Vernor Vinge.
If people believe traditions are valuable, they should anticipate that searching the past for more traditions is valuable. But we don’t see that; we see most past traditions (paradoxically!) rejected with “things are different now”.
Hmm...my subjective impression is that people that talk a lot about tradition actually are more interested in history than people who don’t.
My subjective impression is that people who talk a lot about tradition are more interested in “the past” than they are interested in “history”. e.g. the history of our nation does not bear out the traditional idea that everyone is equal. Or for that matter, the tradition of social mobility in our country, or the tradition of a wedding veil, or the tradition of Christmas caroling v. wassailing, etc.
This implication is true, but the premise typically is not. The conservative defense of tradition-for-tradition’s-sake isn’t really a defense of all traditions, it’s a defense of long-term-stable, surviving traditions. Don’t think, “It’s old; revere it.” Think, “It’s working; don’t break it.” For traditions which weren’t working well enough to be culturally preserved with no searching necessary, this heuristic doesn’t apply. To the contrary, if it turned out that there was no correlation between how long a tradition survives and how worthwhile it is, then there would be no point in giving a priori respect to any traditions.
David Deutsch, The Beginning of Infinity
-- Jonathan Haidt, The Righteous Mind, quoted here
I’m still trying to decide whether going off to live in the metaphorical colonies orbiting the moon is to be considered a bad thing or a really awesome idea.
It really depends how many catgirls I’m allowed to bring.
I mean, realistic orbiting colonies done using present-day space technology would be horrifying death traps, but metaphorical orbiting colonies are the future of humanity. I’m really confused here.
— orthonormal
Politics is the art of the possible. Sometimes I’m tempted to say that political philosophy is the science of the impossible.
John Holbo
-Captain Kirk
Nonsense. I just threw Schrodinger’s cat outside the future light cone. In your Everett branch is the cat alive or dead?
Ok, sure, having a physics where faster than light and even (direct) time travel are possible makes things easier.
Both?
No.
Well, in this case the universal wavefunction does factorise into a product of two functions 𝛙(light cone)𝛙(cat), where 𝛙(cat) has an “alive” branch and “dead” branch, but 𝛙(light cone) does not. I’d rather identify with 𝛙(light cone) than 𝛙(light cone × cat) [i.e. 𝛙(universe)], but whatever.
The point you were trying to make is correct either way.
It seems to me that asking about the state of something in “your” Everett branch while it’s outside your light cone is rather meaningless. The question doesn’t really make sense. Someone with a detailed knowledge of physics in this situation can predict what an observer anywhere will observe.
But in general, your point is correct. We do have a very hard time trying to learn about events outside our light cone, etc. But the message in the quote is simply the idea that an uncertain map != an uncertain territory.
So, if it was someone you care about instead of a cat, would you prefer that this happened or that they disappeared entirely?
It is still not meaningful from a physical standpoint. If you were to throw something I valued outside my future lightcone, then I would take it the same as you destroying said thing.
And may I remind you that Schrödinger’s cat was proposed as a thought-experiment counterargument to the Copenhagen interpretation, so asking if it is alive or dead before I have had particle interaction with it is equally meaningless, because it has yet to decohere.
Yes it is. Physics doesn’t revolve around you. The fact that you can’t influence or observe something is a limitation in you, not in physics. Stuff keeps existing when you can’t see it.
I don’t believe you. I would bet that, if actually given the choice between someone you loved being sent outside your future lightcone and then destroyed, or just sent outside the future lightcone and given delicious cookies, you would prefer the far-away cookies to the far-away destruction.
Yes, of course I believe in the implied invisible. But from a personal standpoint it does not matter, because the repercussions are the same either way, unless you can use your magical “throw stuff outside my future lightcone” powers to bring them back. Outside the future lightcone = I can never interact with it.
And if I have to be really nitpicky, current macroscopic physics does revolve around the observer, but certain things can be agreed upon, such as the Hamiltonian, and timelike, spacelike and lightlike distances, etc. Saying physics does not revolve around me implies that there is a common reference point, which there isn’t.
Also, I think we are straying from meaningful discussion.
No they can’t. They most certainly can’t predict what the observer that is right next to the damn box with the cat in it will observe when it opens the box. In fact, they can’t even predict what all observers anywhere in my future light cone will observe (just those observations that could ever be sent back to me).
“Temporarily” can be quite a long time… So when can we expect to probe Planck-energy physics solidly enough to really test how quantum gravity works? :)
— Henry Kuttner, Or Else
Ernest Hemingway
Though I don’t remember who said it.
1 Corinthians 15:54-57
(I like this quote, as long as it’s shamelessly presented without context of the last line: “But thanks be to God, who gives us the victory through our Lord Jesus Christ.” )
How do you interpret that line?
The sting of death is ignorance, and the power of ignorance is the indifference of the universe. We do not yet know how to stop death, and until we do, we will go on dying.
-Anonymous
“Do you believe in revolution
Do you believe that everything will change
Policemen to people
And rats to pretty women
Do you think they will remake
Barracks to bar-rooms
Yperite to Coca-Cola
And truncheons to guitars?
Oh-oh, my naive
It will never be like that
Oh-oh, my naive
Life is like it is
Do you think that ever
Inferiority complexes will change to smiles
Petržalka to Manhattan
And dirty factories to hotels
Do you think they will elevate
Your idols to gods
That you will never have to
Bathe your sorrow with alcohol?
Oh-oh, my naive...
Do you think that suddenly
Everyone will reconcile with everyone
That no one will write you off
If you will have holes in your jeans
Do you think that in everything
Everyone will help you
That you will never have to be
Afraid of a higher power?
Oh-oh, my naive...”
My translation of a 1990s Slovak punk-rock song, “Nikdy to tak nebude” (“It will never be like that”) by Slobodná Európa. Is it an example of an outside view, or just trying to reverse stupidity?
--Dara O’Briain
Duplicate: http://lesswrong.com/r/all/lw/9pk/rationality_quotes_february_2012/5tm0
-Jeff Olson
-- Terry Goodkind, Faith of the Fallen. I know quite a few here dislike the author, but there’s still a lot of good material, like this one, or the Wizard’s Rules.
Wrong. Ockham’s Razor is, at best, deducible from the axioms of probability theory, which are logically independent of “what is, is”. Without the Razor, most of human knowledge is not justifiable.
I have yet to study probability theory in depth, but how can it be wrong? It simply means relying on facts: reason instead of wishes, facts instead of faith. Probability might be interesting, but since it’s subjective, it only serves as an estimate in figuring out what might be true. The above quote tells us that facts are facts, and we can choose not to believe them, but they are still there. Using logic, for example Occam’s Razor, helps in discerning fact from belief.
The quote can be boiled down to “what is is, regardless of our knowledge”.
I’m sorry, I wasn’t clear. Specifically:
is wrong. Our knowledge is, as you say, subjective; it’s based on our calculations, which are fallible, and on our axioms (or “priors”), which are even more fallible.
Thanks for clearing that up! As far as I can tell, however, all subjective knowledge is based on interpretations of the objective. We can all be wrong, but what is, is. We experiment to figure out what is in the first place, before we can try to form calculations, no? It would be more of something like “look at the territory first, else you might fall and break your neck if your map’s wrong”. I feel like I’m missing something painfully obvious, though. Where am I going wrong?
“What is, is” is a true statement, and one would do well to bear it in mind. My objection was to the assertion (as perceived by me) that we—as rationalists—can claim to deduce everything we know from that simple fact. We can’t, and it’s a flaw I don’t think we pay enough attention to.
Maybe we should, then. I’ve always perceived it as: we can potentially deduce everything from… well, not just that fact, but the assumption that what is is, and we can only do our best to interpret it. We’ll most likely never be completely right, I know damn well I’m not, but I understand your reasoning, anyway. What would in your view be impossible to deduce, then?
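One standard way to cash out “the Razor is deducible from the axioms of probability theory” (my gloss, not anything the commenters said): describe hypotheses in some prefix-free language and give each hypothesis $h$ of description length $\ell(h)$ the prior

$$P(h) = 2^{-\ell(h)}, \qquad \sum_h 2^{-\ell(h)} \le 1,$$

where the inequality is the Kraft inequality, guaranteeing these weights form a legitimate (sub-)probability distribution. Shorter hypotheses then get more prior weight automatically; the remaining fallibility lives in the choice of description language, which is exactly the “priors are even more fallible” point made above.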
The important thing, I take it, is to decide the level of our contribution on your own, without doing any detailed gathering of data or modeling. —LeoChopper, at sluggy.net, summarizing an argument against AGW.
(Okay, I understand it sitting at 0. Downvoted for what? Putting modeling on the same footing as detailed gathering of data?)
I’m not the one who downvoted it, but I’m about to add another, because the quote makes little to no sense without context. Who is “our” and “your”? Does “contribution” refer to CO2 emissions, or to political activism, or to planning work, or to research?
People also tend to downvote pro- and anti-AGW arguments here as “mindkilling”, but this one hasn’t even reached that point yet. From just the quoted text I can’t even be certain whether this is an anti-AGW statement (climate modelling is insufficiently detailed and data is too sparse to justify economic contributions to mitigating global warming!) or a pro-AGW sarcastic summary of an anti-AGW argument (you’re ignoring our detailed data and modeling and just deciding how much CO2 we should contribute to the air!) or something I’ve missed entirely.
I see. It was the latter—someone had just pooh-poohed basically all climate science, explicitly citing gut feeling. The above was a very straightforward summary of the ‘argument’, not really sarcastic.
Modest Mouse, lyrics Isaac Brock
I don’t see why this got downvoted. It’s making “humans are still apes, despite their pretensions” into a memorable image.
“Humans are still apes” is un-Darwinian.
Darwin is saying that all animals are linked by genealogical ties. The mouse and the elephant share a common ancestor, a small, shrew-like creature, 200 million years ago. So is he saying elephants are still mice, just big mice with a funny nose? No; the theory, as the book title suggests, is a theory of origins. Given 10 million years, descent with modification can come up with something genuinely new. By spreading the necessary changes across millions of generations, it can produce genuine novelty without ever needing a mouse to give birth to an elephant.
Some people look at modern technological civilization and see it as evidence that humans are not apes, but are their own kind of thing, genuinely new. Darwinians accept that sufficient such evidence can prove the point that humans (or maybe post-humans) are not apes, because it is central to Darwin’s theory that some kinds of genuine novelty arise despite (and indeed through) long chains of descent.
Uh, the reason to say “humans are apes” is because doing so turns out to have useful predictive power. That being the actual point of the original quote.
Humans are still apes according to any monophyletic definition of ape, given that bonobos are more closely related to us than to orangutans. (Also, birds are dinosaurs and dogs are wolves.)
“Birds are dinosaurs” is becoming commonplace. Even the Wikipedia article on dinosaurs has given up and gone present tense.
Human civilizations are extremely complicated, and defy current attempts to understand them. One indirect approach is to leave humans to one side for the moment and to study bonobos, chimpanzees, and gorillas first. Where does that get us? There are two competing ideas.
ONE The huge differences between modern human civilizations and the social behaviour of bonobos, chimpanzees, and gorillas are a reflection of recent evolution. In the past few million years, since the last common ancestor, human evolution has taken some strange turns, leading to the advanced technological society we see around us. When we study bonobos, chimpanzees, and gorillas, we are looking at creatures without key adaptations, and when we try to transfer insights to help us understand human social behaviour, we end up misled.
TWO Once we understand bonobos, chimpanzee, and gorilla behaviour, we have the key to understanding all apes, including humans. Human civilisation may be incomprehensible when we come at it cold, but having warmed up on puzzling out the basis of the simpler social behaviours of other apes, we can expect to start making progress.
Which of these two views is correct? That strikes me as a very hard question. I’m uncomfortable with the words “humans are still apes” because that phrase seems to be used to beg the question. The more conservative formulation, “humans and apes had a common ancestor a few million years ago”, dodges giving a premature opinion on a hard question.
Here is a thought experiment to dramatize the issue: A deadly virus escapes from a weapons lab and kills all humans. Now the talking-animal niche on earth is vacant again. Will chimpanzees or gorillas evolve to fill it, building their own technologically advanced civilizations in a few million years’ time? If you believe view number two, this seems reasonably likely. If you believe view number one, it seems very unlikely. View one is much more open to the idea that the strange turns in human evolution in the past million years are a one-in-a-million freak, and a candidate for the Great Filter.
More like “any common ancestor of all apes is also an ancestor of all humans”.
(Humans are not apes if you define apes paraphyletically, e.g. as ‘the descendants of the most recent common ancestor of bonobos and gibbons, excluding humans’, but then “humans are not apes” becomes a tautology.)
Humans might have adaptations which set us apart from all the other apes behavior-wise, but we share a common ancestor with chimps and bonobos more recently than they share a common ancestor with orangutans. It doesn’t make a lot of sense to say we split off from the apes millions of years ago, when we’re still more closely related to some of the apes than those apes are to other species of ape.
Edit: already pointed out in the grandparent, I guess this is what I get for only looking at the local context.
How you define the word “ape” makes no difference to the facts about our relationships with our ancestors and their other descendants.
I don’t see why being monophyletic is the most relevant property of definitions.
Also, are you also going to attempt to argue that humans are fish?
The fish thing is irrelevant. If what makes bonobos and orangutans apes is that they share a common ancestor, then that also makes us an ape, since that’s our ancestor too. Can’t adapt that argument to fish, because descendants of the ancestor we share with fish are not generally called fish, the way descendants of the ancestor we share with orangutans are generally called apes.
I’m not sure this holds water: a common-ancestry approach would have to take in lobe-finned fishes like the lungfish, who’re more closely related to tetrapods but are called fish on the basis of a morphological similarity derived from a common ancestor. Essentially the same process as for apes. They’re in good company, though: there are plenty of traditional taxonomical groups which turn out to be polyphyletic when you take a cladistic approach, including reptiles.
There would be no point in defining fish monophyletically anyway, as it would then be just a synonym of craniates. (Also note that “apes, i.e. non-human hominoids, do not include humans” is a tautology but “fish, i.e. non-tetrapod craniates, do not include humans” is not.)
(Of course, you could then say, “There would be no point in defining apes monophyletically anyway, as it would then be just a synonym of hominoids.” But hominoids is a much uglier word, and hominoids/hominids/hominines/etc. are much harder to remember than apes/great apes/African apes/etc.; plus, my spell checker baulks at some of the former, FWIW. See this proposal to rename the scientific names of the clades.)
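The monophyly argument in this subthread can be made concrete with a toy sketch (the branching order is the standard one; the node names and code are mine, purely for illustration): any clade that contains both bonobos and orangutans must contain their most recent common ancestor, and therefore every descendant of that ancestor, humans included.

# Parent links encode the standard great-ape branching order:
# orangutans split off first, then gorillas, then the
# chimp/bonobo lineage splits from the human lineage.
PARENT = {
    "human": "human-chimp ancestor",
    "chimp": "chimp-bonobo ancestor",
    "bonobo": "chimp-bonobo ancestor",
    "chimp-bonobo ancestor": "human-chimp ancestor",
    "human-chimp ancestor": "African-ape ancestor",
    "gorilla": "African-ape ancestor",
    "African-ape ancestor": "great-ape ancestor",
    "orangutan": "great-ape ancestor",
    "great-ape ancestor": None,
}

def ancestors(taxon):
    """Chain of ancestors of a taxon, nearest first."""
    chain = []
    node = PARENT[taxon]
    while node is not None:
        chain.append(node)
        node = PARENT.get(node)
    return chain

def mrca(a, b):
    """Most recent common ancestor of two taxa."""
    b_anc = set(ancestors(b))
    return next(x for x in ancestors(a) if x in b_anc)

def clade(root):
    """Everything descended from root: a monophyletic group."""
    return {t for t in PARENT if root in ancestors(t)}

apes = clade(mrca("bonobo", "orangutan"))
print("human" in apes)  # True: any monophyletic "ape" includes humans

(The fish disanalogy from the sibling comments is then just that “fish”, unlike “ape”, is not conventionally the name of any clade.)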
Bad spelling and bad punctuation would suffice.
Aleister Crowley, Magick, Liber ABA, Book 4
Could you explain what you mean by that?
If you search the text of the book, e.g. with Google Books, you can see the four places where it appears and get a sense of the context and meaning. His talk of pyramids is similar to Eliezer’s Void or Musashi’s nameless virtue or God; seeing that connection should I think be enough to figure the rest out? Maybe? It’s a pretty deep piece of wisdom though so a lot of the meaning might not be immediately obvious. Hence my trepidation about explaining it; it’d take too long.
If it’s too deep to be understandable without explanation, and you don’t think it’s feasible to explain it here, then why did you put the quote up in the first place?
Heterogeneous audience and asymmetric costs/benefits to reading it: people who don’t get it aren’t harmed much by its presence, the few people who do get it should benefit quite a bit.
Shouldn’t a good pithy saying work in the opposite way? The people who don’t get it walk away enlightened (or, at least, filled with curiosity regarding the topic), while the ones in the know are unharmed.
What’s the point of telling the chosen few something which they already know?
It’s something that you could have derived if you’d thought to but didn’t, like Bayes’ rule. Once it’s pointed out you immediately see why it’s true and gain a fair bit of insight, but first you have to understand basic algebra. It’s basically like clichés such as “be the change you want to see in the world”, but on a higher level; most normal people don’t have enough knowledge to correctly interpret “be the change you want to see in the world”, and most smart people don’t have enough knowledge to correctly interpret “interpret every phenomenon as a particular dealing of God with your soul”, but the few who do should benefit a lot.
In that case I’m voting down your quote, because, not being one of the Elect, I see no particular meaning in it. But if you wrote some sort of a Sequence on the topic, I might vote it up.
I think that’s the correct choice; the quote and quotes like it should be voted down to minus ten or so, because most people will get no benefit from it.
Do you consider it more than negligibly likely that the benefit-receiving subset will read a comment voted down to −10 or so?
I am more likely to read heavily downvoted quotes, simply for the sake of novelty, than quotes voted at −2 to 4 karma. I don’t think I’m in the benefit-receiving subset though.
I read strongly downvoted posts as well, but perhaps they have more than just novelty value. For a post that is merely bad, people usually stop downvoting it once it’s negative. But something voted to −10 or below is often bad in a way that serves as an example of what not to do. Heavily downvoted comments can be educational.
Yes, especially if I point them to it. Having it already sitting there with links is useful. There’s also a non-negligible subset of people that read my comments from my user page.
This might actually be the highest wisdom-to-length ratio I’ve ever seen in an English sentence. “Take heed therefore how ye hear: for whosoever hath, to him shall be given; and whosoever hath not, from him shall be taken even that which he seemeth to have” from Jesus is also pretty high up there.
Well let me impress you:
So heed this: whoever has, will be given to; and whoever has not, more will be taken from.
Exchanges like this make me wish we had a signalling-analysis novelty account, akin to reddit’s joke-explainer.
Italian is even more awesome: the proverb Piove sempre sul bagnato (lit. ‘it always rains on the wet’) says the same thing in eight syllables. :-)
(There was once a discussion in Italy about whether to stop teaching Latin in a certain type of high school. Someone said that Latin should be taught because it’s the intellectual equivalent of high-nutrient food, giving the example of the proverb Homini fingunt et credunt and pointing out that a literal translation (‘People feign and believe’) would be nearly meaningless, whereas an actually meaningful translation (‘People make up things and then they end up believing them themselves’) wouldn’t be as terse and catchy. But ISTM that all natural languages have proverbs whose point is not immediately obvious from the literal meaning, so that’s hardly an argument as to why one particular language should be taught.)
Lolz, but the “how ye hear” part is actually an important nuance. (And sadly it doesn’t appear in a few of the other gospels I think.) ETA: Also the “seemeth to have” part is actually an important nuance. (And sadly it doesn’t appear in a few of the other gospels I think.)
Yeah, I couldn’t parse “how ye hear” into English. I mean, I turned it into “Heed how you listen: ” but that doesn’t have any poignancy, any poetry to it at all.
The rich get rich, but the poor stay poor.
But that’s not as abstract and makes it seem like it’s literally only about money, rather than a general principle of credit assignment that has important implications for people who want to have better epistemic habits. That’s why the “take heed therefore how ye hear” part is important.
Take heed therefore how ye hear: for whosoever hath good inductive biases, to him more evidence shall be given, and he shall have an abundance: but whosoever hath not good inductive biases, from him shall be taken away even what little evidence that he hath.
ETA: I feel like some pedantic snobbish artist going on about this sort of thing, it’s kinda funny.
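A hedged numeric sketch of the “good inductive biases” reading (the toy setup, numbers, and names are mine, not anything in the thread): two Bayesian hearers see the same fixed sample of eight coin flips (six heads) from a coin whose true heads-probability is 0.7; the open-minded prior extracts nearly all of the signal, while the dogmatic prior leaves most of the evidence effectively “taken away”.

# Toy model: two Bayesian "hearers" update on the same eight flips.
GRID = [i / 100 for i in range(1, 100)]   # candidate coin biases
FLIPS = [1, 1, 0, 1, 1, 1, 0, 1]          # fixed sample: 6 heads, 2 tails

def posterior_mean(prior):
    """Grid Bayes: update a prior over coin biases on FLIPS, return the mean."""
    post = dict(prior)
    for heads in FLIPS:
        post = {p: w * (p if heads else 1 - p) for p, w in post.items()}
    z = sum(post.values())
    return sum(p * w / z for p, w in post.items())

# Good inductive bias: open-minded, uniform over hypotheses.
open_minded = {p: 1 / len(GRID) for p in GRID}
# Bad inductive bias: near-certainty piled on the wrong hypothesis.
dogmatic = {p: (1 - 1e-6 if p == 0.10 else 1e-6 / (len(GRID) - 1))
            for p in GRID}

print(posterior_mean(open_minded))  # close to 0.7: evidence "given"
print(posterior_mean(dogmatic))     # stuck near 0.1: evidence "taken away"

Same data, very different posteriors: the difference is entirely in how each hearer hears.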
It’s conceivable that “take care” is also a clue that this process will just happen—it’s not your job to be taking advantage of those who have little.
What is Jesus even talking about? Arguing that capitalism leads to monopolistic capitalism? Arguing against economic inequality? Discussing utility monsters? Ordering followers to strengthen the economic inequality by giving to the rich?
Imagine that LW, after the fall of civilization, became a cult of Eliezer, misquoting and taking out of context anything said on any topic… after the destruction of the internet, relying on memories alone.
You can improve the wisdom to length ratio just by taking the “so” out of the whosoevers.
Edit: already done, and right below me too.
Length isn’t measured in number of letters; it’s measured in ease of memorization, the encoding scheme of the brain. “Whosoever” flows better.
Probably because you’ve already heard that quotation with the whosoever. In the encoding scheme where 0 encodes the lyrics to “Bohemian Rhapsody” and the encodings of all other messages start with 1, the lyrics to “Bohemian Rhapsody” have the shortest “length” in your sense of the word.
I’m talking about writing to memory, not reading from it. I don’t think it’s just because I’ve heard it with “whosoever”, I think it’s because “whosoever” is more poetic and distinct in context.
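The parent’s encoding point can be made concrete with a deliberately rigged prefix-free code (a sketch; the favoured string is a stand-in for the memorized lyrics, and all names here are made up): give one privileged message the codeword “0” and prefix every other message’s bits with “1”. The privileged message is then the “shortest” one under that scheme, which says nothing about its intrinsic complexity, only about the codebook we happen to carry around.

FAVOURITE = "Is this the real life? Is this just fantasy?"  # stand-in lyrics

def encode(msg):
    """A valid prefix-free code rigged to favour one message."""
    if msg == FAVOURITE:
        return "0"
    bits = "".join(f"{byte:08b}" for byte in msg.encode("utf-8"))
    return "1" + bits

def decode(code):
    """Invert encode, proving no information is lost by the rigging."""
    if code == "0":
        return FAVOURITE
    body = code[1:]
    raw = bytes(int(body[i:i + 8], 2) for i in range(0, len(body), 8))
    return raw.decode("utf-8")

assert decode(encode("hi")) == "hi"
assert decode(encode(FAVOURITE)) == FAVOURITE
print(len(encode(FAVOURITE)))  # 1 bit
print(len(encode("hi")))       # 17 bits: shorter string, longer codeword

Whether “whosoever” is genuinely easier for a fresh brain to write to memory, or only looks that way because the codebook already contains the King James phrasing, is exactly the question under dispute here.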