I dislike the “post utopian” example, and here’s why:
Language is pretty much a set of labels. When we call something “white”, we are saying it has some property of “whiteness.” NOW we can discuss wavelengths and how light works, or whatnot, but 200 years ago, they had no clue. They could still know that snow is white, though. At the same time, even with our knowledge of how colors work, we can still have difficulties knowing exactly where the label “white” ends, and grey or yellow begins.
Say I’m carving up music-space. I can pretty easily classify the differences between Classical and Rap, in ways that are easy to follow. I could say that classical features a lot of instrumentation, and rap features rhythmic language, or something. But if lots of people spend their whole lives studying music, they’re going to end up breaking music-space into much smaller pieces. For example, dubstep and house.
Now, I can RECOGNIZE dubstep when I hear it, but if you asked me to teach you what it was, I would have difficulties. I couldn’t necessarily say “It’s the one that goes, like, WOPWOPWOPWOP iiinnnnnggg” if I’m a learned professor, so I’ll use jargon like “synthetic rhythm,” or something.
But not having a complete explainable System 2 algorithm for “How to Tell if it’s Dubstep” doesn’t mean that my System 1 can’t readily identify it. In fact, it’s probably easier to just listen to a bunch of music until your System 1 can identify the various genres, even if your System 2 can’t codify it. The example treats the fact that your professor can’t really codify “post utopianism” as meaning that it isn’t “true”. (This example has been used in other sequence posts, and I disagreed with it then too.)
Have someone write a bunch of short stories. Give them to English Literature professors. If they tend to agree which ones are post utopian, and which ones aren’t, then they ARE in fact carving up literature-space in a meaningful way. The fact that they can’t quite articulate the distinction doesn’t make it any less true than knowing that snow was white before you knew about wavelengths. They’re both labels, we just understand one better.
Anyways, I know it’s just an example, but without a better example, I can’t really understand the question well enough to think of a relevant answer.
I think Eliezer is taking it as a given that English college professors who talk like that are indeed talking without connection to anticipated experience. This may not play effectively to those he is trying to teach, and as you say, may not even be true.
In particular, “post-utopian” is not a real term so far as I know, and I’m using it as a stand-in for literary terms that do in fact have no meaning. If you think there are none of those, Alan Sokal would like to have a word with you.
There’s a sense in which a lot of fuzzy claims are meaningless: for example, it would be hard for a computer to evaluate “Socrates is kind” even if the computer could easily evaluate more direct claims like “Socrates is taller than five feet”. But “kind” isn’t really meaningless; it would just be a lot of work to establish exactly what goes into saying “kind” and exactly where the cutoff point between “kind” and “not so kind” is.
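To make that contrast concrete, here is a minimal sketch with invented numbers: the “direct” claim reduces to a single comparison against a measurement, while any evaluation of “kind” has to bake in contestable choices about which features count and where the cutoff sits.

```python
# Minimal sketch (invented data): a crisp claim versus a fuzzy one.

socrates = {"height_feet": 5.6, "acts_of_kindness": 12, "insults_per_dialogue": 3}

def taller_than_five_feet(person):
    # Direct claim: one measurable quantity, one uncontroversial cutoff.
    return person["height_feet"] > 5.0

def is_kind(person, kindness_weight=1.0, insult_weight=2.0, cutoff=5.0):
    # Fuzzy claim: the features, weights, and cutoff are all judgment calls,
    # but once they are fixed, the predicate is perfectly evaluable.
    score = kindness_weight * person["acts_of_kindness"] - insult_weight * person["insults_per_dialogue"]
    return score > cutoff

print(taller_than_five_feet(socrates))  # True
print(is_kind(socrates))                # True under these (arbitrary) choices
```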
I agree that literary critical terms are fuzzy in the same sense as “kind”, but I don’t think they’re necessarily any more fuzzy. For example, replacing “post-utopian” with its likely inspiration “post-colonial”, I don’t know much about literature, but I feel pretty okay designating Salman Rushdie as “post-colonial” (since his books very often take place against the backdrop of the issues surrounding British decolonization of India) and J. K. Rowling as “not post-colonial” (since her books don’t deal with issues surrounding decolonization at all.)
Likewise, even though “post-utopian” was chosen specifically to be meaningless, I can say with confidence that Sir Thomas More’s Utopia was not post-utopian, and I bet most other people will agree with me.
The Sokal Hoax to me was less about totally disproving all literary critical terms, and more about showing that it’s really easy to get a paper published that no one understands. People elsewhere in the thread have already given examples of Sokalesque papers in physics, computer science, etc that got published, even though those fields seem pretty meaningful.
Literary criticism does have a bad habit of making strange assertions, but I don’t think they hinge on meaningless terms. A good example would be deconstruction of various works to point out the racist or sexist elements within. For example, “It sure is suspicious that Moby Dick is about a white whale, as if Melville believed that only white animals could possibly be individuals with stories of their own.”
The claim that Melville was racist when writing Moby Dick seems potentially meaningful—for example, we could go back in time, put him under truth serum, and ask him whether that was intentional. Even if it was wholly unconscious, it still implies that (for example) if we simulate a society without racism, it will be less likely to produce books like Moby Dick, or that if we pick apart Melville’s brain we can draw some causal connection between the racism to which he was exposed and the choice to have Moby Dick be white.
However, if I understand correctly literary critics believe these assertions do not hinge on authorial intent; that is, Melville might not have been trying to make Moby Dick a commentary on race relations, but that doesn’t mean a paper claiming that Moby Dick is a commentary on race relations should be taken less seriously.
Even this might not be totally meaningless. If an infinite monkey at an infinite typewriter happened to produce Animal Farm, it would still be the case that, by coincidence, it was a great metaphor for Communism. A literary critic (or primatologist) who wrote a paper saying “Hey, Animal Farm can increase our understanding and appreciation of the perils of Communism” wouldn’t really be talking nonsense. In fact, I’d go so far as to say that they’re (kind of) objectively correct, whereas even someone making the relatively stupid claim about Moby Dick above might still be right that the book can help us think about our assumptions about white people.
If I had to criticize literary criticism, I would have a few vague objections. First, that they inflate terms—instead of saying “Moby Dick vaguely reminds me of racism”, they say “Moby Dick is about racism.” Second, that even if their terms are not meaningless, their disputes very often are: if one critic says “Moby Dick is about racism” and another critic says “No it isn’t”, then if what the first one means is “Moby Dick vaguely reminds me of racism”, arguing this is a waste of time. My third and most obvious complaint is opportunity costs: to me at least the whole field of talking about how certain things vaguely remind you of other things seems like a waste of resources that could be turned into perfectly good paper clips.
But these seem like very different criticisms than arguing that their terms are literally meaningless. I agree that to students they may be meaningless and they might compensate by guessing the teacher’s password, but this happens in every field.
I liked your comment and have a half-formed metaphor for you to either pick apart or develop:
LW/ rationalist types tend towards hard sciences. This requires more System 2 reasoning. Their fields are like computer programs. Every step makes sense, and is understood.
Humanities tends toward more System 1 pattern recognition. This is more akin to a neural network. Even if you are getting the “right” answer, it is coming out of a black box.
Because the rationalist types can’t see the algorithm, they assume it can’t be “right”.
I like the idea that this comment produces in my mind. But nitpickingly, a neural network is a type of computer program. And most of the professional bollocks-talkers of my acquaintance think very hard in system-two like ways about the rubbish they spout.
It’s hard to imagine a system-one academic discipline. Something like ‘Professor of telling whether people you are looking at are angry’, or ‘Professor of catching cricket balls’....
I wonder if you might be thinking more of the difference between a computer program that one fully understands (a rare thing indeed), and one which is only dimly understood, and made up of ‘magical’ parts even though its top level behaviour may be reasonably predictable (which is how most programmers perceive most programs).
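For what it’s worth, the program-versus-neural-network contrast can be shown in a few lines. This is a minimal sketch with invented features and toy data, not a claim about how genre classification actually works: the first classifier’s reasoning is legible step by step, while the second one’s “reasoning” ends up as opaque learned weights, even though both may give the right answer.

```python
# Minimal sketch (invented features/data): an explicit rule versus a trained black box.

def rule_based_is_rap(track):
    # "System 2" style: every step of the decision is inspectable.
    return track[0] == 1 and track[1] == 0  # rhythmic vocals, no orchestral instrumentation

def train_perceptron(examples, epochs=50, lr=0.1):
    # "System 1" style stand-in: a tiny perceptron; after training, the reasons
    # live in numeric weights rather than named rules.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Features: (rhythmic vocals, orchestral instrumentation); label 1 = "rap".
examples = [((1, 0), 1), ((0, 1), 0), ((1, 1), 0), ((0, 0), 0)]

print(rule_based_is_rap((1, 0)))   # True, and you can read off exactly why
weights, bias = train_perceptron(examples)
print(weights, bias)               # correct answers, but the numbers don't explain themselves
```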
Well, in the case of answers to questions like that in the humanities, what does the word ‘right’ actually mean? If we say a particular author is ‘post utopian’, what does it actually mean for the answer to that question to be ‘yes’ or ‘no’? It’s just a classification that we invented. And like all classification groups, there is a set of rules or characteristics that determine whether the author is post utopian or not. I imagine it as a checklist of features which gets ticked off as a person reads the book. If all the items in the checklist are ticked, then the author is post utopian. If not, then the author is not.
The problem with this is that different people have different items in their checklist, and differ in their opinion on how many items in the list need to be checked for the author to be classified as post utopian. You can pick any literary classification and this will be the case. There will never be a consensus on all the items in the checklist. There will always be a few points that not everybody agrees on. This makes me think that, objectively speaking, there is no ‘absolutely right’ or ‘absolutely wrong’ answer to a question like that.
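To make the checklist picture concrete, here is a minimal sketch (the feature names and cutoffs are invented, not real literary criteria): two readers can each apply their own checklist perfectly consistently and still classify the same book differently.

```python
# Minimal sketch (invented criteria): same book, different checklists, different verdicts.

def classify(book_features, checklist, required_hits):
    hits = sum(1 for item in checklist if item in book_features)
    return "post utopian" if hits >= required_hits else "not post utopian"

book = {"rejects utopian project", "ironic narrator", "fragmented chronology"}

reader_a = (["rejects utopian project", "ironic narrator"], 2)
reader_b = (["rejects utopian project", "disillusioned protagonist", "ruined landscape"], 2)

print(classify(book, *reader_a))  # post utopian
print(classify(book, *reader_b))  # not post utopian
```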
In hard science, on the other hand, there is always an absolutely right answer. If we say “protons and neutrons are oppositely charged”, there is an answer that is right because, no matter what my beliefs, experiment is the final arbiter. Nobody who follows through the logical steps can deny that they are oppositely charged without making an illogical leap.
In the literary classification, you or your neural network can go through logical steps and still arrive at an answer that is not the same for everybody.
EDIT: I meant “protons and electrons are oppositely charged” not “protons and neutrons”. Sorry!
One: Protons and neutrons aren’t oppositely charged.
Two: You’re using particle physics as an example of an area where experiment is the final arbiter; you might not want to do that. Scientific consensus has more than a few established beliefs in that field that are untested and border on untestable.
Honestly, he’d be hard pressed to find a field that has better tested beliefs and greater convergence of evidence. The established beliefs you mention are a problem everywhere, and pretty much no field is backed with as much data as particle physics.
Fair enough; I had wanted to say that but don’t have sufficiently intimate awareness of every academic field to be comfortable doing so. I think it works just as well to illustrate that we oughtn’t confuse passing flaws in a field with fundamental ones, or the qualities of a /discipline/ with the qualities of seeking truth in a particular domain.
Press the Show help button to figure out how to italicize and bold and all that.
Was this intended to be a response to a different comment?
No, it’s just that FluffyC used slashes to indicate that the word in the middle was to be italicized, so she probably hadn’t read the help section, and I thought that reading the help section would, well, help FluffyC.
I don’t think the fact that everyone has a different checklist is the point. In this perfect, hypothetical world, everyone has the same checklist.
I think that the point is that the checklist is meaningless, like having a literary genre called y-ism and having “The letter ‘y’ constitutes 1/26th of the text” on the checklist.
Even if we can identify y-ism with our senses, the distinction doesn’t “mean” anything. It has zero application outside of the world of y-ism. It floats.
I agree that literary critical terms are fuzzy in the same sense as “kind”, but I don’t think they’re necessarily any more fuzzy.
That is an important point. It is not so easy to come up with a criterion of “meaningfulness” that excludes the stuff rationalists don’t like, but doesn’t exclude a lot of everyday terminology at the same time.
I could add that others have their own criteria of “meaningfulness”. Humanities types aren’t very bothered about questions like how many moons Saturn has, because it doesn’t affect them or their society. The common factor between the two kinds of “meaningfulness” seems to be that they amount to “the stuff I personally consider to be worth bothering about”.
A concern with objective meaningfulness is still a subjective concern.
FWIW, the Moby Dick example is less stupid than you paint it, given the recurrence of whiteness as an attribute of things special or good in western culture—an idea that pre-dates the invention of race. I think a case could be made out that (1) the causality runs from whiteness as a special or magical attribute, to its selection as a pertinent physical feature when racism was being invented (considering that there were a number of parallel candidates, like phrenology, that didn’t do so well memetically), and (2) in a world that now has racism, the ongoing presence of valuing white things as special has been both consciously used to reinforce it (cf. the KKK’s name and its connotations) and unconsciously reinforces it by association.
FWIW, the Moby Dick example is less stupid than you paint it, given the recurrence of whiteness as an attribute of things special or good in western culture—an idea that pre-dates the invention of race.
I can’t resist. I think you should read Moby Dick. Whiteness in that novel is not used as any kind of symbol for good:
This elusive quality it is, which causes the thought of whiteness, when divorced from more kindly associations, and coupled with any object terrible in itself, to heighten that terror to the furthest bounds. Witness the white bear of the poles, and the white shark of the tropics; what but their smooth, flaky whiteness makes them the transcendent horrors they are? That ghastly whiteness it is which imparts such an abhorrent mildness, even more loathsome than terrific, to the dumb gloating of their aspect. So that not the fierce-fanged tiger in his heraldic coat can so stagger courage as the white-shrouded bear or shark.
If you want to talk about racism and Moby Dick, talk about Queequeg!
Not that white animals aren’t often associated with good things, but this is not unique to western culture:
So in spring, when appears the constellation Visakha, the Bodhisatwa, under the appearance of a young white elephant of six defenses, with a head the color of cochineal, with tusks shining like gold, perfect in his organs and limbs, entered the right side of his mother, and she, by means of a dream, was conscious of the fact.
WMSCI, the World Multiconference on Systemics, Cybernetics and Informatics, is a computer science and engineering conference that has occurred annually since 1995. [...] WMSCI attracted publicity of a less favorable sort in 2005 when three graduate students at MIT succeeded in getting a paper accepted as a “non-reviewed paper” to the conference that had been randomly generated by a computer program called SCIgen
I think you are playing to what you assume are our prejudices.
Suppose X is a meaningless predicate from a humanities subject. Suppose you used it, not a simulacrum. If it’s actually meaningless by the definition I give elsewhere in the thread, nobody will be able to name any Y such that p(X|Y) differs from p(X|¬Y) after a Bayesian update. Do you actually expect that, for any significant number of terms in humanities subjects, you would find no Y, even after grumpy defenders of X popped up in the thread? Or did you choose a made-up term so as to avoid flooding the thread with Y-proponents? If you expect people to propose candidates for Y, you aren’t really expecting X to be meaningless.
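Spelled out with a toy joint distribution (the numbers are invented), X fails this meaninglessness test as soon as someone names one observable Y for which the two conditional probabilities come apart. A minimal sketch:

```python
# Minimal sketch (invented probabilities) of the test: is there any Y with p(X|Y) != p(X|not-Y)?

joint = {  # joint probabilities over (X, Y)
    (True, True): 0.30,
    (True, False): 0.20,
    (False, True): 0.10,
    (False, False): 0.40,
}

def p(x=None, y=None):
    return sum(pr for (xv, yv), pr in joint.items()
               if (x is None or xv == x) and (y is None or yv == y))

p_x_given_y = p(x=True, y=True) / p(y=True)          # 0.30 / 0.40 = 0.75
p_x_given_not_y = p(x=True, y=False) / p(y=False)    # 0.20 / 0.60 = 0.33...

# They differ, so this particular X would carry information about Y and would
# not count as meaningless under the definition above.
print(p_x_given_y, p_x_given_not_y)
```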
The Sokal hoax only proves one journal can be tricked by fake jargon. Not that bona fide jargon is meaningless.
I’m sure there’s a lot of nonsense, but “post-utopian” appears to have a quite ordinary sense, despite the lowness of the signal to noise ratio of some of those hits. A post-utopian X (X = writer, architect, hairdresser, etc.) is one who is working after, and in reaction against, a period of utopianism, i.e. belief in the perfectibility of the world by man. Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.
Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.
By this definition, wouldn’t the belief that science will not lead to perfection but we can still look forward to more of what we already have (rather than ruin and destruction) be equally post-utopian?
Not as I see the word used, which appears to involve the sense of not merely less enthusiastic than, but turning away from. You can’t make a movement on the basis of “yes, but not as sparkly”.
“Post-utopian” is a real term, and even in the absence of examples of its use, it is straightforward to deduce its (likely) meaning, since “post-” means “subsequent to, in reaction to” and “utopian” means “believing in or aiming at the perfecting of polity or social conditions”. So post-utopian texts are those which react against utopianism, express skepticism at the perfectibility of society, and so on. This doesn’t seem like a particularly difficult idea and it is not difficult to identify particular texts as post-utopian (for example, Koestler’s Darkness at Noon, Huxley’s Brave New World, or Nabokov’s Bend Sinister).
So I think you need to pick a better example: “post-utopian” doesn’t cut it. The fact that you have chosen a weak example increases my skepticism as to the merits of your general argument. If meaningless terms are rife in the field of English literature, as you seem to be suggesting, then it should be easy for you to pick a real one.
There is the literature professor’s belief, the student’s belief, and the sentence “Carol is ‘post-utopian’”. While the sentence can be applied to both beliefs, the beliefs themselves are quite different beasts. The professor’s belief is something that carves literature-space in a way most other literature professors do. Totally meaningful. The student’s belief, on the other hand, is just a label over a set of authors the student has scarcely read. Going a level deeper, we can find an explanation for this label, which turns out to be just another label (“colonial alienation”), and then it stops. From Eliezer’s main post (emphasis mine):
Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all ‘post-utopians’, which you can tell because their writings exhibit signs of ‘colonial alienation’. For most college students the typical result will be that their brain’s version of an object-attribute list will assign the attribute ‘post-utopian’ to the authors Carol, Danny, and Elaine.
The professor has a meaningful belief.
Unable to express it properly (which may not be his fault), he gives a mysterious explanation.
That mysterious explanation generates a floating belief in the student’s mind.
Well, not that floating. The student definitely expects a sensory experience: grades. The problem isn’t the lack of expectations, but that they’re based on an overly simplified model of the professor’s beliefs, with no direct ties to the writings themselves, only to the authors’ names. Remove professors and authors’ names, and the students’ beliefs really are floating: they will have no way to tie them to reality, that is, to the writings. And if they try anyway, I bet their carvings won’t agree.
Now when the professor grades an answer, only a label will be available (“post-utopian”, or whatever). This label probably reflects the student’s belief directly. That answer will indeed be quickly pattern-matched against a label inside the professor’s brain, generating a quick “right” or “wrong” response (and the corresponding motion of the hand that wields the red pen). Just as drawn in the picture, actually.
However, the label in the professor’s head is not a floating belief like the student’s. It’s a cached thought, based on a much more meaningful belief (or so I hope).
Okay, now that I recognize your name, I see you’re not exactly a newcomer here. Sorry if I didn’t tell you anything you don’t know. But it did seem like you conflated mysterious answers (like “phlogiston”) and floating beliefs (actual neural constructs). Hope this helped.
If that is what Eliezer meant, then it was confusing to use an example for which many people suspect that the concept itself is not meaningful. It just generates distraction, like the “Is Nixon a pacifist?” example in the original Politics is the mind-killer post (and actually, the meaningfulness of post-colonialism as a category might be a political example in the wide sense of the word). He could have used something from physics like “heat is transmitted by convection”, or really any other topic that a student can learn by rote without real understanding.
I don’t think Eliezer meant all of what I have written (edit: yep, he didn’t). I was mainly analysing (and defending) the example to death, under Daenerys’ proposed assumption that the belief in the professor’s head is not floating. More likely, he picked something familiar that would make us think something like “yeah, if those are just labels, that’s no use”.¹
By the way, is there any good example? Something that (i) clearly is meaningful, and (ii) lets us empathise with those who nevertheless extract a floating belief out of it? I’m not sure. I for one don’t empathise with the students who merely learn by rote, for I myself don’t like loosely connected belief networks: I always wanted to understand.
Also, Eliezer wasn’t very explicit about the distinction between a statement, embodied in text, images, or whatever our senses can process, and belief, embodied in a heap of neurons. But this post is introductory. It is probably not very useful to make the distinction so soon. More important is to realize that ideas are not floating in the void, but are embodied in a medium: paper, computers… and of course brains.
[1] We’re not familiar with “post-utopianism” and “colonial alienation” specifically, but we do know the feeling generated by such literary mumbo jumbo.
Thank you! Your post helped me finally to understand what it was that I found so dissatisfying with the way I’m being taught chemistry. I’m not sure right now what I can do to remedy this, but thank you for helping me come to the realization.
If the teacher does not have a precise codification of what makes a writer “post-utopian”, then how should he teach it to students?
I would say the best way is a mix of demonstrating examples (“Alice is not a post-utopian; Carol is a post-utopian”), and offering generalizations that are correlated with whether the author is a post-utopian (“colonial alienation”). This is a fairly slow method of instruction, at least in some cases where the things being studied are complicated, but it can be effective. While the student’s belief may not yet be as well-formed as the professor’s, I would hesitate to call it meaningless. (More specifically, I would agree denotatively but object connotatively to such a classification.) I would definitely not call the belief useless, since it forms the basis for a later belief that will be meaningful. If a route to meaningful, useful belief B goes through “meaningless” belief A, then I would say that A is useful, and that calling A meaningless produces all the wrong sorts of connotations.
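As a minimal sketch of that teaching loop (the authors and the feature are invented): the student’s early model can be as thin as “predict from the one hinted-at feature”, which is crude, but already more than a memorized list of names.

```python
# Minimal sketch (invented authors/feature): labeled examples plus one correlated generalization.

labeled_examples = {
    "Alice": ("not post-utopian", {"colonial_alienation": False}),
    "Carol": ("post-utopian", {"colonial_alienation": True}),
}

def student_predict(features, examples):
    # Predict the label that co-occurred with the same feature value in the examples given in class.
    for label, feats in examples.values():
        if feats["colonial_alienation"] == features["colonial_alienation"]:
            return label
    return "no idea"

# A new, unlabeled author: the student now has something to go on beyond names.
print(student_predict({"colonial_alienation": True}, labeled_examples))  # post-utopian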
To over-extend your metaphor, dubstep is electronic music with a breakbeat and a certain BPM. Bassnectar described it in an interview once as hip-hop beats at half time in breakbeat BPMs.
It’s really easy to tell the difference between dubstep and house, because dubstep has a broken kick..kickSNARE beat, while house has a 4⁄4 kick.kick.kick.kick beat.
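That rule is already enough of a System 2 algorithm to write down. A deliberately crude sketch (real tracks obviously aren’t strings of beat names):

```python
# Minimal sketch: the kick-pattern rule above, applied to a bar written as text.

def guess_genre(bar):
    if bar == "kick.kick.kick.kick":   # four-on-the-floor
        return "house"
    if "kickSNARE" in bar:             # broken beat with the snare landing mid-bar
        return "dubstep (roughly)"
    return "no idea"

print(guess_genre("kick.kick.kick.kick"))  # house
print(guess_genre("kick..kickSNARE"))      # dubstep (roughly)
```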
(Interestingly, the dubstep you seem to describe is what people who listened to earlier dubstep commonly call “brostep,” and was inspired by one Rusko song (“Cockney Thug,” if I remember correctly).)
The point I mean to make by this is that most concepts do have system 2 algorithms that identify them, even if most people on LW would disagree with the social groups that advance those concepts.
I have many friends and comrades that are liberal arts students, and most of the time, if they said something like “post-utopian” or “colonial alienation” they’d have a coherent system-2 algorithm for identifying which authors or texts are more or less post-utopian.
Really, I agree that this is a bad example, because there are two things going on: the students have to guess the teacher’s password (which is the same as if you had Skrillex teaching MUSC 202: Dubstep Identification, and only accepted “songs with that heavy wobble bass shit” as “real dubstep, bro”), and there’s an alleged unspoken conspiracy of academics to have a meaningless classifier (which is maybe the same as subgenres of hard noise music, where there truly is no difference between typical songs in each subgenre, and only artist self-identification or consensus among raters can be used as a grouping strategy).
As others have said better than me, the Sokal affair seems to be better evidence of how easy it is to publish a bad paper than it is evidence that postmodernism is a flawed field.
Example: an Irishman arguing with a Mongolian over what dragons look like.
When the Irishman is a painter and the Mongolian a dissatisfied customer, does their disagreement have meaning?
In that case, they’re arguing about the wrong thing. Their real dispute is that the painting isn’t what the Mongolian wanted, as a result of a miscommunication which neither of them noticed until one of them had spent money (or promised to) and the other had spent days painting.
So, no, even in that situation, there’s no such thing as a dragon, so they might as well be arguing about the migratory patterns of unicorns.
While the English profs may consistently classify writing samples as post utopian or not, the use of the label “post utopian” should be justified by the English meanings of “post” and “utopian” in some way. “Post” and “utopian” are concepts with meaning; they’re not just nonsense sounds available for use as labels.
If you have no conceptual System 1 algorithm for “post utopian”, and just have some consistent System 2 algorithm, it’s a conceptual confusion to use a compound of labels for concepts that may have nothing at all to do with your underlying System 2 defined concept.
Likely the confusion serves an intellectually dishonest purpose, as in euphemism. When you see this kind of nonsense, there is some politically motivated obfuscation nearby.