I dislike the “post utopian” example, and here’s why:
Language is pretty much a set of labels. When we call something “white”, we are saying it has some property of “whiteness.” NOW we can discuss wavelengths and how light works, or whatnot, but 200 years ago, they had no clue. They could still know that snow is white, though. At the same time, even with our knowledge of how colors work, we can still have difficulties knowing exactly where the label “white” ends, and grey or yellow begins.
Say I’m carving up music-space. I can pretty easily classify the differences between Classical and Rap, in ways that are easy to follow. I could say that classical features a lot of instrumentation, and rap features rhythmic language, or something. But if lots of people spend their whole lives studying music, they’re going to end up breaking music-space into much smaller pieces. For example, dubstep and house.
Now, I can RECOGNIZE dubstep when I hear it, but if you asked me to teach you what it was, I would have difficulties. And if I’m a learned professor, I can’t exactly say “It’s the one that goes, like, WOPWOPWOPWOP iiinnnnnggg,” so I’ll use jargon like “synthetic rhythm,” or something.
But not having a complete explainable System 2 algorithm for “How to Tell if it’s Dubstep” doesn’t mean that my System 1 can’t readily identify it. In fact, it’s probably easier to just listen to a bunch of music until your System 1 can identify the various genres, even if your System 2 can’t codify it. The example treats the fact that your professor can’t really codify “post utopianism” as meaning that it’s not “true”. (This example has been used in other sequence posts, and I disagreed with it then too.)
Have someone write a bunch of short stories. Give them to English Literature professors. If they tend to agree which ones are post utopian, and which ones aren’t, then they ARE in fact carving up literature-space in a meaningful way. The fact that they can’t quite articulate the distinction doesn’t make it any less true than knowing that snow was white before you knew about wavelengths. They’re both labels, we just understand one better.
Anyways, I know it’s just an example, but without a better example, I can’t really understand the question well enough to think of a relevant answer.
I think Eliezer is taking it as a given that English college professors who talk like that are indeed talking without connection to anticipated experience. This may not play effectively to those he is trying to teach, and as you say, may not even be true.
In particular, “post-utopian” is not a real term so far as I know, and I’m using it as a stand-in for literary terms that do in fact have no meaning. If you think there are none of those, Alan Sokal would like to have a word with you.
There’s a sense in which a lot of fuzzy claims are meaningless: for example, it would be hard for a computer to evaluate “Socrates is kind” even if the computer could easily evaluate more direct claims like “Socrates is taller than five feet”. But “kind” isn’t really meaningless; it would just be a lot of work to establish exactly what goes into saying “kind” and exactly where the cutoff point between “kind” and “not so kind” is.
I agree that literary critical terms are fuzzy in the same sense as “kind”, but I don’t think they’re necessarily any more fuzzy. For example, replacing “post-utopian” with its likely inspiration “post-colonial”, I don’t know much about literature, but I feel pretty okay designating Salman Rushdie as “post-colonial” (since his books very often take place against the backdrop of the issues surrounding British decolonization of India) and J. K. Rowling as “not post-colonial” (since her books don’t deal with issues surrounding decolonization at all.)
Likewise, even though “post-utopian” was chosen specifically to be meaningless, I can say with confidence that Sir Thomas More’s Utopia was not post-utopian, and I bet most other people will agree with me.
The Sokal Hoax to me was less about totally disproving all literary critical terms, and more about showing that it’s really easy to get a paper published that no one understands. People elsewhere in the thread have already given examples of Sokalesque papers in physics, computer science, etc that got published, even though those fields seem pretty meaningful.
Literary criticism does have a bad habit of making strange assertions, but I don’t think they hinge on meaningless terms. A good example would be deconstruction of various works to point out the racist or sexist elements within. For example, “It sure is suspicious that Moby Dick is about a white whale, as if Melville believed that only white animals could possibly be individuals with stories of their own.”
The claim that Melville was racist when writing Moby Dick seems potentially meaningful—for example, we could go back in time, put him under truth serum, and ask him whether that was intentional. Even if it was wholly unconscious, it still implies that (for example) if we simulate a society without racism, it will be less likely to produce books like Moby Dick, or that if we pick apart Melville’s brain we can draw some causal connection between the racism to which he was exposed and the choice to have Moby Dick be white.
However, if I understand correctly literary critics believe these assertions do not hinge on authorial intent; that is, Melville might not have been trying to make Moby Dick a commentary on race relations, but that doesn’t mean a paper claiming that Moby Dick is a commentary on race relations should be taken less seriously.
Even this might not be totally meaningless. If an infinite monkey at an infinite typewriter happened to produce Animal Farm, it would still be the case that, by coincidence, it was a great metaphor for Communism. A literary critic (or primatologist) who wrote a paper saying “Hey, Animal Farm can increase our understanding and appreciation of the perils of Communism” wouldn’t really be talking nonsense. In fact, I’d go so far as to say that they’re (kind of) objectively correct, whereas even someone making the relatively stupid claim about Moby Dick above might still be right that the book can help us think about our assumptions about white people.
If I had to criticize literary criticism, I would have a few vague objections. First, that they inflate terms—instead of saying “Moby Dick vaguely reminds me of racism”, they say “Moby Dick is about racism.” Second, that even if their terms are not meaningless, their disputes very often are: if one critic says “Moby Dick is about racism” and another critic says “No it isn’t”, and what the first one means is “Moby Dick vaguely reminds me of racism”, then arguing about it is a waste of time. My third and most obvious complaint is opportunity costs: to me at least the whole field of talking about how certain things vaguely remind you of other things seems like a waste of resources that could be turned into perfectly good paper clips.
But these seem like very different criticisms than arguing that their terms are literally meaningless. I agree that to students they may be meaningless and they might compensate by guessing the teacher’s password, but this happens in every field.
I liked your comment and have a half-formed metaphor for you to either pick apart or develop:
LW/rationalist types tend towards hard sciences. This requires more System 2 reasoning. Their fields are like computer programs. Every step makes sense, and is understood.
Humanities tends toward more System 1 pattern recognition. This is more akin to a neural network. Even if you are getting the “right” answer, it is coming out of a black box.
Because the rationalist types can’t see the algorithm, they assume it can’t be “right”.
I like the idea that this comment produces in my mind. But nitpickingly, a neural network is a type of computer program. And most of the professional bollocks-talkers of my acquaintance think very hard in system-two like ways about the rubbish they spout.
It’s hard to imagine a system-one academic discipline. Something like ‘Professor of telling whether people you are looking at are angry’, or ‘Professor of catching cricket balls’....
I wonder if you might be thinking more of the difference between a computer program that one fully understands (a rare thing indeed), and one which is only dimly understood, and made up of ‘magical’ parts even though its top level behaviour may be reasonably predictable (which is how most programmers perceive most programs).
Well, in the case of answers to questions like that in the humanities, what does the word ‘right’ actually mean? If we say a particular author is ‘post utopian’, what does it actually mean for the answer to that question to be ‘yes’ or ‘no’? It’s just a classification that we invented. And like all classification groups, there is a set of characteristic rules that determine whether the author is post utopian or not. I imagine it as a checklist of features which gets ticked off as a person reads the book. If all the items in the checklist are ticked, then the author is post utopian. If not, then the author is not.
The problem with this is that different people have different items in their checklist and differ in their opinion on how many items in the list need to be checked for the author to be classified as post utopian. You can pick any literary classification and this will be the case. There will never be a consensus on all the items in the checklist. There will always be a few points that not everybody agrees on. This makes me think that, objectively speaking, there is no ‘absolutely right’ or ‘absolutely wrong’ answer to a question like that.
In hard science, on the other hand, there is always an absolutely right answer. If we say “Protons and neutrons are oppositely charged,” there is an answer that is right because, no matter what my beliefs, experiment is the final arbiter. Nobody who follows through the logical steps can deny that they are oppositely charged without making an illogical leap.
In the literary classification, you or your neural network can go through logical steps and still arrive at an answer that is not the same for everybody.
EDIT: I meant “protons and electrons are oppositely charged” not “protons and neutrons”. Sorry!
One: Protons and neutrons aren’t oppositely charged.
Two: You’re using particle physics as an example of an area where experiment is the final arbiter; you might not want to do that. Scientific consensus has more than a few established beliefs in that field that are untested and border on untestable.
Honestly, he’d be hard pressed to find a field that has better tested beliefs and greater convergence of evidence. The established beliefs you mention are a problem everywhere, and pretty much no field is backed with as much data as particle physics.
Fair enough; I had wanted to say that but don’t have sufficiently intimate awareness of every academic field to be comfortable doing so. I think it works just as well to illustrate that we oughtn’t confuse passing flaws in a field with fundamental ones, or the qualities of a /discipline/ with the qualities of seeking truth in a particular domain.
No, it’s just that FluffyC used slashes to indicate that the word in the middle was to be italicized, so she probably hadn’t read the help section, and I thought that reading the help section would, well, help FluffyC.
I don’t think the fact that everyone has a different checklist is the point. In this perfect, hypothetical world, everyone has the same checklist.
I think that the point is that the checklist is meaningless, like having a literary genre called y-ism and having “The letter ‘y’ constitutes 1/26th of the text” on the checklist.
Even if we can identify y-ism with our senses, the distinction doesn’t “mean” anything. It has zero application outside of the world of y-ism. It floats.
I agree that literary critical terms are fuzzy in the same sense as “kind”, but I don’t think they’re necessarily any more fuzzy.
That is an important point. It is not so easy to come up with a criterion of “meaningfulness” that excludes the stuff rationalists don’t like, but doesn’t exclude a lot of everyday terminology at the same time.
I could add that others have their own criteria of “meaningfulness”. Humanities types aren’t very bothered about questions like how many moons Saturn has, because it doesn’t affect them or their society. The common factor between both kinds of “meaningfulness” seems to be that they amount to “the stuff I personally consider to be worth bothering about”.
A concern with objective meaningfulness is still a subjective concern.
FWIW, the Moby Dick example is less stupid than you paint it, given the recurrence of whiteness as an attribute of things special or good in western culture—an idea that pre-dates the invention of race. I think a case could be made out that (1) the causality runs from whiteness as a special or magical attribute, to its selection as a pertinent physical feature when racism was being invented (considering that there were a number of parallel candidates, like phrenology, that didn’t do so well memetically), and (2) in a world that now has racism, the ongoing presence of valuing white things as special has been both consciously used to reinforce it (cf the KKK’s name and its connotations) and unconsciously reinforces it by association.
FWIW, the Moby Dick example is less stupid than you paint it, given the recurrence of whiteness as an attribute of things special or good in western culture—an idea that pre-dates the invention of race.
I can’t resist. I think you should read Moby Dick. Whiteness in that novel is not used as any kind of symbol for good:
This elusive quality it is, which causes the thought of whiteness, when divorced from more kindly associations, and coupled with any object terrible in itself, to heighten that terror to the furthest bounds. Witness the white bear of the poles, and the white shark of the tropics; what but their smooth, flaky whiteness makes them the transcendent horrors they are? That ghastly whiteness it is which imparts such an abhorrent mildness, even more loathsome than terrific, to the dumb gloating of their aspect. So that not the fierce-fanged tiger in his heraldic coat can so stagger courage as the white-shrouded bear or shark.
If you want to talk about racism and Moby Dick, talk about Queequeg!
Not that white animals aren’t often associated with good things, but this is not unique in western culture:
So in spring, when appears the constellation Visakha, the Bodhisatwa, under the appearance of a young white elephant of six defenses, with a head the color of cochineal, with tusks shining like gold, perfect in his organs and limbs, entered the right side of his mother, and she, by means of a dream, was conscious of the fact.
WMSCI, the World Multiconference on Systemics, Cybernetics and Informatics, is a computer science and engineering conference that has occurred annually since 1995. [...] WMSCI attracted publicity of a less favorable sort in 2005 when three graduate students at MIT succeeded in getting a paper accepted as a “non-reviewed paper” to the conference that had been randomly generated by a computer program called SCIgen
I think you are playing to what you assume are our prejudices.
Suppose X is a meaningless predicate from a humanities subject, and suppose you had used it rather than a made-up simulacrum. If it’s actually meaningless by the definition I give elsewhere in the thread, nobody will be able to name any Y such that p(X|Y) differs from p(X|¬Y) after a Bayesian update. Do you actually expect that, for any significant number of terms in humanities subjects, you would find no Y, even after grumpy defenders of X popped up in the thread? Or did you choose a made-up term so as to avoid flooding the thread with Y-proponents? If you expect people to propose candidates for Y, you aren’t really expecting X to be meaningless.
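To make that criterion concrete, here is a toy sketch in Python (the joint probabilities are invented; the only point is the check itself):

```python
# Toy sketch of the criterion above: X counts as meaningful if somebody can
# name an observable Y whose presence changes our credence in X.
# The joint probabilities below are invented purely for illustration.
joint = {
    (True, True): 0.30,   # (X, Y)
    (True, False): 0.10,
    (False, True): 0.20,
    (False, False): 0.40,
}

def p(x=None, y=None):
    """Marginal/joint probability read off the toy table."""
    return sum(pr for (xv, yv), pr in joint.items()
               if (x is None or xv == x) and (y is None or yv == y))

p_x_given_y = p(x=True, y=True) / p(y=True)        # P(X | Y)  = 0.6
p_x_given_not_y = p(x=True, y=False) / p(y=False)  # P(X | ¬Y) = 0.2

# Since these differ for this Y, observing Y moves our credence in X,
# so X would not be "meaningless" by the criterion above.
print(p_x_given_y, p_x_given_not_y)
```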
The Sokal hoax only proves one journal can be tricked by fake jargon. Not that bona fide jargon is meaningless.
I’m sure there’s a lot of nonsense, but “post-utopian” appears to have a quite ordinary sense, despite the lowness of the signal to noise ratio of some of those hits. A post-utopian X (X = writer, architect, hairdresser, etc.) is one who is working after, and in reaction against, a period of utopianism, i.e. belief in the perfectibility of the world by man. Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.
Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.
By this definition, wouldn’t the belief that science will not lead to perfection but we can still look forward to more of what we already have (rather than ruin and destruction) be equally post-utopian?
Not as I see the word used, which appears to involve the sense of not merely less enthusiastic than, but turning away from. You can’t make a movement on the basis of “yes, but not as sparkly”.
“Post-utopian” is a real term, and even in the absence of examples of its use, it is straightforward to deduce its (likely) meaning, since “post-” means “subsequent to, in reaction to” and “utopian” means “believing in or aiming at the perfecting of polity or social conditions”. So post-utopian texts are those which react against utopianism, express skepticism at the perfectibility of society, and so on. This doesn’t seem like a particularly difficult idea and it is not difficult to identify particular texts as post-utopian (for example, Koestler’s Darkness at Noon, Huxley’s Brave New World, or Nabokov’s Bend Sinister).
So I think you need to pick a better example: “post-utopian” doesn’t cut it. The fact that you have chosen a weak example increases my skepticism as to the merits of your general argument. If meaningless terms are rife in the field of English literature, as you seem to be suggesting, then it should be easy for you to pick a real one.
There is the literature professor’s belief, the student’s belief, and the sentence “Carol is ‘post-utopian’”. While the sentence can be applied to both beliefs, the beliefs themselves are quite different beasts. The professor’s belief is something that carves literature-space in a way most other literature professors do. Totally meaningful. The student’s belief, on the other hand, is just a label over a set of authors the student has scarcely read. Going a level deeper, we can find an explanation for this label, which turns out to be just another label (“colonial alienation”), and then it stops. From Eliezer’s main post (emphasis mine):
Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all ‘post-utopians’, which you can tell because their writings exhibit signs of ‘colonial alienation’. For most college students the typical result will be that their brain’s version of an object-attribute list will assign the attribute ‘post-utopian’ to the authors Carol, Danny, and Elaine.
That mysterious explanation generates a floating belief in the student’s mind.
Well, not that floating. The student definitely expects a sensory experience: grades. The problem isn’t the lack of expectations, but that they’re based on an overly simplified model of the professor’s beliefs, with no direct ties to the writings themselves – only to the authors’ names. Remove professors and authors’ names, and the students’ beliefs are really floating: they will have no way to tie them to reality – the writing. And if they try anyway, I bet their carvings won’t agree.
Now when the professor grades an answer, only a label will be available (“post-utopian”, or whatever). This label probably reflects the student’s belief directly. That answer will indeed be quickly pattern-matched against a label inside the professor’s brain, generating a quick “right” or “wrong” response (and the corresponding motion in the hand that wields the red pen). Just as drawn in the picture, actually.
However, the label in the professor’s head is not a floating belief like the student’s. It’s a cached thought, based on a much more meaningful belief (or so I hope).
Okay, now that I recognize your name, I see you’re not exactly a newcomer here. Sorry if I didn’t tell you anything you didn’t already know. But it did seem like you conflated mysterious answers (like “phlogiston”) and floating beliefs (actual neural constructs). Hope this helped.
If that is what Eliezer meant, then it was confusing to use an example for which many people suspect that the concept itself is not meaningful. It just generates distraction, like the “Is Nixon a pacifist?” example in the original Politics is the Mind-Killer post (and actually, the meaningfulness of post-colonialism as a category might be a political example in the wide sense of the word). He could have used something from physics like “Heat is transmitted by convection”, or really any other topic that a student can learn by rote without real understanding.
I don’t think Eliezer meant all what I have written (edit: yep, he didn’t). I was mainly analysing (and defending) the example to death, under Daenerys’ proposed assumption that the belief in the professor’s head is not floating. More likely, he picked something familiar that would make us think something like “yeah, if those are just labels, that’s no use”.¹
By the way, is there any good example? Something that (i) clearly is meaningful, and (ii) lets us empathise with those who nevertheless extract a floating belief out of it? I’m not sure. I for one don’t empathise with the students who merely learn by rote, for I myself don’t like loosely connected belief networks: I always wanted to understand.
Also, Eliezer wasn’t very explicit about the distinction between a statement, embodied in text, images, or whatever our senses can process, and belief, embodied in a heap of neurons. But this post is introductory. It is probably not very useful to make the distinction so soon. More important is to realize that ideas are not floating in the void, but are embodied in a medium: paper, computers… and of course brains.
[1] We’re not familiar with “post-utopianism” and “colonial alienation” specifically, but we do know the feeling generated by such literary mumbo jumbo.
Thank you! Your post helped me finally to understand what it was that I found so dissatisfying with the way I’m being taught chemistry. I’m not sure right now what I can do to remedy this, but thank you for helping me come to the realization.
If the teacher does not have a precise codification of what makes a writer “post-utopian”, then how should he teach it to students?
I would say the best way is a mix of demonstrating examples (“Alice is not a post-utopian; Carol is a post-utopian”), and offering generalizations that are correlated with whether the author is a post-utopian (“colonial alienation”). This is a fairly slow method of instruction, at least in some cases where the things being studied are complicated, but it can be effective. While the student’s belief may not yet be as well-formed as the professor’s, I would hesitate to call it meaningless. (More specifically, I would agree denotatively but object connotatively to such a classification.) I would definitely not call the belief useless, since it forms the basis for a later belief that will be meaningful. If a route to meaningful, useful belief B goes through “meaningless” belief A, then I would say that A is useful, and that calling A meaningless produces all the wrong sorts of connotations.
To over-extend your metaphor, dubstep is electronic music with a breakbeat and certain BPM. Bassnectar described it in an interview once as hip-hop beats at half time in breakbeat BPMs.
It’s really easy to tell the difference between dubstep and house, because dubstep has a broken kick..kickSNARE beat, while house has a 4⁄4 kick.kick.kick.kick beat.
(Interestingly, the dubstep you seem to describe is what people who listened to earlier dubstep commonly call “brostep,” and was inspired by one Rusko song (“Cockney Thug,” if I remember correctly).)
The point I mean to make by this is that most concepts do have system 2 algorithms that identify them, even if most people on LW would disagree with the social groups that advance those concepts.
I have many friends and comrades that are liberal arts students, and most of the time, if they said something like “post-utopian” or “colonial alienation” they’d have a coherent system-2 algorithm for identifying which authors or texts are more or less post-utopian.
Really, I agree that this is a bad example, because there are two things going on: the students have to guess the teacher’s password (which is the same as if you had Skrillex teaching MUSC 202: Dubstep Identification, and only accepted “songs with that heavy wobble bass shit” as “real dubstep, bro”), and there’s an alleged unspoken conspiracy of academics to have a meaningless classifier (which is maybe the same as subgenres of hard noise music, where there truly is no difference between typical songs in each subgenre, and only artist self-identification or consensus among raters can be used as a grouping strategy).
As others have said better than me, the Sokal affair seems to be better evidence of how easy it is to publish a bad paper than it is evidence that postmodernism is a flawed field.
In that case, they’re arguing about the wrong thing. Their real dispute is that the painting isn’t what the Mongolian wanted as a result of a miscommunication which neither of them noticed until one of them had spent money (or promised to) and the other had spent days painting.
So, no, even in that situation, there’s no such thing as a dragon, so they might as well be arguing about the migratory patterns of unicorns.
While the English profs may consistently classify writing samples as post utopian or not, the use of the label “post utopian” should be justified by the english meanings of “post” and “utopian” in some way. “Post” and “utopian” are concepts with meaning, they’re not just nonsense sounds available for use as labels.
If you have no conceptual System 1 algorithm for “post utopian”, and just have some consistent System 2 algorithm, it’s a conceptual confusion to use a compound of labels for concepts that may have nothing at all to do with your underlying System 2 defined concept.
Likely the confusion serves an intellectually dishonest purpose, as in euphemism. When you see this kind of nonsense, there is some politically motivated obfuscation nearby.
A set of beliefs is not like a bag of sand, individual beliefs unconnected with each other, about individual things. They are connected to each other by logical reasoning, like a lump of sandstone. Not all beliefs need to have a direct connection with experience, but as long as pulling on the belief pulls, perhaps indirectly, on anticipated experience, the belief is meaningful.
When a pebble of beliefs is completely disconnected from experience, or when the connection is so loose that it can be pulled around arbitrarily without feeling the tug of experience, then we can pronounce it meaningless. The pebble may make an attractive paperweight, with an intricate structure made of elements that also occur in meaningful beliefs, but that’s all it can be. Music of the mind, conveying a subjective impression of deep meaning, without having any.
For the hypothetical photon disappearing in the far-far-away, no observation can be made on that photon, but we have other observations leading to beliefs about photons in general, according to which they cannot decay. That makes it meaningful to say that the far away photon acts in the same way. If we discovered processes of photon decay, it would still be meaningful, but then we would believe it could be false.
Interesting idea. But how did you know how to phrase your original beliefs about photons? You could just as easily have decided to describe photons as “photons obey Maxwell’s equations up to an event horizon and cease to exist outside of it”. You could then add other beliefs like “nothing exists outside of the event horizon” which are incompatible with the photon continuing to exist.
In other words, your beliefs cannot afford to be independent of one another, but you could build two different belief systems, one in which the photon continues to exist and one in which it does not, that make identical predictions about experiences. Is it meaningful to ask which of these belief systems is true?
But how did you know how to phrase your original beliefs about photons? You could just as easily have decided to describe photons as “photons obey Maxwell’s equations up to an event horizon and cease to exist outside of it”.
Systems of belief are more like a lump of sandstone than a pile of sand, but they are also more like a lump of sandstone, a rather friable lump, than a lump of marble. They are not indissoluble structures that can be made in arbitrary shapes, the whole edifice supported by an attachment at one point to experience.
Experience never brought hypotheses such as you suggest to physicists’ attention. The edifice as built has no need of it, and it cannot be bolted on: it will just fall off again.
But these hypotheses have just been brought to our attention—just now. In fact, the claim that these hypotheses produce indistinguishable physics might even be useful. If I want to simulate my experiences, I can save on computational power by knowing that I no longer have to keep track of things that have gone behind an event horizon. The real question is why the standard set of beliefs should be more true or meaningful than this new one. A simple appeal to what physicists have so far conjectured is not in general sufficient.
Which meaningful beliefs to consider seriously is an issue separate from the original koan, which asks which possible beliefs are meaningful. I think we are all agreeing that a belief about the remote photon’s extinction or not is a meaningful one.
I don’t see how you can claim that the belief that the photon continues to exist is a meaningful belief without also allowing the belief that the photon does not continue to exist to be a meaningful belief. Unless you do something along the lines of taking Kolmogorov complexity into account, these beliefs seem to be completely analogous to each other. Perhaps to phrase things more neutrally, we should be asking if the question “does the photon continue to exist?” is meaningful. On the one hand, you might want to say “no” because the outcome of the question is epiphenomenal. On the other hand, you would like this question to be meaningful since it may have behavioral implications.
I don’t see how you can claim that the belief that the photon continues to exist is a meaningful belief without also allowing the belief that the photon does not continue to exist to be a meaningful belief.
They’re both meaningful. There are reasons to reject one of them as false, but that’s a separate issue.
OK. I think that I had been misreading some of your previous posts. Allow me to rephrase my objection.
Suppose that our beliefs about photons were rewritten as “photons not beyond an event horizon obey Maxwell’s Equations”. Making this change to my belief structure now leaves beliefs about whether or not photons still exist beyond an event horizon unconnected from my experiences. Does the meaningfulness of this belief depend on how I phrase my other beliefs?
Also if one can equally easily produce belief systems which predict the same sets of experiences but disagree on whether or not the photon exists beyond the event horizon, how does this belief differ from the belief that Carol is a post-utopian?
In other words, your beliefs cannot afford to be independent of one another, but you could build two different belief systems, one in which the photon continues to exist and one in which it does not, that make identical predictions about experiences. Is it meaningful to ask which of these belief systems is true?
Dunno about “meaningful”, but the model with lower Kolmogorov complexity will give you more bang for the buck.
Your view reminds me of Quine’s “web of belief” view as expressed in “Two Dogmas of Empiricism” section 6:
The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. Truth values have to be redistributed over some of our statements. Reevaluation of some statements entails reevaluation of others, because of their logical interconnections—the logical laws being in turn simply certain further statements of the system, certain further elements of the field.
Quine doesn’t use Bayesian epistemology, unfortunately because I think it would have helped him clarify and refine his view.
One way to try to flesh this intuition out is to say that some beliefs are meaningful by virtue of being subject to revision by experience (i.e. they directly pay rent), while others are meaningful by virtue of being epistemically entangled with beliefs that pay rent (in the sense of not being independent beliefs in the probabilistic sense). But that seems to fail because any belief connected to a belief that directly pays rent must itself be subject to revision by experience, at least to some extent, since if A is entangled with B, an observation which revises P(A) typically revises P(B), however slightly.
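A toy numerical illustration of that last point (all the numbers are invented): chain an observation E to belief A and belief A to belief B, and conditioning on E still moves P(B), even though B never touches experience directly.

```python
from itertools import product

# Chain of beliefs: observation E -> belief A -> belief B.
# All probabilities are invented; only the structure matters.
p_e = 0.5                                # prior P(E)
p_a_given_e = {True: 0.9, False: 0.2}    # P(A | E)
p_b_given_a = {True: 0.8, False: 0.3}    # P(B | A)

def joint(e, a, b):
    pe = p_e if e else 1 - p_e
    pa = p_a_given_e[e] if a else 1 - p_a_given_e[e]
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    return pe * pa * pb

def prob_b(given_e=None):
    """P(B), optionally conditioned on the observation E."""
    num = sum(joint(e, a, True) for e, a in product([True, False], repeat=2)
              if given_e is None or e == given_e)
    den = sum(joint(e, a, b) for e, a, b in product([True, False], repeat=3)
              if given_e is None or e == given_e)
    return num / den

print(prob_b())              # P(B) before any observation: 0.575
print(prob_b(given_e=True))  # P(B | E): 0.75 -- revised, though B is two steps from E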
If a person with access to the computer simulating whichever universe (or set of universes) a belief is about could in principle write a program that takes as input the current state of the universe (as represented in the computer) and outputs whether the belief is true, then the belief is meaningful.
(if the universe in question does not run on a computer, begin by digitizing your universe, then proceed as above)
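A minimal sketch of what such a program might look like, with a toy dictionary standing in for the simulator’s state (all the names and values here are invented for illustration):

```python
# Toy "state of the simulated universe"; every name and value is invented.
universe_state = {
    "photon_42": {"exists": True, "position": (1.0e26, 0.0, 0.0)},
    "earth": {"exists": True, "position": (0.0, 0.0, 0.0)},
}

def belief_is_true(state) -> bool:
    """Evaluates the belief 'photon_42 still exists' against the simulator's
    full state, rather than against anything an inhabitant could observe."""
    obj = state.get("photon_42")
    return obj is not None and obj["exists"]

print(belief_is_true(universe_state))
```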
That has the same problem as atomic-level specifications that become false when you discover QM. If the Church-Turing thesis is false, all statements you have specified thus become meaningless or false. Even using a hierarchy of oracles until you hit a sufficient one might not be enough if the universe is even more magical than that.
Taking you more strictly at your word than you mean it the program could just return true for the majority belief on empirically non-falsifiable questions. Or it could just return false on all beliefs including your belief that that is illogical. So with the right programs pretty much arbitrary beliefs pass as meaningful.
You actually want it to depend on the state of the universe in the right way, but that’s just another way to say it should depend on whether the belief is true.
That’s a problem with all theories of truth, though. “Elaine is a post-utopian author” is trivially true if you interpret “post-utopian” to mean “whatever professors say is post-utopian”, or “a thing that is always true of all authors” or “is made out of mass”.
To do this with programs rather than philosophy doesn’t make it any worse.
What I’m suggesting is that there is a correspondence between meaningful statements and universal computer programs. Obviously this theory doesn’t tell you how to match the right statement to the right computer program. If you match the statement “snow is white” to the computer program that is a bunch of random characters, the program will return no result and you’ll conclude that “snow is white” is meaningless. But that’s just the same problem as the philosopher who refuses to accept any definition of “snow”, or who claims that snow is obviously black because “snow” means that liquid fossil fuel you drill for and then turn into gasoline.
If your closest match to “post-utopian” is a program that determines whether professors think someone is post-utopian, then you can either conclude that post-utopian literally means “something people call post-utopian”—which would probably be a weird and nonstandard word use the same way using “snow” to mean “oil” would be nonstandard—or that post-utopianism isn’t meaningful.
Yeah, probably all theories of truth are circular and the concept is simply non-tabooable. I agree your explanation doesn’t make it worse, but it doesn’t make it better either.
Doesn’t this commit you to the claim that at least some beliefs about whether or not a particular Turing machine halts must be meaningless? If they are all meaningful and your criterion of meaningfulness is correct, then your simulating computer solves the halting problem. But it seems implausible that beliefs about whether Turing machines halt are meaningless.
Suppose we are not living in a simulation. We are to digitize our universe. Do we make our digitization include stars outside the cosmological horizon? By what principle do we decide?
(I suppose you could be asking us to actually digitize the universe, but we want a principle we can use today.)
Well, if the universe actually runs on a computer, then presumably that computer includes data for all stars, not just the ones that are visible to us.
If the universe doesn’t run on a computer, then you have to actually digitize the universe so that your model is identical to the real universe as if it were on a computer, not stop halfway when it gets too hard or physically impossible.
I don’t think any of these principles will actually be practical. Even the sense-experience principle isn’t useful. It would classify “a particle accelerator the size of the Milky Way would generate evidence of photinos” as meaningful, but no one is going to build a particle accelerator the size of the Milky Way any more than they are going to digitize the universe. The goal is to have a philosophical tool, not a practical plan of action.
Oh, I didn’t express myself clearly in the last paragraph of the grandparent. Don’t worry, I’m not trying to demand any kind of practical procedure. I think we’re on the same page. However:
Well, if the universe actually runs on a computer, then presumably that computer includes data for all stars, not just the ones that are visible to us.
I don’t think we can really say that in general. Perhaps if the computer stored the locations and properties of stars in an easy-to-understand way, like a huge array of floating-point numbers, and we looked into the computer’s memory and found a whole other universe’s worth of extra stars, with spatial coordinates that prevent us from ever interacting with them, then we would be comfortable saying that those stars exist but are invisible to us.
But what if the computer compresses star location data, so the database of visible stars looks like random bits? And then we find an extra file in the computer, which is never accessed, and which is filled with random bits? Do we interpret those as invisible stars? I claim that there is no principled, objective way of pointing to parts of a computer’s memory and saying “these bits represent stars invisible to the simulation’s inhabitants, those do not”.
If the universe doesn’t run on a computer, then you have to actually digitize the universe so that your model is identical to the real universe as if it were on a computer [...]
I’m suspicious of the phrase “identical to the real universe as if it were on a computer”. It seems like a black box. Suppose we commission a digital model of this universe, and the engineer in charge capriciously programs the computer to delete information about any object that passes over the cosmological horizon. But they conscientiously program the computer to periodically archive snapshots of the state of the simulation. It might look like this model does not contain spaceships that have passed over the cosmological horizon. But the engineer points out that you can easily extrapolate the correct location of the spaceship from the archived snapshots — the initial state and the laws of physics uniquely determine the present location of the spaceship beyond the cosmological horizon, if it exists. The engineer claims that the simulation actually does contain the spaceship outside the cosmological horizon, and the extrapolation process they just described is simply the decompression algorithm. Is the engineer right? Again, we run into the same problem. To answer the question we must either make an arbitrary decision or give an answer that is relative to some of the simulation’s inhabitants.
And now we have the same problem with deciding whether this digital model is “identical to the real universe as if it were on a computer”. Even if we believe that the spaceship still exists, we have trouble deciding whether the spaceship exists “in the model”.
Well, if the universe actually runs on a computer, then presumably that computer includes data for all stars, not just the ones that are visible to us.
Why should it if its purpose is to simulate reality for humans? What’s wrong with a version of The Truman Show?
Because since everything would be a simulation, “all stars” would be identical in meaning with “all stars that are being simulated” and with “all stars for which the computer includes data”.
In a Truman Show situation, the simulators would’ve shown us white pin-pricks for thousands of years, and then started doing actual astrophysics simulations only when we got telescopes.
Before reading other answers, I would guess that a statement is meaningful if it is either implied or refuted by a useful model of the universe—the more useful the model, the more meaningful the statement.
This is incontrovertibly the best answer given so far. My answer was that a proposition is meaningful iff an oracle machine exists that takes as input the proposition and the universe, outputs 0 if the proposition is true and outputs 1 if the proposition is false. However, this begs the question, because an oracle machine is defined in terms of a “black box”.
Looking at Furslid’s answer, I discovered that my definition is somewhat ambiguous—a statement may be implied or refuted by quite a lot of different kinds of models, some of which are nearly useless and some of which are anything but, and my definition offers no guidance on the question of which model’s usefulness reflects the statement’s meaningfulness.
Plus, I’m not entirely sure how it works with regards to logical contradictions.
In the end, we have to rely on the logical theory of probability (as well as standard logical laws, such as the law of noncontradiction). There is no better choice.
Using Bayes’ theorem (beginning with priors set by Occam’s Razor) tells you how useful your model is.
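As a toy sketch of that recipe (the description lengths and likelihoods are made up), assign each candidate model a prior penalized by its complexity, then update on how well it predicted the data:

```python
# Toy model comparison: priors from a crude Occam penalty (2^-description_length),
# then a Bayesian update on how well each model predicted some observed data.
# Description lengths and likelihoods are invented for illustration.
models = {
    "simple_model":  {"description_length": 10, "likelihood_of_data": 0.30},
    "complex_model": {"description_length": 25, "likelihood_of_data": 0.32},
}

priors = {name: 2.0 ** -m["description_length"] for name, m in models.items()}
total = sum(priors.values())
priors = {name: pr / total for name, pr in priors.items()}

posteriors = {name: priors[name] * models[name]["likelihood_of_data"] for name in models}
total = sum(posteriors.values())
posteriors = {name: pr / total for name, pr in posteriors.items()}

# The complex model's slightly better fit doesn't come close to paying
# for its extra description length.
print(posteriors)
```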
I think I was unclear. What I was considering was along the following lines:
Take the example from the article. Let us stipulate that the professor’s use of the terms “post-utopian” and “colonial alienation” is, for all practical purposes, entirely uninformative about the authors and works so described.
Any worthwhile model of the professor’s grading criteria will include the professor’s list of “post-utopian” works. These models will not be very useful, however.
Any sufficiently-detailed model of the entire universe, on the other hand, will include the professor, and therefore the professor’s list—but will be immensely useful thanks to the other details it includes.
Which model should we refer to when considering the statement’s meaningfulness, then?
What occurred to me just now, as I wrote out the example, is the idea of simplicity. If you penalize models that add complexity without addition of practical value, the professor’s list will be rapidly cut from almost any model more general than “what answer will receive a good grade on this professor’s tests?”
For a belief to be meaningful you have to be able to describe evidence that would move your posterior probability of it being true after a Bayesian update.
This is a generalization of falsifiability that allows, for example, indirect evidence pertaining to universal laws.
How about basic logical statements? For example: If P, then P. I think that belief is meaningful, but I don’t think I could coherently describe evidence that would make me change its probability of being true.
You’d have to define “exist”, because mathematical structures in themselves are just generalized relations that hold under specified constraints. And once you defined “exist”, it might be easier to look for Bayesian evidence—either for them existing, or for a law that would require them to exist.
As a general thing, my definition does consider under-defined assertions meaningless, but that seems correct.
Yeah, I’m not really sure how to interpret “exist” in that statement. Someone that knows more about Tegmark level IV than I do should weigh in, but my intuition is that if parallel mathematical structures exist that we can’t, in principle, even interact with, it’s impossible to obtain Bayesian evidence about whether they exist.
If we couldn’t, even in principle, find any evidence that would make the theory more likely or less, then yeah I think that theory would be correctly labeled meaningless.
But, I can immediately think of some evidence that would move my posterior probability. If all definable universes exist, we should expect (by Occam) to be in a simple one, and (by anthropic reasoning) in a survivable one, but we should not expect it to be elegant. The laws should be quirky, because the number of possible universes (that are simple and survivable) is larger than the subset thereof that are elegant.
But, I can immediately think of some evidence that would move my posterior probability. If all definable universes exist, we should expect (by Occam) to be in a simple one,
Why? That assumes the universes are weighted by complexity, which isn’t true in all Tegmark level IV theories.
Consider “Elaine is a post-utopian and the Earth is round.” This statement is meaningless, at least in the case where the Earth is round, where it is equivalent to “Elaine is a post-utopian.” Yet it does constrain my experience, because observing that the Earth is flat falsifies it. If something like this came to seem like a natural proposition to consider, I think it would be hard to notice it was (partly) meaningless, since I could still notice it being updated.
This seems to defeat many suggestions people have made so far. I guess we could say it’s not a real counterexample, because the statement is still “partly meaningful”. But in that case it would be still be nice if we could say what “partly meaningful” means. I think that the situation often arises that a concept or belief people throw around has a lot of useless conceptual baggage that doesn’t track anything in the real world, yet doesn’t completely fail to constrain reality (I’d put phlogiston and possibly some literary criticism concepts in this category).
My first attempt is to say that a belief A of X is meaningful to the extent that it (is contained in / has an analog in / is resolved by) the most parsimonious model of the universe which makes all predictions about direct observations that X would make.
A solution to that particular example is already in logic—the statements “Elaine is a post-utopian” and “the Earth is round” can be evaluated separately, and then you just need a separate rule for dealing with conjunctions.
For every meaningful proposition P, an author should (in theory) be able to write coherently about a fictional universe U where P is true and a fictional universe U’ where P is false.
I thought Eliezer’s story about waking up in a universe where 2+2 seems to equal 3 felt pretty coherent.
edit: It seems like the story would be less coherent if it involved detailed descriptions of re-deriving mathematics from first principles. So perhaps ArisKatsaris’ definition leaves too much to the author’s judgement in what to leave out of the story.
I think that it’s a good deal more subtle than this. Eliezer described a universe in which he had evidence that 2+2=3, not a universe in which 2 plus 2 was actually equal to 3. If we talk about the mathematical statement that 2+2=4, there is actually no universe in which this can be false. On the other hand in order to know this fact we need to acquire evidence of it, which, because it is a mathematical truth, we can do without any interaction with the outside world. On the other hand if someone messed with your head, you could acquire evidence that 2 plus 2 was 3 instead, but seeing this evidence would not cause 2 plus 2 to actually equal 3.
If we talk about the mathematical statement that 2+2=4, there is actually no universe in which this can be false.
On the contrary. Imagine a being that cannot (due to some neurological quirk) directly perceive objects—it can only perceive the spaces between objects, and thus indirectly deduce the presence of the objects themselves. To this being, the important thing—the thing that needs to be counted and to which a number is assigned—is the space, not the object.
Thus, “two” looks like this, with two spaces: 0 0 0
Placing “two” next to “two” gives this: 0 0 0 0 0 0. That combination contains five spaces, so to this being, “two” plus “two” comes out to “five”, not “four”.
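The same point as a tiny computation, counting the gaps rather than the objects:

```python
# A toy rendering of the being's perception: count the gaps, not the objects.
two = "0 0 0"                 # three objects, two gaps: this being's "two"
combined = two + " " + two    # put "two" next to "two"
print(two.count(" "))         # 2
print(combined.count(" "))    # 5 -- its "two plus two"
```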
I think you misunderstand what I mean by “2+2=4”. Your argument would be reasonable if I had meant “when you put two things next to another two things I end up with four things”. On the other hand, this is not what I mean. In order to get that statement you need the additional, and definitely falsifiable statement “when I put a things next to b things, I have a+b things”.
When I say “2+2=4”, I mean that in the totally abstract object known as the natural numbers, the identity 2+2=4 holds. On the other hand, the Platonist view of mathematics is perhaps a little shaky, especially among this crowd of empiricists, so if you don’t want to accept the above meaning, I at least mean that “SS0+SS0=SSSS0” is a theorem in Peano Arithmetic. Neither of these claims can be false in any universe.
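For what it’s worth, the identity itself is mechanically checkable. Here is a toy evaluator for successor-numeral addition (an illustration of the recursion that PA’s addition axioms encode, not a formal derivation in PA):

```python
# Numerals as iterated successors of zero: 0, S0, SS0, ...
# Addition by the usual recursion: a + 0 = a;  a + S(b) = S(a + b).
ZERO = "0"

def S(n):
    return "S" + n

def add(a, b):
    if b == ZERO:
        return a
    return S(add(a, b[1:]))  # b = S(b'), so a + b = S(a + b')

two = S(S(ZERO))
print(add(two, two))                       # SSSS0
print(add(two, two) == S(S(S(S(ZERO)))))   # True: SS0 + SS0 = SSSS0
```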
I think I understand what CCC means by the being that perceives spaces instead of objects—Peano Arithmetic only exists because it is useful for us, human beings, to manipulate numbers that way. Given a different set of conditions, a different set of mathematical axioms would be employed.
Peano Arithmetic is merely a collection of axioms (and axiom schema), and inference laws. Its existence is not predicated upon its usefulness, and neither are its theorems.
I agree that the fact that we actually talk about Peano Arithmetic is a consequence of the fact that it (a) is useful to us (b) appeals to our aesthetic sense. On the other hand, although the being described in CCC’s post may not have developed Peano’s axioms on their own, once they are informed of these axioms (and modus ponens, and what it means for something to be a theorem), they would still agree that “SS0+SS0=SSSS0” in Peano Arithmetic.
In summary, although there may be universes in which the belief “2+2=4” is no longer useful, there are no universes in which it is not true.
I freely concede that a tree falling in the woods with no-one around makes acoustic vibrations, but I think it is relevant that it does not make any auditory experiences.
In retrospect, however, backtracking to the original comment, if “2+2=4” were replaced by “not(A and B) = (not A) or (not B)”, I think my argument would be nearly untenable. I think that probably suffices to demonstrate that ArisKatsaris’s theory of meaningfulness is flawed.
I freely concede that a tree falling in the woods with no-one around makes acoustic vibrations, but I think it is relevant that it does not make any auditory experiences.
How is it relevant? CCC was arguing that “2+2=4” was not true in some universes, not that it wouldn’t be discovered or useful in all universes. If your other example makes you happy that’s fine, but I think it would be possible to find hypothetical observers to whom De Morgan’s Law is equally useless. For example, the observer trapped in a sensory deprivation chamber may not have enough in the way of actual experiences for De Morgan’s Law to be at all useful in making sense of them.
In my opinion, saying “2+2=4 in every universe” is roughly equivalent to saying “1.f3 is a poor chess opening in every universe”—it’s “true” only if you stipulate a set of axioms whose meaningfulness is contingent on facts about our universe. It’s a valid interpretation of the term “true”, but it is not the only such interpretation, and it is not my preferred interpretation. That’s all.
If this is the case, then I’m confused as to what you mean by “true”. Let’s consider the statement “In the standard initial configuration in chess, there’s a helpmate in 2”. I imagine that you consider this analogous to your example of a statement about chess, but I am more comfortable with this one because it’s not clear exactly what a “poor move” is.
Now, if we wanted to explain this statement to a being from another universe, we would need to taboo “chess” and “helpmate” (and maybe “move”). The statement then unfolds into the following: “In the game with the following set of rules… there is a sequence of play that causes the game to end after only two turns are taken by each player.” Now this statement is equivalent to the first, but seems to me like it is only more meaningful to us than it is to anyone else because the game it describes matches a game that we, in a universe where chess is well known, have a non-trivial probability of ever playing. It seems like you want to use “true” to mean “true and useful”, but I don’t think that this agrees with what most people mean by “true”.
For example, there are infinitely many true statements of the form “A+B=C” for some specific integers A,B,C. On the other hand, if you pick A and B to be random really large numbers, the probability that the statement in question will ever be useful to anyone becomes negligible. On the other hand, it seems weird to start calling these statements “false” or “meaningless”.
It seems like you want to use “true” to mean “true and useful”, but I don’t think that this agrees with what most people mean by “true”.
You’re right, of course. To a large extent my comment sprung from a dislike of the idea that mathematics possesses some special ontological status independent of its relevance to our world—your point that even those statements which are parochial can be translated into terms comprehensible in a language fitted to a different sort of universe pretty much refutes that concern of mine.
I suppose it depends on how strict you are about what “coherently” means. A fictional universe is not the same as a possible universe, and you probably could write about a universe where you put two apples next to two other apples and then count five apples.
Hmm, I get your point, upvoting—but I’m not sure that “2+2=4” is meaningful in the same sense that “Bob already had 2 apples and bought 2 more apples, he was now in possession of 4 apples” is meaningful.
To the extent that 2+2=4 is just a matter of extending mathematical definitions from Peano Arithmetic, it’s as meaningful as saying 1=1 -- less a matter of beliefs, and more of a matter of definitions. And as far as it represents real events occurring, we can indeed imagine surreal fictional universes where if you buy two apples when you have already two apples, you end up in possession of five or six or zero apples...
What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?
A variation on this question, “what rule could restrict our beliefs to just propositions that can be decided, without excluding a priori anything true?”, is known to be hopeless in a strong sense.
Incidentally I think the phrase “in principle” isn’t doing any work in your koan.
Meaningful seems like an odd word to choose, as it contains the answer itself. What rule restricts our beliefs to just propositions that can be meaningful? Why, we could ask ourselves if the proposition has meaning.
The “atoms” rule seems fine, if one takes out the word “atoms” and replaces it with “state of the universe,” with the understanding that “state” includes both statics and dynamics. Thus, we could imagine a world where QM was not true, and other physics held sway- and the state of that world, including its dynamics, would be noticeably different than ours.
And, like daenerys, I think the statement that “Elaine is a post-utopian” can be meaningful, and the implied expanded version of it can be concordant with reality.
[edit] I also wrote my koan answers as I was going through the post, so here’s 1:
Supposing that knowledge only exists in minds, then truth judgments- that is, knowledge that a belief corresponds to reality- will only exist in heads, because they are knowledge.
The postmodernists are wrong if they seek to have material implications from this definitional argument. What makes truth judgments special compared to other judgments is that we have access to the same reality. If Sally believes that the marble is in the basket and Anne believes the marble is in the box, the straw postmodernist might claim that both have their own truth- but two beliefs do not generate two marbles. Sally and Anne will both see the marble in the same container when they go looking for it.
Again, the bare facts agree with the postmodernists- Sally and Anne would need to look to see where the marble is, which they can hardly do without their heads! But the lack of an unthinking truth oracle does not stop “the concordance of beliefs with reality”- what I would submit as a short definition of truth- from being a useful and testable concept.
And 2:
Quite probably, as it would want to have beliefs about the potential pasts and futures, or counterfactuals, or beliefs in the minds of others.
I suspect that, when we are born, we already have a first model of physics, a few built-in axioms. As we grow older, we acquire beliefs that are only recursive applications and elaborations of these axioms.
I would say that, if a belief can be reduced to this lowest level of abstraction, it is a meaningful belief.
Proposition p is meaningful relative to the collection of possible worlds W if and only if there exist w, w’ in W such that p is true in the possible world w and false in the possible world w’.
Then the question becomes: to be able to reason in full generality, what collection of possible worlds should one use?
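A compact restatement of the definition above (just a sketch; the double brackets denote the truth value of p at a given world):

$$\mathrm{Meaningful}_W(p) \iff \exists\, w, w' \in W :\ \llbracket p \rrbracket_w = \mathrm{true} \ \wedge\ \llbracket p \rrbracket_{w'} = \mathrm{false}$$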
They are truisms—in principle they are statements that are entirely redundant as one could in principle work out the truth of them without being told anything. However, principle and practice are rather different here—just because we could in principle reinvent mathematics from scratch doesn’t mean that in practice we could. Consequently these beliefs are presented to us as external information rather than as the inevitable truisms they actually are.
A proposition P is meaningful if and only if P and not-P would imply different perceptions for a hypothetical entity which perceives all existing things.
(This is not any kind of argument for the actual existence of a god. Downvote if you wish, but please not due to that potential misunderstanding.)
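One compact way to write the proposed rule (a sketch only; Perceptions(·) is left as an undefined primitive standing for whatever the hypothetical all-perceiving entity would perceive under a given proposition):

$$\mathrm{Meaningful}(P) \iff \mathrm{Perceptions}(P) \neq \mathrm{Perceptions}(\lnot P)$$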
No, in fact it works better on the assumption that there is no such entity.
If it could be an existing entity, then we could construct a paradoxical proposition, such as P = “There exists an object unperceived by anything”, which could not be consistently evaluated as meaningful or meaningless. Treating a “perceiver of all existing things” as a purely hypothetical entity—a cognitive tool, not a reality—avoids such paradoxes.
If there’s an all-seeing deity, P is well-formed, meaningful, and false. Every object is perceived by the deity, including the deity itself. If there’s no all-seeing deity, the deity pops into hypothetical existence outside the real world, and evaluates P for possible perceiving anythings inside the real world; P is meaningful and likely true.
But that’s not what I was talking about. I’m talking about logical possibility, not existence. It’s okay to have a theory that talks about squares even though you haven’t built any perfect squares, and even if the laws of physics forbid it, because you have formal systems where squares exist. So you can ask “What is the smallest square that encompasses this shape?”, with a hypothetical square. But you can’t ask “What is the smallest square circle that encompasses this shape?”, because square circles are logically impossible.
I’m having a hard time finding an example of an impossible deity, not just a Turing-underpowered one, or one that doesn’t look at enough branches of a forking system. Maybe a universe where libertarian free will is true, and the deity must predict at 6AM what any agent will do at 7AM—but of course I snuck in the logical impossibility by assuming libertarian free will.
Huh? We’re talking past each other here.
… I’m talking about logical possibility, not existence.
Oh, oops. My mental model was this: Consider an all-perceiving entity (APE) such that, for all actually existing X, APE magically perceives X. That’s all of the APE’s properties—I’m not talking about classical theism or the God of any particular religion—so it doesn’t look to me like there are logical problems.
If there’s an all-seeing deity, P is well-formed, meaningful, and false. Every object is perceived by the deity, including the deity itself. If there’s no all-seeing deity, the deity pops into hypothetical existence outside the real world, and evaluates P for possible perceiving anythings inside the real world; P is meaningful and likely true.
Mostly agreed. But that’s not the GEV verificationism I suggested. The above paragraph takes the form “Evaluate P given APE” and “Evaluate P given no-APE”. My suggestion is the reverse; it takes the form “Evaluate APE’s perceptions given P” and “Evaluate APE’s perceptions given not-P”. If the great APE counts as a real thing, what would its set of perceptions be given that there exists an object unperceived by anything? That’s simply to build a contradiction: APE sees everything, and there’s something APE doesn’t see. But if the all-perceiving entity is assumed not to be a real thing, the problem goes away.
Propositions must be able in principle to be connected to a state of how the world could-be, and this connection must be durable over alternate states of basic world identity. That is to say, it should be possible to simulate both states in which the proposition is true, and states in which it is not.
Internal consistency. Propositions must be non self-contradictory. If a proposition is a conjunction of multiple propositions, then those propositions must not contradict each other.
When we try to build a model of the underlying universe, what we’re really talking about is trying to derive properties of a program which we are observing (and are a component of), and which produces our sense experiences. Probably quite a short program in its initial state, in fact (though possibly not one limited by the finite precision of traditional Turing Machines).
So, that gives us a few rules that seem likely to be general: the underlying model must be internally consistent and mathematically describable, and must have a total K-complexity less than the amount of information in the observable universe (or else we couldn’t reason about it).
So the question to ask is really “can I imagine a program state that would make this proposition true, given my current beliefs about my organization of the program?”
This is resilient to the atoms / QM thing, at least, as you can always change the underlying program description to better fit the evidence.
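A minimal sketch of that question in code (purely illustrative; the candidate world-programs and the evaluator are hypothetical stand-ins, not things we could actually enumerate or write down):

```python
def is_meaningful(proposition, candidate_world_programs, evaluate):
    """Illustrative only: a proposition is meaningful relative to a set of
    candidate world-programs if, given my current beliefs about the program,
    some candidate makes it true and some other candidate makes it false."""
    truth_values = {evaluate(proposition, world) for world in candidate_world_programs}
    return True in truth_values and False in truth_values
```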
Although, in practice, most of what intelligent entities do can more precisely be described as ‘grammar fitting’ than ‘program induction.’ We reason probabilistically, essentially by throwing heuristics at a wall to see what offers marginal returns on predicting future sense impressions, since trying to guess the next word in a sentence by reverse-deriving the original state of the universe-program and iterating it forwards is not practical for most people. That massive mess of semi-rational, anticipatorially-justified rules of thumb is what allows us to reason in the day to day.
So a more pragmatic question is ‘how does this change my anticipation of future events?’ or ‘What sense experiences do I expect to have differently as a result of this belief?’
It is only when we seek to understand more deeply and generally, or when dealing with problems of things not directly observable, that it is practical to try to reason about the actual program underlying the universe.
I’m pleased to find this post and community; the writing is thoughtful and challenging. I’m not a philosopher, so some of the post waltzes off the edge of my cognitive dance floor, yet without stumbling or missing a beat. Proposing a rule to restrict belief seems problematic; who enforces the restriction, and how, will bear on whether the outcome is “just.” So, the only just enforcer can be the individual believer. Perhaps the rule might pertain to the intersection of belief and action: beliefs may not cause actions that limit others’ freedom or well-being. Person A believes the sky is blue. Person B complains that person A’s belief limits their ability to believe that the sky is green. But person B’s complaint is out of bounds, as it’s based on B’s desire for unanimity, a desire that limits others’ freedom. Hmm.
For some reason, I did not find this option here (perhaps it is implied somewhere in the sequences): a statement makes sense if, in principle, it is possible to imagine its sensory results in detail. This bears on whether Russell’s teapot makes sense, and it also suggests that 2+2=3 doesn’t make sense.
Restrict propositions to observable references? (Or have a rule about falsifiablility?)
The problem with the observable reference rule is that sense can be divorced from reference and things can be true (in principle) even if un-sensed or un-sensable. However, when we learn language we start by associating sense with concrete reference. Abstractions are trickier.
It is the case that my sensorimotor apparatus will determine my beliefs, and that my ability to cross-reference my beliefs with other similar agents with similar sensorimotor apparatus will forge consensus on propositions that are meaningful and true.
Falsifiability is better. I can ask another human: is Orwell post-Utopian? They can say ‘hell no, he is dystopian’… But if some say yes and some say no, it seems I have an issue with vagueness, which I would have to clarify with some definition of criteria for post-Utopian and dystopian.
Then once we had clarity of definition we could seek evidence in his texts. A lot of humanities texts however just leave observable reference at the door and run amok with new combinations of sense. Thus you get unicorns and other forms of fantasy...
All the propositions must be logical consequences of a theory that predicts observation, once you’ve removed everything you can from the theory without changing its predictions, and without adding anything.
It seems to me that we at least have to admit two different classes of proposition:
1) Propositions that reflect or imply an expectation of some experiences over others. Examples include the belief that the sky is blue, and the belief that we experience the blueness of the sky mediated by photons, eyes, nerves, and the brain itself.
2) Propositions that do not imply a prediction, but that we must believe in order to keep our model of the world simple and comprehensible. An example of this would be the belief that the photon continues to exist after it passes outside of our light cone.
If I, given a universal interface to a class of sentient beings, but without access to that being’s language or internal mind-state, could create an environment for each possible truth value of the statement, where any experiment conducted by a being of that class upon the environment would reflect the environment’s programmed truth value of the statement, and that being could form a confidence of belief regarding the statement which would be roughly uniform among beings of that class and generally leaning in the direction of the programmed truth value, then the statement has meaning.
In other words, I put on my robe and wizard’s cap, and you put on your haptic feedback vest and virtual reality helmet, and you tell me whether Elaine is a Post-Utopian.
This should cover propositions whose truth-value might not be knowable by us within our present universe if we can craft the environment such that it is knowable via the interface to the observer. e.g. hyperluminal messaging / teleportation / “pause” mode / “ghost” mode, debug HUDs, etc.
Explicitly assuming realism and reductionism. I think.
A meaningful statement is one that claims the “actual reality” lies within a particular well-defined subset of possible worlds, where each possible world is a complete and concrete specification of everything in that universe, to the highest available precision, at the lowest possible level of description, in that universe’s own ontology.
Of course, superhuge uncomputable subsets of possible worlds are not practically useful, so we compress by talking about concepts (like “white”, “snow”, “5”, “post-utopian”), among other things. Unfortunately, once we get into Turing-complete compression, we can construct programs (concepts) that do all sorts of stupid stuff (like not halt). Concepts need to be portable between ontologies. This might sink this whole idea.
For example, “snow is white” says the One True Reality is within the (unimaginably huge) subset of possible worlds where the substructures that the “snow” concept matches are also matched by the “white” concept.
For example “2 + 2 = 5” refers to the subset of possible worlds where the concept generated by the application of the higher-order concept “+” to “2” and “2” will match everything matched by “5”. (I unpacked “=” to “concepts match same things”, but you don’t have to.) There’s something really neat about these abstract concepts, but they don’t seem fundamentally different from other ones.
TL;DR: So the rule is “your beliefs should be specified by a probability distribution over exact possible worlds”, and I don’t know of a compression language for possible world subsets that can’t express meaningless concepts (and it probably isn’t worth it to look for one).
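A toy sketch of that rule (purely illustrative; real possible worlds are not four short strings, and the “concepts” here are just hypothetical predicate functions):

```python
# Toy model: each "possible world" is a tiny, fully specified state, and a
# belief is a probability distribution over those worlds.  A proposition is
# the subset of worlds it picks out; its probability is the mass of that set.
worlds = ["snow-white", "snow-grey", "no-snow-1", "no-snow-2"]
belief = {"snow-white": 0.6, "snow-grey": 0.2, "no-snow-1": 0.1, "no-snow-2": 0.1}

def probability(proposition, belief):
    """proposition: a predicate over worlds (a 'concept' in the above sense)."""
    return sum(p for world, p in belief.items() if proposition(world))

snow_is_white = lambda w: w == "snow-white"   # picks out a proper subset: meaningful
tautology     = lambda w: True                # picks out every world: excludes nothing

print(probability(snow_is_white, belief))  # 0.6
print(probability(tautology, belief))      # 1.0
```

The tautology carries no information precisely because it fails to exclude any possible world.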
“A statement can be meaningful if a test can be constructed that will return only one result, in all circumstances, if the statement is true.”
Consider the statement: If I throw an object off this cliff, then the object will fall. The test is obvious; I can take a wide variety of objects (a bowling ball, a rock, a toy car, and a set of music CDs) and throw them off the cliff. I can then note that all of them fall, and therefore improve the probability that the statement is true. I can then take one final object, a helium balloon, and throw it off the cliff; as the balloon rises, however, I have therefore shown that the statement is false. (A more correct version would be “if I throw a heavier-than-air object off this cliff, then the object will fall.” It’s still not completely true yet—a live pigeon is heavier than air—but it’s closer).
By this test, however, the statement “Carol is a post-utopian author” is meaningful, as long as there exist some features which are the features of post-utopian authors (the features do not need to be described, or even known, as long as their existence can be proven—repeatable, correct classification by a series of artificial neural networks would prove that such features exist).
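Roughly, the kind of check I have in mind (a toy sketch with hypothetical labels and a deliberately crude agreement measure; the only point is that independent raters agreeing well above chance is evidence that some real feature is being tracked):

```python
from itertools import combinations

# Hypothetical labels: each row is one professor's verdicts ("post-utopian" or not)
# on the same ten short stories.
verdicts = [
    [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],   # professor A
    [1, 0, 1, 0, 0, 0, 1, 0, 1, 0],   # professor B
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],   # professor C
]

def pairwise_agreement(raters):
    """Fraction of (rater pair, story) comparisons on which the two raters agree."""
    pairs = list(combinations(raters, 2))
    matches = sum(a == b for r1, r2 in pairs for a, b in zip(r1, r2))
    return matches / (len(pairs) * len(raters[0]))

print(pairwise_agreement(verdicts))  # ~0.87, well above the 0.5 expected by chance
```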
Here’s my first swing at it: A proposition is meaningful if it constrains the predicted observations of any theoretically possible observer.
This way, the proposition “the unmanned starship will not blink out of existence when it leaves my light cone” is meaningful because it’s possible that there might potentially be an observer nearby who observes the starship not disappear.
On the other hand, the statement “The position of this particle is exactly X and its momentum exactly P” is not meaningful under this rule, and that’s a feature.
Hm, how about: “[...] of any observer which our best current theory of how minds work says could exist”.
So for example, a statement along the lines of “a ghost watches and sees whether or not Mars continues to exist when it passes behind the Sun from Earth’s perspective” would have been meaningful a long time ago, but is not meaningful for people today who know a little about brains.
This also means that a proposition may be meaningful only because the proposer is ignorant.
A proposition P is meaningful to an observer O to the extent that O can alter its expectations about the world based on P.
This doesn’t a priori exclude anything that could be true, although for any given observer it might do so. As it should. Not every true proposition is meaningful to me, for example, and some true propositions that are meaningful to me aren’t meaningful to my mom.
Of course, it doesn’t necessarily exclude things that are false, either. (Nor should it. Propositions can be meaningful and false.)
For clarity, it’s also perhaps worth distinguishing between propositions and utterances, although the above is also true of meaningful utterances.
Maps are models of the territory. And the usefulness of them is often that they make predictions about parts of the territory I haven’t actually seen yet, and may have trouble getting to at all. The Sun will come up in the morning. There isn’t a leprechaun colony living a mile beneath my house. There aren’t any parts of the moon that are made of cheese.
I have no problem saying that these things are true, but they are in fact extrapolations of my current map into areas which I haven’t seen and may never see. These statements don’t meaningfully stand alone, they arise out of extrapolating a map that checks out in all sorts of other locations which I can check. One can then have meaningful certainty about the zones that haven’t yet been seen.
How does one extrapolate a map? In principle I’d say that you should find the most compressible form—the form that describes the territory without adding extra ‘information’ that I’ve assumed from someplace else. The compressed form then leads to predictions over and above the bald facts that go into it.
The map should match the territory in the places you can check. When I then make statements that something is “true”, I’m making assertions about what the world is like, based on my map. As far as English is concerned, I don’t need absolute certainty to say something is true, merely reasonable likelihood.
Hence the photon. The most compressible form of our description of the universe is that the parts of space that are just beyond visibility aren’t inherently different from the parts we can see. So the photon doesn’t blink out over there, because we don’t see any such blinking out over here.
If by “meaningful” you mean “either true or false” and by “meaningless” you mean “neither true nor false”, then a Platonist and a formalist would disagree about the meaningfulness of the continuum hypothesis. Since I don’t know any knockdown argument for either Platonism or formalism, I defy everyone who claims to have a crisp answer to your question, including possibly you.
Firstly, I don’t really like the wording of the Koan. I feel like a more accurate statement of the fundamental problem here is “What rule could restrict our beliefs to propositions that we can usefully discuss whether or not they are true, without excluding any statements for which we would like to base our behavior on whether or not they are true?” Unfortunately, on some level I do not believe that there is a satisfactory answer here. Though it is quite possible that the problem is with my wanting to base my behavior on the truth of statements whose truth cannot be meaningfully discussed.
To start with, let’s talk about the restriction about restricting to statements for which we can meaningfully discuss whether or not they are true. Given the context of the post this is relatively straightforward. If truth is an agreement between our beliefs and reality, and if reality is the thing that determines our experiences, then it is only meaningful to talk about beliefs being true if there are some sequences of possible experiences that could cause the belief to be either true or false. This is perhaps too restrictive a use of “reality”, but certainly such beliefs can be meaningfully discussed.
Unfortunately, I would like to base my actions upon beliefs that do not fall into this category. Things like “the universe will continue to exist after I die” do not have any direct implications for my lifetime experiences, and thus would be considered meaningless. Fortunately, I have found a general transformation that turns such beliefs into beliefs that often have meaning. The basic idea is, instead of asking directly about my experiences, to use Solomonoff induction to ask the question indirectly. For example, the question above becomes (roughly) “will the simplest model of my lifetime experiences have things corresponding to objects existing at times later than anything corresponding to me?” This new statement could be true (as it is with my current set of experiences), or false (if, for example, I expected to die in a big crunch). Now, for every statement I can think of, the above rule transforms the statement A to a statement T(A) so that my naive beliefs about A are the same as my beliefs about T(A) (if they exist). Furthermore, it seems that T(A) is still meaningless in the above sense only in cases where I naively believe A to actually be meaningless and thus not useful for determining my behavior. So in some sense, this transformation seems to work really well.
Unfortunately, things are still not quite adding up to normality for me. The thing that I actually care about is whether or not people will exist after my death, not whether certain models contain people after my death. Thus even though this hack seems to be consistently giving me the right answers to questions about whether statements are true or meaningful, it does not seem to be doing so for the right reasons.
In case you were expressing a core uncertainty you had - ‘I want a) people to exist after me more than I want b) a MODEL that people exist after me, but my thinking incorporates b) instead of a); and that means my priorities are wrong’ - and it’s still troubling you, I’d like to suggest the opposite: if you have a model that predicts what you want, that’s perfect! Your model (I think) takes your experiences, feeds them into a Bayesian algo, and predicts the future—what better way is there to think? I mean, I lack such computing power and honesty...but if an honest computer takes my experiences and says, ‘Therefore, people exist after me,’ then my best possible guess is that people exist after me, and I can improve the chance of that using my model.
Only propositions that constrain our sensory experience are meaningful.
If it turns out that the cosmologists are wrong and the universe begins to contract, we will have the opportunity to make contact with the civilization that the colonization starship spawns. The proposition “The starship exists” entails that the probability of the universe contracting and us making contact with the descendants of the passengers of the starship is substantial compared to the probability of the universe contracting.
Counter-example.
“There exists at least one entity capable of sensory experience.” What constraints on sensory experience does this statement impose? If none, do you reject it as meaningless?
Least convenient possible world—we discover the universe will definitely expand forever. Now what?
Or what about the past? If I tell you an alien living three million years ago threw either a red or a blue ball into the black hole at the center of the galaxy but destroyed all evidence as to which, is there a fact of the matter as to which color ball it was?
“Possible” is an important qualifier there. Since 0 and 1 are not probabilities, you are not describing a possible world.
The comment doesn’t lose too much if we take ‘definite’ to mean 0.99999 instead of 1. (I would tend to write ‘almost certainly’ in such contexts to avoid this kind of problem.)
Yvain’s objection fails if “definitely” means “with probability 0.99999″. In that case the conditional probability P( encounter civilization | universe contracts) is well-defined.
Yvain’s objection fails if “definitely” means “with probability 0.99999″. In that case the conditional probability P( encounter civilization | universe contracts) is well-defined.
Oh, I thought I retracted the grandparent. Nevermind—it does need more caveats in the expression for it to return to being meaningful.
I think it loses its force entirely in that case. Nisan’s proposal was a counterfactual, and Yvain’s counter was a possible world where that counterfactual cannot obtain. Since there is no such possible world, the objection falls flat.
I suspect that the answer to the alien-ball case may be empirical rather than philosophical.
Suppose that there existed quantum configurations in which the alien threw in a red ball, and there existed quantum configurations in which the alien threw in a blue ball, and both of those have approximately equal causal influence on the configuration-cluster in which we are having (approximately) this conversation. In this case, we would happen to be living in a particular type of world such that there was no fact of the matter as to which color ball it was (except that e.g. it mostly wasn’t green).
we discover the universe will definitely expand forever. Now what?
You’re right, my principle doesn’t work if there’s something we believe with absolute certainty.
If I tell you an alien living three million years ago threw either a red or a blue ball into the black hole at the center of the galaxy but destroyed all evidence as to which, is there a fact of the matter as to which color ball it was?
If we later find out that the alien did in fact leave some evidence, and recover that evidence, we’ll have an opinion about the color of the ball.
If I tell you an alien living three million years ago threw either a red or a blue ball into the black hole at the center of the galaxy but destroyed all evidence as to which, is there a fact of the matter as to which color ball it was?
If we later find out that the alien did in fact leave some evidence, and recover that evidence, we’ll have an opinion about the color of the ball.
This seems to be avoiding Yvain’s question by answering a preferred one.
The position expressed so far, combined with the avoidance here would seem to give the answer ‘No’.
What about the proposition “the universe will cease to exist when I die” (using some definition of “die” that precludes any future experiences, for example, “die for the last time”)? Then the truth of this proposition does not constrain sensory input (because it only makes claims about times after which you have no sensory input), but does have behavioral ramifications if you are, for example, deciding whether or not to write a will.
First, our territory is a map. This is by nature of evolving at a physical scale in which we exist on a type of planet (rather than at the quantum level or the cosmological level), with a century/day/hour scale conception of time (rather than geological or the opposite), and as a species in which experience is shared, preserved, and consequently accumulated. Differentiating matter is of that perspective; so are labeling snow, labeling in general, causation, and so on.
By nature of being, we create a territory. For a map to be true (I don’t like ‘meaningful’), it must correspond with the relevant territory. So, we need more than a Laplacian demon to restrict beliefs to propositions that can be true; we need a demon capable of having a perfect and imperfect understanding of nature. It’d have to carve out all possible territories (which can conflict) from our block universe and see them from all possible perspectives, and then you would have to specify which territory you want to see corresponds with whatever map.
Meaningful means it exists. By virtue of (variants of) the macroscopic decoherence interpretations of quantum mechanics and the best understanding I and three other long-time rationalists have of cosmology, everything physically possible exists, either in a quantum mechanical branch or in another Hubble volume.
To narrow it down a bit (but not conclusively) start out by eliminating all propositions that presuppose violation of conservation of energy, that should give you a head start.
Anything physically possible exists within our timeless universe-structure’s causal closures: When we talk “meaningful” or “not meaningful” we are really talking physics or not physics. Perpetual motion, for instance, isn’t physics. Neither is (as far as I know) faster-than-light travel or communication, reversing entropy, ontologically basic mental entities, and a lot of other things. They do not exist in any world in our universe, thus they are not meaningful, not a thing you can experience.
This of course presupposes knowledge of physics… I’ll have to mull on that. Funny disagreeing with yourself while typing.
Perpetual motion, faster than light travel, etc. were falsified by scientific experimentation. This means that these hypotheses must have constrained anticipated experience. Maybe they are “meaningless” by some definition of the word (although not any with which I am familiar), but that is not the way Eliezer is using “meaningless”.
Would a powerful AI, from the moment “run_ai” is entered on the command line until it knows practically everything, ever give a significant probability to violation of conservation of energy?
Humans are really amazingly bad at thinking about physics (Aristotle is a notable example; he practically formalized intuitive physics, which is dead wrong), but what if you aren’t?
I am nearly certain there exists some multiverse branch where humans study the avian migration patterns of the wild hog, but I am also nearly certain there is no multiverse branch within this multiversal causal closure where even one electron spontaneously appears out of nothing and then goes on its merry way.
I agree this is a different viewpoint than a purely epistemological one, and that any epistemological agent can only approximate the function (defun exists-in-multiverse-p ...), but if you want to be stringent, physics is the way.
Furthermore, it pattern-matches against my concept of how Tegmark invented his eponymous hypotheses: finding a basic premise and wondering if it is necessary. Do we really need brains to talk about meaningful hypotheses, or do we just need a big universe?
Koan answers here for:
Likewise, even though “post-utopian” was chosen specifically to be meaningless, I can say with confidence that Sir Thomas More’s Utopia was not post-utopian, and I bet most other people will agree with me.
The Sokal Hoax to me was less about totally disproving all literary critical terms, and more about showing that it’s really easy to get a paper published that no one understands. People elsewhere in the thread have already given examples of Sokalesque papers in physics, computer science, etc that got published, even though those fields seem pretty meaningful.
Literary criticism does have a bad habit of making strange assertions, but I don’t think they hinge on meaningless terms. A good example would be deconstruction of various works to point out the racist or sexist elements within. For example, “It sure is suspicious that Moby Dick is about a white whale, as if Melville believed that only white animals could possibly be individuals with stories of their own.”
The claim that Melville was racist when writing Moby Dick seems potentially meaningful—for example, we could go back in time, put him under truth serum, and ask him whether that was intentional. Even if it was wholly unconscious, it still implies that (for example) if we simulate a society without racism, it will be less likely to produce books like Moby Dick, or that if we pick apart Melville’s brain we can draw some causal connection between the racism to which he was exposed and the choice to have Moby Dick be white.
However, if I understand correctly literary critics believe these assertions do not hinge on authorial intent; that is, Melville might not have been trying to make Moby Dick a commentary on race relations, but that doesn’t mean a paper claiming that Moby Dick is a commentary on race relations should be taken less seriously.
Even this might not be totally meaningless. If an infinite monkey at an infinite typewriter happened to produce Animal Farm, it would still be the case that, by coincidence, it was a great metaphor for Communism. A literary critic (or primatologist) who wrote a paper saying “Hey, Animal Farm can increase our understanding and appreciation of the perils of Communism” wouldn’t really be talking nonsense. In fact, I’d go so far as to say that they’re (kind of) objectively correct, whereas even someone making the relatively stupid claim about Moby Dick above might still be right that the book can help us think about our assumptions about white people.
If I had to criticize literary criticism, I would have a few vague objections. First, that they inflate terms—instead of saying “Moby Dick vaguely reminds me of racism”, they say “Moby Dick is about racism.” Second, that even if their terms are not meaningless, their disputes very often are: if one critic says “Moby Dick is about racism” and another critic says “No it isn’t”, and what the first one really means is “Moby Dick vaguely reminds me of racism”, then arguing about this is a waste of time. My third and most obvious complaint is opportunity costs: to me at least the whole field of talking about how certain things vaguely remind you of other things seems like a waste of resources that could be turned into perfectly good paper clips.
But these seem like very different criticisms than arguing that their terms are literally meaningless. I agree that to students they may be meaningless and they might compensate by guessing the teacher’s password, but this happens in every field.
I liked your comment and have a half-formed metaphor for you to either pick apart or develop:
LW/ rationalist types tend towards hard sciences. This requires more System 2 reasoning. Their fields are like computer programs. Every step makes sense, and is understood.
Humanities tends toward more System 1 pattern recognition. This is more akin to a neural network. Even if you are getting the “right” answer, it is coming out of a black box.
Because the rationalist types can’t see the algorithm, they assume it can’t be “right”.
Thoughts?
I like your idea and upvoted the comment, but I don’t know enough about neural networks to have a meaningful opinion on it.
I like the idea that this comment produces in my mind. But nitpickingly, a neural network is a type of computer program. And most of the professional bollocks-talkers of my acquaintance think very hard in system-two like ways about the rubbish they spout.
It’s hard to imagine a system-one academic discipline. Something like ‘Professor of telling whether people you are looking at are angry’, or ‘Professor of catching cricket balls’....
I wonder if you might be thinking more of the difference between a computer program that one fully understands (a rare thing indeed), and one which is only dimly understood, and made up of ‘magical’ parts even though its top level behaviour may be reasonably predictable (which is how most programmers perceive most programs).
Well, in the case of answers to questions like that in the humanities, what does the word ‘right’ actually mean? If we say a particular author is ‘post utopian’ what does it actually mean for the answer to that question to be ‘yes’ or ‘no’? It’s just a classification that we invented. And like all classification groups, there is a set of characteristics that determine whether the author is post utopian or not. I imagine it as a checklist of features which gets ticked off as a person reads the book. If all the items in the checklist are ticked then the author is post utopian. If not then the author is not.
The problem with this is that different people have different items in their checklist and differ in their opinion on how many items in the list need to be checked for the author to be classified as post utopian. You can pick any literary classification and this will be the case. There will never be a consensus on all the items in the checklist. There will always be a few points that everybody does not agree on. This makes me think that, objectively speaking, there is no ‘absolutely right’ or ‘absolutely wrong’ answer to a question like that.
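A toy sketch of the point (made-up features, checklists, and thresholds; it only illustrates how two readers can each apply their own checklist consistently and still disagree):

```python
# Two readers with different checklists and thresholds can both classify
# consistently and still disagree about the same book.
book_features = {"rejects utopia": True, "colonial alienation": True,
                 "unreliable narrator": False, "pastoral imagery": False}

def classify(features, checklist, threshold):
    # Tick off each checklist item present in the book, then compare to the threshold.
    score = sum(features.get(item, False) for item in checklist)
    return score >= threshold

reader_1 = (["rejects utopia", "colonial alienation"], 2)
reader_2 = (["rejects utopia", "unreliable narrator", "pastoral imagery"], 2)

print(classify(book_features, *reader_1))  # True  -> "post utopian"
print(classify(book_features, *reader_2))  # False -> "not post utopian"
```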
In hard science, on the other hand, there is always an absolutely right answer. If we say “Protons and neutrons are oppositely charged”, there is an answer that is right because, no matter what my beliefs, experiment is the final arbiter. Nobody who follows through the logical steps can deny that they are oppositely charged without making an illogical leap.
In the literary classification, you or your neural network can go through logical steps and still arrive at an answer that is not the same for everybody.
EDIT: I meant “protons and electrons are oppositely charged” not “protons and neutrons”. Sorry!
One: Protons and neutrons aren’t oppositely charged.
Two: You’re using particle physics as an example of an area where experiment is the final arbiter; you might not want to do that. Scientific consensus has more than a few established beliefs in that field that are untested and border on untestable.
Honestly, he’d be hard pressed to find a field that has better tested beliefs and greater convergence of evidence. The established beliefs you mention are a problem everywhere, and pretty much no field is backed with as much data as particle physics.
Fair enough; I had wanted to say that but don’t have sufficiently intimate awareness of every academic field to be comfortable doing so. I think it works just as well to illustrate that we oughtn’t confuse passing flaws in a field with fundamental ones, or the qualities of a /discipline/ with the qualities of seeking truth in a particular domain.
Press the Show help button to figure out how to italicize and bold and all that.
Was this intended to be a response to a different comment?
No, it’s just that FluffyC used slashes to indicate that the word in the middle was to be italicized, so she probably hadn’t read the help section, and I thought that reading the help section would, well, help FluffyC.
Oh Whoops! I mean protons and electrons! Silly mistake!
I don’t think the fact that everyone has a different checklist is the point. In this perfect, hypothetical world, everyone has the same checklist.
I think that the point is that the checklist is meaningless, like having a literary genre called y-ism and having “The letter ‘y’ constitutes 1/26th of the text” on the checklist.
Even if we can identify y-ism with our senses, the distinction doesn’t “mean” anything. It has zero application outside of the world of y-ism. It floats.
That is an important point. It is not so easy to come up with a criterion of “meaningfulness” that excludes the stuff rationalists don’t like, but doesn’t exclude a lot of everyday terminology at the same time.
I could add that others have their own criteria of “meaningfulness”. Humanities types aren’t very bothered about questions like how many moons Saturn has, because it doesn’t affect them or their society. The common factor between both kinds of “meaningfulness” seems to be that they amount to “the stuff I personally consider to be worth bothering about”. A concern with objective meaningfulness is still a subjective concern.
FWIW, the Moby Dick example is less stupid than you paint it, given the recurrence of whiteness as an attribute of things special or good in western culture—an idea that pre-dates the invention of race. I think a case could be made out that (1) the causality runs from whiteness as a special or magical attribute, to its selection as a pertinent physical feature when racism was being invented (considering that there were a number of parallel candidates, like phrenology, that didn’t do so well memetically), and (2) in a world that now has racism, the ongoing presence of valuing white things as special has been both consciously used to reinforce it (cf. the KKK’s name and its connotations) and unconsciously reinforces it by association.
I can’t resist. I think you should read Moby Dick. Whiteness in that novel is not used as any kind of symbol for good:
If you want to talk about racism and Moby Dick, talk about Queequeg!
Not that white animals aren’t often associated with good things, but this is not unique in western culture:
If that’s your criteria, you could use some stand-in for computer science terms that have no meaning.
I think you are playing to what you assume are our prejudices.
Suppose X is a meaningless predicate from a humanities subject. Suppose you used it, not a simulacrum. If it’s actually meaningless by the definition I give elsewhere in the thread, nobody will be able to name any Y such that p(X|Y) differs from p(X|¬Y) after a Bayesian update. Do you actually expect that, for any significant number of terms in humanities subjects, you would find no Y, even after grumpy defenders of X popped up in the thread? Or did you choose a made-up term so as to avoid flooding the thread with Y-proponents? If you expect people to propose candidates for Y, you aren’t really expecting X to be meaningless.
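Stated compactly (this is only a restatement of the criterion above, nothing stronger):

$$X \text{ is meaningless} \iff \forall Y:\ P(X \mid Y) = P(X \mid \lnot Y)$$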
The Sokal hoax only proves one journal can be tricked by fake jargon. Not that bona fide jargon is meaningless.
I’m sure there’s a lot of nonsense, but “post-utopian” appears to have a quite ordinary sense, despite the lowness of the signal to noise ratio of some of those hits. A post-utopian X (X = writer, architect, hairdresser, etc.) is one who is working after, and in reaction against, a period of utopianism, i.e. belief in the perfectibility of the world by man. Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.
We’re all utopians here.
By this definition, wouldn’t the belief that science will not lead to perfection but we can still look forward to more of what we already have (rather than ruin and destruction) be equally post-utopian?
Not as I see the word used, which appears to involve the sense of not merely less enthusiastic than, but turning away from. You can’t make a movement on the basis of “yes, but not as sparkly”.
Pity. “It will be kind of like it is now” is an under-utilized prediction.
Dunno, Futurama is pretty much entirely based on that.
What would he have to say? The Sokal Hoax was about social engineering, not semantics.
“Post-utopian” is a real term, and even in the absence of examples of its use, it is straightforward to deduce its (likely) meaning, since “post-” means “subsequent to, in reaction to” and “utopian” means “believing in or aiming at the perfecting of polity or social conditions”. So post-utopian texts are those which react against utopianism, express skepticism at the perfectibility of society, and so on. This doesn’t seem like a particularly difficult idea and it is not difficult to identify particular texts as post-utopian (for example, Koestler’s Darkness at Noon, Huxley’s Brave New World, or Nabokov’s Bend Sinister).
So I think you need to pick a better example: “post-utopian” doesn’t cut it. The fact that you have chosen a weak example increases my skepticism as to the merits of your general argument. If meaningless terms are rife in the field of English literature, as you seem to be suggesting, then it should be easy for you to pick a real one.
(I made a similar point in response to your original post on this subject.)
There is the literature professor’s belief, the student’s belief, and the sentence “Carol is ‘post-utopian’”. While the sentence can be applied to both beliefs, the beliefs themselves are quite different beasts. The professor’s belief is something that carves literature-space in a way most other literature professors do. Totally meaningful. The student’s belief, on the other hand, is just a label over a set of authors the student has scarcely read. Going a level deeper, we can find an explanation for this label, which turns out to be just another label (“colonial alienation”), and then it stops. From Eliezer’s main post (emphasis mine):
The professor has a meaningful belief.
Unable to express it properly (which may not be his fault), he gives a mysterious explanation.
That mysterious explanation generates a floating belief in the student’s mind.
Well, not that floating. The student definitely expects a sensory experience: grades. The problem isn’t the lack of expectations, but that they’re based on an overly simplified model of the professor’s beliefs, with no direct ties to the writings themselves – only to the authors’ names. Remove professors and authors’ names, and the students’ beliefs are really floating: they will have no way to tie them to reality – the writings. And if they try anyway, I bet their carvings won’t agree.
Now when the professor grades an answer, only a label will be available (“post-utopian”, or whatever). This label probably reflects the student’s belief directly. That answer will indeed be quickly pattern-matched against a label inside the professor’s brain, generating a quick “right” or “wrong” response (and the corresponding motion in the hand that wields the red pen). Just as drawn in the picture, actually.
However, the label in the professor’s head is not a floating belief like the student’s. It’s a cached thought, based on a much more meaningful belief (or so I hope).
Okay, now that I recognize your name, I see you’re not exactly a newcomer here. Sorry if I didn’t tell you anything you didn’t already know. But it did seem like you conflated mysterious answers (like “phlogiston”) and floating beliefs (actual neural constructs). Hope this helped.
If that is what Eliezer meant, then it was confusing to use an example for which many people suspect that the concept itself is not meaningful. It just generates distraction, like the “Is Nixon a pacifist?” example in the original Politics is the mind-killer post (and actually, the meaningfulness of post-colonialism as a category might be a political example in the wide sense of the word). He could have used something from physics like “Heat is transmitted by convection”, or really any other topic that a student can learn by rote without real understanding.
I don’t think Eliezer meant all what I have written (edit: yep, he didn’t). I was mainly analysing (and defending) the example to death, under Daenerys’ proposed assumption that the belief in the professor’s head is not floating. More likely, he picked something familiar that would make us think something like “yeah, if those are just labels, that’s no use”.¹
By the way, is there any good example? Something that (i) clearly is meaningful, and (ii) lets us empathise with those who nevertheless extract a floating belief out of it? I’m not sure. I for one don’t empathise with the students who merely learn by rote, for I myself don’t like loosely connected belief networks: I always wanted to understand.
Also, Eliezer wasn’t very explicit about the distinction between a statement, embodied in text, images, or whatever our senses can process, and belief, embodied in a heap of neurons. But this post is introductory. It is probably not very useful to make the distinction so soon. More important is to realize that ideas are not floating in the void, but are embodied in a medium: paper, computers… and of course brains.
[1] We’re not familiar with “post-utopianism” and “colonial alienation” specifically, but we do know the feeling generated by such literary mumbo jumbo.
Thank you! Your post helped me finally to understand what it was that I found so dissatisfying with the way I’m being taught chemistry. I’m not sure right now what I can do to remedy this, but thank you for helping me come to the realization.
If the teacher does not have a precise codification of what makes a writer “post-utopian”, then how should he teach it to students?
I would say the best way is a mix of demonstrating examples (“Alice is not a post-utopian; Carol is a post-utopian”), and offering generalizations that are correlated with whether the author is a post-utopian (“colonial alienation”). This is a fairly slow method of instruction, at least in some cases where the things being studied are complicated, but it can be effective. While the student’s belief may not yet be as well-formed as the professor’s, I would hesitate to call it meaningless. (More specifically, I would agree denotatively but object connotatively to such a classification.) I would definitely not call the belief useless, since it forms the basis for a later belief that will be meaningful. If a route to meaningful, useful belief B goes through “meaningless” belief A, then I would say that A is useful, and that calling A meaningless produces all the wrong sorts of connotations.
The example assumed bad teaching based on rote learning. Your idea might actually work.
(Edit: oops, you’re probably aware of that. Sorry for the noise)
To over-extend your metaphor, dubstep is electronic music with a breakbeat and certain BPM. Bassnectar described it in an interview once as hip-hop beats at half time in breakbeat BPMs.
It’s really easy to tell the difference between dubstep and house, because dubstep has a broken kick..kickSNARE beat, while house has a 4/4 kick.kick.kick.kick beat.
(Interestingly, the dubstep you seem to describe is what people who listened to earlier dubstep commonly call “brostep,” and was inspired by one Rusko song (“Cockney Thug,” if I remember correctly).)
The point I mean to make by this is that most concepts do have system 2 algorithms that identify them, even if most people on LW would disagree with the social groups that advance those concepts.
I have many friends and comrades that are liberal arts students, and most of the time, if they said something like “post-utopian” or “colonial alienation” they’d have a coherent system-2 algorithm for identifying which authors or texts are more or less post-utopian.
Really, I agree that this is a bad example, because there are two things going on: the students have to guess the teacher’s password (which is the same as if you had Skrillex teaching MUSC 202: Dubstep Identification, and only accepted “songs with that heavy wobble bass shit” as “real dubstep, bro”), and there’s an alleged unspoken conspiracy of academics to have a meaningless classifier (which is maybe the same as subgenres of hard noise music, where there truly is no difference between typical songs in each subgenre, and only artist self-identification or consensus among raters can be used as a grouping strategy).
As others have said better than me, the Sokal affair seems to be better evidence of how easy it is to publish a bad paper than it is evidence that postmodernism is a flawed field.
Example: an Irishman arguing with a Mongolian over what dragons look like.
When the Irishman is a painter and the Mongolian a dissatisfied customer, does their disagreement have meaning?
In that case, they’re arguing about the wrong thing. Their real dispute is that the painting isn’t what the Mongolian wanted as a result of a miscommunication which neither of them noticed until one of them had spent money (or promised to) and the other had spent days painting.
So, no, even in that situation, there’s no such thing as a dragon, so they might as well be arguing about the migratory patterns of unicorns.
While the English profs may consistently classify writing samples as post utopian or not, the use of the label “post utopian” should be justified by the English meanings of “post” and “utopian” in some way. “Post” and “utopian” are concepts with meaning; they’re not just nonsense sounds available for use as labels.
If you have no conceptual System 1 algorithm for “post utopian”, and just have some consistent System 2 algorithm, it’s a conceptual confusion to use a compound of labels for concepts that may have nothing at all to do with your underlying System 2 defined concept.
Likely the confusion serves an intellectually dishonest purpose, as in euphemism. When you see this kind of nonsense, there is some politically motivated obfuscation nearby.
A set of beliefs is not like a bag of sand, individual beliefs unconnected with each other, about individual things. They are connected to each other by logical reasoning, like a lump of sandstone. Not all beliefs need to have a direct connection with experience, but as long as pulling on the belief pulls, perhaps indirectly, on anticipated experience, the belief is meaningful.
When a pebble of beliefs is completely disconnected from experience, or when the connection is so loose that it can be pulled around arbitrarily without feeling the tug of experience, then we can pronounce it meaningless. The pebble may make an attractive paperweight, with an intricate structure made of elements that also occur in meaningful beliefs, but that’s all it can be. Music of the mind, conveying a subjective impression of deep meaning, without having any.
For the hypothetical photon disappearing in the far-far-away, no observation can be made on that photon, but we have other observations leading to beliefs about photons in general, according to which they cannot decay. That makes it meaningful to say that the far away photon acts in the same way. If we discovered processes of photon decay, it would still be meaningful, but then we would believe it could be false.
Interesting idea. But how did you know how to phrase your original beliefs about photons? You could just as easily have decided to describe photons as “photons obey Maxwell’s equations up to an event horizon and cease to exist outside of it”. You could then add other beliefs like “nothing exists outside of the event horizon” which are incompatible with the photon continuing to exist.
In other words, your beliefs cannot afford to be independent of one another, but you could build two different belief systems, one in which the photon continues to exist and one in which it does not, that make identical predictions about experiences. Is it meaningful to ask which of these belief systems is true?
Systems of belief are more like a lump of sandstone than a pile of sand, but they are also more like a lump of sandstone, a rather friable lump, than a lump of marble. They are not indissoluble structures that can be made in arbitrary shapes, the whole edifice supported by an attachment at one point to experience.
Experience never brought hypotheses such as you suggest to physicists’ attention. The edifice as built has no need of it, and it cannot be bolted on: it will just fall off again.
But these hypotheses have just been brought to our attention—just now. In fact the claim that these hypotheses produce indistinguishable physics might even be useful. If I want to simulate my experiences, I can save on computational power by knowing that I no longer have to keep track of things that have gone behind an event horizon. The real question is why the standard set of beliefs should be more true or meaningful than this new one. A simple appeal to what physicists have so far conjectured is not in general sufficient.
Which meaningful beliefs to consider seriously is an issue separate from the original koan, which asks which possible beliefs are meaningful. I think we are all agreeing that a belief about the remote photon’s extinction or not is a meaningful one.
I don’t see how you can claim that the belief that the photon continues to exist is a meaningful belief without also allowing the belief that the photon does not continue to exist to be a meaningful belief. Unless you do something along the lines of taking Kolmogorov complexity into account, these beliefs seem to be completely analogous to each other. Perhaps to phrase things more neutrally, we should be asking if the question “does the photon continue to exist?” is meaningful. On the one hand, you might want to say “no” because the outcome of the question is epiphenomenal. On the other hand, you would like this question to be meaningful since it may have behavioral implications.
They’re both meaningful. There are reasons to reject one of them as false, but that’s a separate issue.
OK. I think that I had been misreading some of your previous posts. Allow me the rephrase my objection.
Suppose that our beliefs about photons were rewritten as “photons not beyond an event horizon obey Maxwell’s Equations”. Making this change to my belief structure now leaves beliefs about whether or not photons still exist beyond an event horizon unconnected from my experiences. Does the meaningfulness of this belief depend on how I phrase my other beliefs?
Also if one can equally easily produce belief systems which predict the same sets of experiences but disagree on whether or not the photon exists beyond the event horizon, how does this belief differ from the belief that Carol is a post-utopian?
Dunno about “meaningful”, but the model with lower Kolmogorov complexity will give you more bang for the buck.
Your view reminds me of Quine’s “web of belief” view as expressed in “Two Dogmas of Empiricism”, section 6.
Quine doesn’t use Bayesian epistemology, which is unfortunate, because I think it would have helped him clarify and refine his view.
One way to try to flesh this intuition out is to say that some beliefs are meaningful by virtue of being subject to revision by experience (i.e. they directly pay rent), while others are meaningful by virtue of being epistemically entangled with beliefs that pay rent (in the sense of not being independent beliefs in the probabilistic sense). But that seems to fail because any belief connected to a belief that directly pays rent must itself be subject to revision by experience, at least to some extent, since if A is entangled with B, an observation which revises P(A) typically revises P(B), however slightly.
If a person with access to the computer simulating whichever universe (or set of universes) a belief is about could in principle write a program that takes as input the current state of the universe (as represented in the computer) and outputs whether the belief is true, then the belief is meaningful.
(if the universe in question does not run on a computer, begin by digitizing your universe, then proceed as above)
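To make this concrete, here is a minimal sketch in Python, with a toy “digitized universe” represented as a plain dictionary; every name and fact in it is hypothetical, chosen only for illustration:

```python
# Toy sketch: the "universe" is just a dictionary of facts the simulation tracks.
universe_state = {
    "snow_color": "white",
    "far_photon_exists": True,
}

# A belief counts as meaningful (on this criterion) if we can, in principle,
# write a checker that maps the universe's state to True or False.
def snow_is_white(state):
    return state["snow_color"] == "white"

def far_photon_persists(state):
    return state["far_photon_exists"]

for checker in (snow_is_white, far_photon_persists):
    print(checker.__name__, "->", checker(universe_state))
```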
That has the same problem as atomic-level specifications that become false when you discover QM. If the Church-Turing thesis is false, all statements you have specified in this way become meaningless or false. Even using a hierarchy of oracles until you hit a sufficient one might not be enough if the universe is even more magical than that.
But that’s only useful if you make it circular.
Taking you more strictly at your word than you mean it, the program could just return true for the majority belief on empirically non-falsifiable questions. Or it could just return false on all beliefs, including your belief that that is illogical. So with the right programs pretty much arbitrary beliefs pass as meaningful.
You actually want it to depend on the state of the universe in the right way, but that’s just another way to say it should depend on whether the belief is true.
That’s a problem with all theories of truth, though. “Elaine is a post-utopian author” is trivially true if you interpret “post-utopian” to mean “whatever professors say is post-utopian”, or “a thing that is always true of all authors” or “is made out of mass”.
To do this with programs rather than philosophy doesn’t make it any worse.
What I’m suggesting is that there is a correspondence between meaningful statements and universal computer programs. Obviously this theory doesn’t tell you how to match the right statement to the right computer program. If you match the statement “snow is white” to the computer program that is a bunch of random characters, the program will return no result and you’ll conclude that “snow is white” is meaningless. But that’s just the same problem as the philosopher who refuses to accept any definition of “snow”, or who claims that snow is obviously black because “snow” means that liquid fossil fuel you drill for and then turn into gasoline.
If your closest match to “post-utopian” is a program that determines whether professors think someone is post-utopian, then you can either conclude that post-utopian literally means “something people call post-utopian”—which would probably be a weird and nonstandard word use the same way using “snow” to mean “oil” would be nonstandard—or that post-utopianism isn’t meaningful.
Yeah, probably all theories of truth are circular and the concept is simply non-tabooable. I agree your explanation doesn’t make it worse, but it doesn’t make it better either.
Doesn’t this commit you to the claim that at least some beliefs about whether or not a particular Turing machine halts must be meaningless? If they are all meaningful and your criterion of meaningfulness is correct, then your simulating computer solves the halting problem. But it seems implausible that beliefs about whether Turing machines halt are meaningless.
Input->Black box->Desired output. “Black box” could be replaced with “magic.” How would your black box work in practice?
That doesn’t help us decide whether there are stars outside the cosmological horizon.
I feel like writing a more intelligent reply than “Yes it does”, so could you explain this further?
Suppose we are not living in a simulation. We are to digitize our universe. Do we make our digitization include stars outside the cosmological horizon? By what principle do we decide?
(I suppose you could be asking us to actually digitize the universe, but we want a principle we can use today.)
Well, if the universe actually runs on a computer, then presumably that computer includes data for all stars, not just the ones that are visible to us.
If the universe doesn’t run on a computer, then you have to actually digitize the universe so that your model is identical to the real universe as if it were on a computer, not stop halfway when it gets too hard or physically impossible.
I don’t think any of these principles will actually be practical. Even the sense-experience principle isn’t useful. It would classify “a particle accelerator the size of the Milky Way would generate evidence of photinos” as meaningful, but no one is going to build a particle accelerator the size of the Milky Way any more than they are going to digitize the universe. The goal is to have a philosophical tool, not a practical plan of action.
Oh, I didn’t express myself clearly in the last paragraph of the grandparent. Don’t worry, I’m not trying to demand any kind of practical procedure. I think we’re on the same page. However:
I don’t think we can really say that in general. Perhaps if the computer stored the locations and properties of stars in an easy-to-understand way, like a huge array of floating-point numbers, and we looked into the computer’s memory and found a whole other universe’s worth of extra stars, with spatial coordinates that prevent us from ever interacting with them, then we would be comfortable saying that those stars exist but are invisible to us.
But what if the computer compresses star location data, so the database of visible stars looks like random bits? And then we find an extra file in the computer, which is never accessed, and which is filled with random bits? Do we interpret those as invisible stars? I claim that there is no principled, objective way of pointing to parts of a computer’s memory and saying “these bits represent stars invisible to the simulation’s inhabitants, those do not”.
I’m suspicious of the phrase “identical to the real universe as if it were on a computer”. It seems like a black box. Suppose we commission a digital model of this universe, and the engineer in charge capriciously programs the computer to delete information about any object that passes over the cosmological horizon. But they conscientiously program the computer to periodically archive snapshots of the state of the simulation. It might look like this model does not contain spaceships that have passed over the cosmological horizon. But the engineer points out that you can easily extrapolate the correct location of the spaceship from the archived snapshots — the initial state and the laws of physics uniquely determine the present location of the spaceship beyond the cosmological horizon, if it exists. The engineer claims that the simulation actually does contain the spaceship outside the cosmological horizon, and the extrapolation process they just described is simply the decompression algorithm. Is the engineer right? Again, we run into the same problem. To answer the question we must either make an arbitrary decision or give an answer that is relative to some of the simulation’s inhabitants.
And now we have the same problem with deciding whether this digital model is “identical to the real universe as if it were on a computer”. Even if we believe that the spaceship still exists, we have trouble deciding whether the spaceship exists “in the model”.
Why should it if its purpose is to simulate reality for humans? What’s wrong with a version of The Truman Show?
Because since everything would be a simulation, “all stars” would be identical in meaning with “all stars that are being simulated” and with “all stars for which the computer includes data”.
In a Truman Show situation, the simulators would’ve shown us white pin-pricks for thousands of years, and then started doing actual astrophysics simulations only when we got telescopes.
A variant of Löb’s theorem, isn’t it?
Edit: Downvoted because the parallels are too obvious, or because the comparison seems too contrived? “E”nquiring minds want to know …
Before reading other answers, I would guess that a statement is meaningful if it is either implied or refuted by a useful model of the universe—the more useful the model, the more meaningful the statement.
This is incontrovertibly the best answer given so far. My answer was that a proposition is meaningful iff an oracle machine exists that takes as input the proposition and the universe, outputs 0 if the proposition is true and outputs 1 if the proposition is false. However, this begs the question, because an oracle machine is defined in terms of a “black box”.
Looking at Furslid’s answer, I discovered that my definition is somewhat ambiguous—a statement may be implied or refuted by quite a lot of different kinds of models, some of which are nearly useless and some of which are anything but, and my definition offers no guidance on the question of which model’s usefulness reflects the statement’s meaningfulness.
Plus, I’m not entirely sure how it works with regards to logical contradictions.
Where Recursive Justification Hits Bottom and its comment thread should be interesting to you.
In the end, we have to rely on the logical theory of probability (as well as standard logical laws, such as the law of noncontradiction). There is no better choice.
Using Bayes’ theorem (beginning with priors set by Occam’s Razor) tells you how useful your model is.
I think I was unclear. What I was considering was along the following lines:
What occurred to me just now, as I wrote out the example, is the idea of simplicity. If you penalize models that add complexity without addition of practical value, the professor’s list will be rapidly cut from almost any model more general than “what answer will receive a good grade on this professor’s tests?”
For a belief to be meaningful you have to be able to describe evidence that would move your posterior probability of it being true after a Bayesian update.
This is a generalization of falsifiability that allows, for example, indirect evidence pertaining to universal laws.
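A minimal numeric sketch of that test, with purely made-up probabilities: describe a candidate piece of evidence and check whether conditioning on it actually moves the posterior.

```python
# Toy numbers only: does some describable evidence move the posterior?
prior = 0.5                    # P(belief)
p_e_if_true = 0.8              # P(evidence | belief)
p_e_if_false = 0.3             # P(evidence | not belief)

p_e = p_e_if_true * prior + p_e_if_false * (1 - prior)
posterior = p_e_if_true * prior / p_e          # Bayes' theorem

print(round(posterior, 3))     # 0.727: the posterior moved, so the belief passes.
# If every describable E has P(E | belief) == P(E | not belief), nothing can
# move the posterior and the belief fails this test.
```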
How about basic logical statements? For example: If P, then P. I think that belief is meaningful, but I don’t think I could coherently describe evidence that would make me change its probability of being true.
Possible counterexample: “All possible mathematical structures exist.”
You’d have to define “exist”, because mathematical structures in themselves are just generalized relations that hold under specified constraints. And once you defined “exist”, it might be easier to look for Bayesian evidence—either for them existing, or for a law that would require them to exist.
As a general thing, my definition does consider under-defined assertions meaningless, but that seems correct.
Yeah, I’m not really sure how to interpret “exist” in that statement. Someone that knows more about Tegmark level IV than I do should weigh in, but my intuition is that if parallel mathematical structures exist that we can’t, in principle, even interact with, it’s impossible to obtain Bayesian evidence about whether they exist.
If we couldn’t, even in principle, find any evidence that would make the theory more likely or less, then yeah I think that theory would be correctly labeled meaningless.
But, I can immediately think of some evidence that would move my posterior probability. If all definable universes exist, we should expect (by Occam) to be in a simple one, and (by anthropic reasoning) in a survivable one, but we should not expect it to be elegant. The laws should be quirky, because the number of possible universes (that are simple and survivable) is larger than the subset thereof that are elegant.
Why? That assumes the universes are weighted by complexity, which isn’t true in all Tegmark level IV theories.
Consider “Elaine is a post-utopian and the Earth is round.” This statement is meaningless, at least in the case where the Earth is round, where it is equivalent to “Elaine is a post-utopian.” Yet it does constrain my experience, because observing that the Earth is flat falsifies it. If something like this came to seem like a natural proposition to consider, I think it would be hard to notice it was (partly) meaningless, since I could still notice it being updated.
This seems to defeat many suggestions people have made so far. I guess we could say it’s not a real counterexample, because the statement is still “partly meaningful”. But in that case it would still be nice if we could say what “partly meaningful” means. I think that the situation often arises that a concept or belief people throw around has a lot of useless conceptual baggage that doesn’t track anything in the real world, yet doesn’t completely fail to constrain reality (I’d put phlogiston and possibly some literary criticism concepts in this category).
My first attempt is to say that a belief A of X is meaningful to the extent that it (is contained in / has an analog in / is resolved by) the most parsimonious model of the universe which makes all predictions about direct observations that X would make.
A solution to that particular example is already in logic—the statements “Elaine is a post-utopian” and “the Earth is round” can be evaluated separately, and then you just need a separate rule for dealing with conjunctions.
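One way to cash out that “separate rule for dealing with conjunctions” is a Kleene-style three-valued logic, where a meaningless conjunct is treated as an unknown value. A toy sketch, with hypothetical names:

```python
# Toy sketch: conjunction where one conjunct may be meaningless (None).
def and3(a, b):
    """Kleene-style conjunction: False dominates; otherwise None stays unknown."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

elaine_is_post_utopian = None     # treated as meaningless for the example
earth_is_round = True

print(and3(elaine_is_post_utopian, earth_is_round))   # None: inherits meaninglessness
print(and3(elaine_is_post_utopian, False))            # False: a flat Earth settles it
```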
For every meaningful proposition P, an author should (in theory) be able to write coherently about a fictional universe U where P is true and a fictional universe U’ where P is false.
So my belief that 2+2=4 isn’t meaningful?
I thought Eliezer’s story about waking up in a universe where 2+2 seems to equal 3 felt pretty coherent.
edit: It seems like the story would be less coherent if it involved detailed descriptions of re-deriving mathematics from first principles. So perhaps ArisKatsaris’ definition leaves too much to the author’s judgement in what to leave out of the story.
I think that it’s a good deal more subtle than this. Eliezer described a universe in which he had evidence that 2+2=3, not a universe in which 2 plus 2 was actually equal to 3. If we talk about the mathematical statement that 2+2=4, there is actually no universe in which this can be false. On the other hand in order to know this fact we need to acquire evidence of it, which, because it is a mathematical truth, we can do without any interaction with the outside world. On the other hand if someone messed with your head, you could acquire evidence that 2 plus 2 was 3 instead, but seeing this evidence would not cause 2 plus 2 to actually equal 3.
On the contrary. Imagine a being that cannot (due to some neurological quirk) directly perceive objects—it can only perceive the spaces between objects, and thus indirectly deduce the presence of the objects themselves. To this being, the important thing—the thing that needs to be counted and to which a number is assigned—is the space, not the object.
Thus, “two” looks like this, with two spaces: 0 0 0
Placing “two” next to “two” gives this: 0 0 0 0 0 0
Counting the spaces gives five. Thus, 2+2=5.
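A small sketch of the space-counter’s arithmetic, just to make the bookkeeping explicit: a row of n objects has n - 1 gaps, so placing two rows side by side adds one extra gap where they meet, which is exactly why this being’s “2 + 2” comes out to 5.

```python
# Toy sketch: arithmetic for a being that counts the gaps between objects.
def gaps(objects):
    return max(len(objects) - 1, 0)    # a row of n objects has n - 1 gaps

two = ["0", "0", "0"]                  # what this being calls "two": 2 gaps
combined = two + two                   # place "two" next to "two"

print(gaps(two), "+", gaps(two), "=", gaps(combined))   # prints: 2 + 2 = 5
```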
I think you misunderstand what I mean by “2+2=4”. Your argument would be reasonable if I had meant “when you put two things next to another two things you end up with four things”. On the other hand, this is not what I mean. In order to get that statement you need the additional, and definitely falsifiable, statement “when I put a things next to b things, I have a+b things”.
When I say “2+2=4”, I mean that in the totally abstract object known as the natural numbers, the identity 2+2=4 holds. On the other hand the Platonist view of mathematics is perhaps a little shaky, especially among this crowd of empiricists, so if you don’t want to accept the above meaning, I at least mean that “SS0+SS0=SSSS0” is a theorem in Peano Arithmetic. Neither of these claims can be false in any universe.
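For concreteness, a sketch of that derivation using only the defining equations of addition in Peano Arithmetic, a + 0 = a and a + Sb = S(a + b):

$$SS0 + SS0 \;=\; S(SS0 + S0) \;=\; SS(SS0 + 0) \;=\; SS(SS0) \;=\; SSSS0.$$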
I think I understand what CCC means by the being that perceives spaces instead of objects—Peano Arithmetic only exists because it is useful for us, human beings, to manipulate numbers that way. Given a different set of conditions, a different set of mathematical axioms would be employed.
Peano Arithmetic is merely a collection of axioms (and axiom schema), and inference laws. Its existence is not predicated upon its usefulness, and neither are its theorems.
I agree that the fact that we actually talk about Peano Arithmetic is a consequence of the fact that it (a) is useful to us (b) appeals to our aesthetic sense. On the other hand, although the being described in CCC’s post may not have developed Peano’s axioms on their own, once they are informed of these axioms (and modus ponens, and what it means for something to be a theorem), they would still agree that “SS0+SS0=SSSS0” in Peano Arithmetic.
In summary, although there may be universes in which the belief “2+2=4” is no longer useful, there are no universes in which it is not true.
I freely concede that a tree falling in the woods with no-one around makes acoustic vibrations, but I think it is relevant that it does not make any auditory experiences.
In retrospect, however, backtracking to the original comment, if “2+2=4” were replaced by “not(A and B) = (not A) or (not B)”, I think my argument would be nearly untenable. I think that probably suffices to demonstrate that ArisKatsaris’s theory of meaningfulness is flawed.
How is it relevant? CCC was arguing that “2+2=4” was not true in some universes, not that it wouldn’t be discovered or useful in all universes. If your other example makes you happy that’s fine, but I think it would be possible to find hypothetical observers to whom De Morgan’s Law is equally useless. For example, the observer trapped in a sensory deprivation chamber may not have enough in the way of actual experiences for De Morgan’s Law to be at all useful in making sense of them.
In my opinion, saying “2+2=4 in every universe” is roughly equivalent to saying “1.f3 is a poor chess opening in every universe”—it’s “true” only if you stipulate a set of axioms whose meaningfulness is contingent on facts about our universe. It’s a valid interpretation of the term “true”, but it is not the only such interpretation, and it is not my preferred interpretation. That’s all.
If this is the case, then I’m confused as to what you mean by “true”. Let’s consider the statement “In the standard initial configuration in chess, there’s a helpmate in 2”. I imagine that you consider this analogous to your example of a statement about chess, but I am more comfortable with this one because it’s not clear exactly what a “poor move” is.
Now, if we wanted to explain this statement to a being from another universe, we would need to taboo “chess” and “helpmate” (and maybe “move”). The statement then unfolds into the following:
“In the game with the following set of rules… there is a sequence of play that causes the game to end after only two turns are taken by each player”
Now this statement is equivalent to the first, but seems to me like it is only more meaningful to us than it is to anyone else because the game it describes matches a game that we, in a universe where chess is well known, have a non-trivial probability of ever playing. It seems like you want to use “true” to mean “true and useful”, but I don’t think that this agrees with what most people mean by “true”.
For example, there are infinitely many true statements of the form “A+B=C” for some specific integers A,B,C. On the other hand, if you pick A and B to be random really large numbers, the probability that the statement in question will ever be useful to anyone becomes negligible. On the other hand, it seems weird to start calling these statements “false” or “meaningless”.
You’re right, of course. To a large extent my comment sprung from a dislike of the idea that mathematics possesses some special ontological status independent of its relevance to our world—your point that even those statements which are parochial can be translated into terms comprehensible in a language fitted to a different sort of universe pretty much refutes that concern of mine.
I suppose it depends on how strict you are about what “coherently” means. A fictional universe is not the same as a possible universe and you probably could write about a universe where you put two apples next to two other apples and then count five apples.
Hmm, I get your point, upvoting—but I’m not sure that “2+2=4” is meaningful in the same sense that “Bob already had 2 apples and bought 2 more apples, he was now in possession of 4 apples” is meaningful.
To the extent that 2+2=4 is just a matter of extending mathematical definitions from Peano Arithmetic, it’s as meaningful as saying 1=1 -- less a matter of beliefs, and more of a matter of definitions. And as far as it represents real events occurring, we can indeed imagine surreal fictional universes where if you buy two apples when you have already two apples, you end up in possession of five or six or zero apples...
A variation on this question “what rule could restrict our beliefs to just propositions that can be decided, without excluding a priori anything true?” is known to be hopeless in a strong sense.
Incidentally I think the phrase “in principle” isn’t doing any work in your koan.
Meaningful seems like an odd word to choose, as it contains the answer itself. What rule restricts our beliefs to just propositions that can be meaningful? Why, we could ask ourselves if the proposition has meaning.
The “atoms” rule seems fine, if one takes out the word “atoms” and replaces it with “state of the universe,” with the understanding that “state” includes both statics and dynamics. Thus, we could imagine a world where QM was not true and other physics held sway, and the state of that world, including its dynamics, would be noticeably different from ours.
And, like daenerys, I think the statement that “Elaine is a post-utopian” can be meaningful, and the implied expanded version of it can be concordant with reality.
[edit] I also wrote my koan answers as I was going through the post, so here’s 1:
And 2:
I very much like your response to (1) - I think the point about having access to a common universe makes it very clear.
Beliefs must pay rent.
Insufficient: the colony ship leaves no evidence.
How about an expanded version: if we could be a timeless spaceless perfect observer of the universe(s), what evidence would we expect to see?
Can you guarantee that a TSPO wouldn’t see epiphenomenal consciousness?
Well, no. How is that different from epiphenomenal spaceships? Our model predicts spaceships but no p-zombies.
I suspect that, when we are born, we already have a first model of physics, a few built-in axioms. As we grow older, we acquire beliefs that are only recursive applications and elaborations of these axioms.
I would say that, if a belief can be reduced to this lowest level of abstraction, it is a meaningful belief.
Proposition p is meaningful relative to the collection of possible worlds W if and only if there exist w, w’ in W such that p is true in the possible world w and false in the possible world w’.
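A minimal sketch of this criterion over a toy finite collection of possible worlds (everything below is illustrative):

```python
# Toy sketch: p is meaningful relative to W iff p is true in some world of W
# and false in another. The worlds and propositions below are illustrative only.
worlds = [
    {"snow": "white", "sky": "blue"},
    {"snow": "black", "sky": "blue"},
]

def is_meaningful(p, worlds):
    values = {p(w) for w in worlds}
    return True in values and False in values

def snow_is_white(w):
    return w["snow"] == "white"

def sky_is_blue(w):
    return w["sky"] == "blue"

print(is_meaningful(snow_is_white, worlds))   # True: it varies across W
print(is_meaningful(sky_is_blue, worlds))     # False: true in every world of this W
```

Note that on this definition a proposition true in every world of W comes out “not meaningful” relative to W, which is one reason the choice of W matters.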
Then the question becomes: to be able to reason in full generality, what collection of possible worlds should one use?
That’s a very hard question.
They are truisms—in principle they are statements that are entirely redundant as one could in principle work out the truth of them without being told anything. However, principle and practice are rather different here—just because we could in principle reinvent mathematics from scratch doesn’t mean that in practice we could. Consequently these beliefs are presented to us as external information rather than as the inevitable truisms they actually are.
“God’s-eye-view” verificationism
A proposition P is meaningful if and only if P and not-P would imply different perceptions for a hypothetical entity which perceives all existing things.
(This is not any kind of argument for the actual existence of a god. Downvote if you wish, but please not due to that potential misunderstanding.)
Doesn’t that require such an entity to be logically possible?
No, in fact it works better on the assumption that there is no such entity.
If it could be an existing entity, then we could construct a paradoxical proposition, such as P=”There exists an object unperceived by anything.”, which could not be consistently evaluated as meaningful or unmeaningful. Treating a “perceiver of all existing things” as a purely hypothetical entity—a cognitive tool, not a reality—avoids such paradoxes.
Huh? We’re talking past each other here.
If there’s an all-seeing deity, P is well-formed, meaningful, and false. Every object is perceived by the deity, including the deity itself. If there’s no all-seeing deity, the deity pops into hypothetical existence outside the real world, and evaluates P for possible perceiving anythings inside the real world; P is meaningful and likely true.
But that’s not what I was talking about. I’m talking about logical possibility, not existence. It’s okay to have a theory that talks about squares even though you haven’t built any perfect squares, and even if the laws of physics forbid it, because you have formal systems where squares exist. So you can ask “What is the smallest square that encompasses this shape?”, with a hypothetical square. But you can’t ask “What is the smallest square circle that encompasses this shape?”, because square circles are logically impossible.
I’m having a hard time finding an example of an impossible deity, not just a Turing-underpowered one, or one that doesn’t look at enough branches of a forking system. Maybe a universe where libertarian free will is true, and the deity must predict at 6AM what any agent will do at 7AM—but of course I snuck in the logical impossibility by assuming libertarian free will.
Oh, oops. My mental model was this: Consider an all-perceiving entity (APE) such that, for all actually existing X, APE magically perceives X. That’s all of the APE’s properties—I’m not talking about classical theism or the God of any particular religion—so it doesn’t look to me like there are logical problems.
Mostly agreed. But that’s not the GEV verificationism I suggested. The above paragraph takes the form “Evaluate P given APE” and “Evaluate P given no-APE”. My suggestion is the reverse; it takes the form “Evaluate APE’s perceptions given P” and “Evaluate APE’s perceptions given not-P”. If the great APE counts as a real thing, what would its set of perceptions be given that there exists an object unperceived by anything? That’s simply to build a contradiction: APE sees everything, and there’s something APE doesn’t see. But if the all-perceiving entity is assumed not to be a real thing, the problem goes away.
Propositions must be able in principle to be connected to a state of how the world could-be, and this connection must be durable over alternate states of basic world identity. That is to say, it should be possible to simulate both states in which the proposition is true, and states in which it is not.
I don’t think there can be any such rule.
Internal consistency. Propositions must be non self-contradictory. If a proposition is a conjunction of multiple propositions, then those propositions must not contradict each other.
I think the condition is necessary but not sufficient. How would it deal with the post-utopian example in the article text?
When we try to build a model of the underlying universe, what we’re really talking about is trying to derive properties of a program which we are observing (and a component of), and which produces our sense experiences. Probably quite a short program in its initial state, in fact (though possibly not one limited by the finite precision of traditional Turing Machines).
So, that gives us a few rules that seem likely to be general: the underlying model must be internally consistent and mathematically describable, and must have a total K-complexity less than the amount of information in the observable universe (or else we couldn’t reason about it).
So the question to ask is really “can I imagine a program state that would make this proposition true, given my current beliefs about my organization of the program?”
This is resilient to the atoms / QM thing, at least, as you can always change the underlying program description to better fit the evidence.
Although, in practice, most of what intelligent entities do can more precisely be described as ‘grammar fitting’ than as ‘program induction.’ We reason probabilistically, essentially by throwing heuristics at a wall to see what offers marginal returns on predicting future sense impressions, since trying to guess the next word in a sentence by reverse-deriving the original state of the universe-program and iterating it forwards is not practical for most people. That massive mess of semi-rational, anticipatorially-justified rules of thumb is what allows us to reason in the day to day.
So a more pragmatic question is ‘how does this change my anticipation of future events?’ or ‘What sense experiences do I expect to have differently as a result of this belief?’
It is only when we seek to understand more deeply and generally, or when dealing with problems of things not directly observable, that it is practical to try to reason about the actual program underlying the universe.
I’m pleased to find this post and community; the writing is thoughtful and challenging. I’m not a philosopher, so some of the post waltzes off the edge of my cognitive dance floor, yet without stumbling or missing a beat. Proposing a rule to restrict belief seems problematic; who enforces the restriction, and how, will bear on whether the outcome is “just.” So, the only just enforcer can be the individual believer. Perhaps the rule might pertain to the intersection of belief and action: beliefs may not cause actions that limit others’ freedom or well-being. Person A believes the sky is blue. Person B complains that person A’s belief limits their ability to believe that the sky is green. But person B’s complaint is out of bounds, as it’s based on B’s desire for unanimity, a desire that limits others’ freedom. Hmm.
For some reason, I did not find this option here (perhaps it is implied somewhere in the sequences): a statement makes sense if, in principle, it is possible to imagine its sensory results in detail. It depends on whether Russell’s teapot makes sense, and it also suggests that 2+2=3 doesn’t make sense.
Restrict propositions to observable references? (Or have a rule about falsifiablility?)
The problem with the observable reference rule is that sense can be divorced from reference and things can be true (in principle) even if un-sensed or un-sensable. However, when we learn language we start by associating sense with concrete reference. Abstractions are trickier.
It is the case that my sensorimotor apparatus will determine my beliefs, and my ability to cross-reference my beliefs with other similar agents with similar sensorimotor apparatus will forge consensus on propositions that are meaningful and true.
Falsifiability is better. I can ask another human: is Orwell post-Utopian? They can say ‘hell no, he is dystopian’… But if some say yes and some say no, it seems I have an issue with vagueness which I would have to clarify with some definition of criteria for post-Utopian and dystopian.
Then once we had clarity of definition we could seek evidence in his texts. A lot of humanities texts however just leave observable reference at the door and run amok with new combinations of sense. Thus you get unicorns and other forms of fantasy...
All the propositions must be logical consequences of a theory that predicts observation, once you’ve removed everything you can from the theory without changing its predictions, and without adding anything.
It seems to me that we at least have to admit two different classes of proposition:
1) Propositions that reflect or imply an expectation of some experiences over others. Examples include the belief that the sky is blue, and the belief that we experience the blueness of the sky mediated by photons, eyes, nerves, and the brain itself.
2) Propositions that do not imply a prediction, but that we must believe in order to keep our model of the world simple and comprehensible. An example of this would be the belief that the photon continues to exist after it passes outside of our light cone.
Solomonoff induction! Just kidding.
If I, given a universal interface to a class of sentient beings, but without access to that being’s language or internal mind-state, could create an environment for each possible truth value of the statement, where any experiment conducted by a being of that class upon the environment would reflect the environment’s programmed truth value of the statement, and that being could form a confidence of belief regarding the statement which would be roughly uniform among beings of that class and generally leaning in the direction of the programmed truth value, then the statement has meaning.
In other words, I put on my robe and wizard’s cap, and you put on your haptic feedback vest and virtual reality helmet, and you tell me whether Elaine is a Post-Utopian.
This should cover propositions whose truth-value might not be knowable by us within our present universe if we can craft the environment such that it is knowable via the interface to the observer. e.g. hyperluminal messaging / teleportation / “pause” mode / “ghost” mode, debug HUDs, etc.
Explicitly assuming realism and reductionism. I think.
A meaningful statement is one that claims the “actual reality” lies within a particular well-defined subset of possible worlds, where each possible world is a complete and concrete specification of everything in that universe, to the highest available precision, at the lowest possible level of description, in that universes own ontology.
Of course, superhuge uncomputable subsets of possible worlds are not practically useful, so we compress by talking about concepts (like “white”, “snow”, “5”, “post-utopian”), among other things. Unfortunately, once we get into Turing-complete compression, we can construct programs (concepts) that do all sorts of stupid stuff (like not halt). Concepts need to be portable between ontologies. This might sink this whole idea.
For example, “snow is white” says the One True Reality is within the (unimaginably huge) subset of possible worlds where the substructures that the “snow” concept matches are also matched by the “white” concept.
For example “2 + 2 = 5” refers to the subset of possible worlds where the concept generated by the application of the higher-order concept “+” to “2” and “2″ will match everything matched by “5”. (I unpacked “=” to “concepts match same things”, but you don’t have to) There’s something really neat about these abstract concepts, but they don’t seem fundamentally different from other ones.
TL;DR: So the rule is “your beliefs should be specified by a probability distribution over exact possible worlds”, and I don’t know of a compression language for possible world subsets that can’t express meaningless concepts (and it probably isn’t worth it to look for one).
“A statement can be meaningful if a test can be constructed that will return only one result, in all circumstances, if the statement is true.”
Consider the statement: If I throw an object off this cliff, then the object will fall. The test is obvious; I can take a wide variety of objects (a bowling ball, a rock, a toy car, and a set of music CDs) and throw them off the cliff. I can then note that all of them fall, and therefore improve the probability that the statement is true. I can then take one final object, a helium balloon, and throw it off the cliff; as the balloon rises, I have shown that the statement is false. (A more correct version would be “if I throw a heavier-than-air object off this cliff, then the object will fall.” It’s still not completely true yet—a live pigeon is heavier than air—but it’s closer).
By this test, however, the statement “Carol is a post-utopian author” is meaningful, as long as there exist some features which are the features of post-utopian authors (the features do not need to be described, or even known, as long as their existence can be proven—repeatable, correct classification by a series of artificial neural networks would prove that such features exist).
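A toy sketch of that classifier test: if independent raters track some shared (possibly unarticulated) feature, their pairwise agreement sits well above chance; with no shared feature it hovers around 0.5. All data below is invented purely for illustration.

```python
# Toy sketch: agreement among independent raters as evidence that a shared
# feature exists, even if nobody can articulate it.
import random
random.seed(0)

n_books = 200
shared_feature = [random.random() < 0.5 for _ in range(n_books)]  # the unnamed property

def rater(noise):
    # Each rater tracks the shared feature but misreads it with probability `noise`.
    return [f if random.random() > noise else not f for f in shared_feature]

def agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / n_books

informed = agreement(rater(0.1), rater(0.1))   # both track the feature
clueless = agreement(rater(0.5), rater(0.5))   # pure guessing

print(round(informed, 2), round(clueless, 2))  # roughly 0.82 vs roughly 0.50
```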
Here’s my first swing at it: A proposition is meaningful if it constrains the predicted observations of any theoretically possible observer.
This way, the proposition “the unmanned starship will not blink out of existence when it leaves my light cone” is meaningful because it’s possible that there might potentially be an observer nearby who observes the starship not disappear.
On the other hand, the statement “The position of this particle is exactly X and its momentum exactly P” is not meaningful under this rule, and that’s a feature.
Taboo “theoretically possible”.
Hm, how about: “[...] of any observer which our best current theory of how minds work says could exist”.
So for example, a statement along the lines of “a ghost watches and sees whether or not Mars continues to exist when it passes behind the Sun from Earth’s perspective” would have been meaningful a long time ago, but is not meaningful for people today who know a little about brains.
This also means that a proposition may be meaningful only because the proposer is ignorant.
Taboo “could”. Basically, counter-factual surgery is a lot trickier than you seem to think.
There aren’t many threads where I’d let that pass. This is one of them.
My $0.02:
A proposition P is meaningful to an observer O to the extent that O can alter its expectations about the world based on P.
This doesn’t a priori exclude anything that could be true, although for any given observer it might do so. As it should. Not every true proposition is meaningful to me, for example, and some true propositions that are meaningful to me aren’t meaningful to my mom.
Of course, it doesn’t necessarily exclude things that are false, either. (Nor should it. Propositions can be meaningful and false.)
For clarity, it’s also perhaps worth distinguishing between propositions and utterances, although the above is also true of meaningful utterances.
Maps are models of the territory. And the usefulness of them is often that they make predictions about parts of the territory I haven’t actually seen yet, and may have trouble getting to at all. The Sun will come up in the morning. There isn’t a leprechaun colony living a mile beneath my house. There aren’t any parts of the moon that are made of cheese.
I have no problem saying that these things are true, but they are in fact extrapolations of my current map into areas which I haven’t seen and may never see. These statements don’t meaningfully stand alone, they arise out of extrapolating a map that checks out in all sorts of other locations which I can check. One can then have meaningful certainty about the zones that haven’t yet been seen.
How does one extrapolate a map? In principle I’d say that you should find the most compressible form—the form that describes the territory without adding extra ‘information’ that I’ve assumed from someplace else. The compressed form then leads to predictions over and above the bald facts that go into it.
The map should match the territory in the places you can check. When I then make statements that something is “true”, I’m making assertions about what the world is like, based on my map. As far as English is concerned, I don’t need absolute certainty to say something is true, merely reasonable likelihood.
Hence the photon. The most compressible form of our description of the universe is that the parts of space that are just beyond visibility aren’t inherently different from the parts we can see. So the photon doesn’t blink out over there, because we don’t see any such blinking out over here.
If by “meaningful” you mean “either true or false” and by “meaningless” you mean “neither true nor false”, then a Platonist and a formalist would disagree about the meaningfulness of the continuum hypothesis. Since I don’t know any knockdown argument for either Platonism or formalism, I defy everyone who claims to have a crisp answer to your question, including possibly you.
OK. Here’s my best shot at it.
Firstly, I don’t really like the wording of the Koan. I feel like a more accurate statement of the fundamental problem here is “What rule could restrict our beliefs to propositions that we can usefully discuss whether or not they are true, without excluding any statements for which we would like to base our behavior on whether or not they are true.” Unfortunately, on some level I do not believe that there is a satisfactory answer here. Though it is quite possible that the problem is with my wanting to base my behavior on the truth of statements whose truth cannot be meaningfully discussed.
To start with, let’s talk about the restriction to statements for which we can meaningfully discuss whether or not they are true. Given the context of the post this is relatively straightforward. If truth is an agreement between our beliefs and reality, and if reality is the thing that determines our experiences, then it is only meaningful to talk about beliefs being true if there are some sequences of possible experiences that could cause the belief to be either true or false. This is perhaps too restrictive a use of “reality”, but certainly such beliefs can be meaningfully discussed.
Unfortunately, I would like to base my actions upon beliefs that do not fall into this category. Things like “the universe will continue to exist after I die” do not have any direct implications for my lifetime experiences, and thus would be considered meaningless. Fortunately, I have found a general transformation that turns such beliefs into beliefs that often have meaning. The basic idea is, instead of asking directly about my experiences, to use Solomonoff induction to ask the question indirectly. For example, the question above becomes (roughly) “will the simplest model of my lifetime experiences have things corresponding to objects existing at times later than anything corresponding to me?” This new statement could be true (as it is with my current set of experiences), or false (if, for example, I expected to die in a big crunch). Now on every statement I can think of, the above rule transforms the statement A to a statement T(A) so that my naive beliefs about A are the same as my beliefs about T(A) (if they exist). Furthermore, it seems that T(A) is still meaningless in the above sense only in cases where I naively believe A to actually be meaningless and thus not useful for determining my behavior. So in some sense, this transformation seems to work really well.
Unfortunately, things are still not quite adding up to normality for me. The thing that I actually care about is whether or not people will exist after my death, not whether certain models contain people after my death. Thus even though this hack seems to be consistently giving me the right answers to questions about whether statements are true or meaningful, it does not seem to be doing so for the right reasons.
In case you were exposing a core uncertainty you had - ‘I want a) people to exist after me more than I want b) a MODEL that people exist after me, but my thinking incorporates b) instead of a); and that means my priorities are wrong’ - and it’s still troubling you, I’d like to suggest the opposite: if you have a model that predicts what you want, that’s perfect! Your model (I think) takes your experiences, feeds them into a Bayesian algo, and predicts the future—what better way is there to think? I mean, I lack such computing power and honesty...but if an honest computer takes my experiences and says, ‘Therefore, people exist after me,’ then my best possible guess is that people exist after me, and I can improve the chance of that using my model.
Only propositions that constrain our sensory experience are meaningful.
If it turns out that the cosmologists are wrong and the universe begins to contract, we will have the opportunity to make contact with the civilization that the colonization starship spawns. The proposition “The starship exists” entails that the probability of the universe contracting and us making contact with the descendants of the passengers of the starship is substantial compared to the probability of the universe contracting.
Counter-example. “There exists at least one entity capable of sensory experience.” Does this statement impose any constraints on sensory experience? If not, do you reject it as meaningless?
Heh. Okay, this and dankane’s similar proposition are good counterexamples.
Least convenient possible world—we discover the universe will definitely expand forever. Now what?
Or what about the past? If I tell you an alien living three million years ago threw either a red or a blue ball into the black hole at the center of the galaxy but destroyed all evidence as to which, is there a fact of the matter as to which color ball it was?
“Possible” is an important qualifier there. Since 0 and 1 are not probabilities, you are not describing a possible world.
The comment doesn’t lose too much if we take ‘definite’ to mean 0.99999 instead of 1. (I would tend to write ‘almost certainly’ in such contexts to avoid this kind of problem.)
Yvain’s objection fails if “definitely” means “with probability 0.99999″. In that case the conditional probability P( encounter civilization | universe contracts) is well-defined.
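In symbols, this is just the standard definition of conditional probability, which is defined whenever the conditioning event has nonzero probability:

$$P(\text{contact} \mid \text{contracts}) = \frac{P(\text{contact} \wedge \text{contracts})}{P(\text{contracts})}, \qquad P(\text{contracts}) > 0.$$

Only when “definitely expands” means probability exactly 1 does the denominator vanish and the conditional become undefined.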
Oh, I thought I retracted the grandparent. Nevermind—it does need more caveats in the expression for it to return to being meaningful.
I think it loses its force entirely in that case. Nisan’s proposal was a counterfactual, and Yvain’s counter was a possible world where that counterfactual cannot obtain. Since there is no such possible world, the objection falls flat.
If this claim is meaningful, isn’t Nisan’s proposal false?
No. Why would that be?
I suspect that the answer to the alien-ball case may be empirical rather than philosophical.
Suppose that there existed quantum configurations in which the alien threw in a red ball, and there existed quantum configurations in which the alien threw in a blue ball, and both of those have approximately equal causal influence on the configuration-cluster in which we are having (approximately) this conversation. In this case, we would happen to be living in a particular type of world such that there was no fact of the matter as to which color ball it was (except that e.g. it mostly wasn’t green).
You’re right, my principle doesn’t work if there’s something we believe with absolute certainty.
If we later find out that the alien did in fact leave some evidence, and recover that evidence, we’ll have an opinion about the color of the ball.
This seems to be avoiding Yvain’s question by answering a preferred one.
The position expressed so far, combined with the avoidance here would seem to give the answer ‘No’.
What about the proposition “the universe will cease to exist when I die” (using some definition of “die” that precludes any future experiences, for example, “die for the last time”)? Then the truth of this proposition does not constrain sensory input (because it only makes claims about times after which you have no sensory input), but does have behavioral ramifications if you are, for example, deciding whether or not to write a will.
First, our territory is a map. This is by nature of our evolving at a physical scale where we exist on a type of planet (rather than at the quantum level or the cosmological level), with a century/day/hour-scale conception of time (rather than a geological one or its opposite), and as a species in which experience is shared, preserved, and consequently accumulated. Differentiating matter is of that perspective; labeling snow is of that perspective; labeling itself, causation, and so on are of that perspective too.
By nature of being, we create a territory. For a map to be true (I don’t like ‘meaningful’), it must correspond with the relevant territory. So, we need more than a Laplacian demon to restrict beliefs to propositions that can be true; we need a demon capable of having both a perfect and an imperfect understanding of nature. It’d have to carve out all possible territories (which can conflict) from our block universe and see them from all possible perspectives, and then you would have to specify which territory you want to see corresponds with whatever map.
Meaningful means it exists. By virtue of (variants of) the macroscopic decoherence interpretations of quantum mechanics and the best understanding I and three other long-time rationalists have of cosmology, everything physically possible exists, either in a quantum mechanical branch or in another Hubble volume.
To narrow it down a bit (but not conclusively) start out by eliminating all propositions that presuppose violation of conservation of energy, that should give you a head start.
Anything physically possible exists within our timeless universe-structure’s causal closures: when we talk “meaningful” or “not meaningful” we are really talking physics or not physics. Perpetual motion, for instance, isn’t physics. Neither is (as far as I know) faster-than-light travel or communication, reversing entropy, ontologically basic mental entities, and a lot of other things. They do not exist in any world in our universe, thus not meaningful, not a thing you can experience.
This of course presupposes knowledge of physics… I’ll have to mull on that. Funny disagreeing with yourself while typing.
No, it doesn’t.
There is a hubble volume beyond ours where you agree with me.
There is a quantum branch where you agree with me.
I am not sure there is a distinction between the two.
Also, you are right, it feels inadequate.
Perpetual motion, faster than light travel, etc. were falsified by scientific experimentation. This means that these hypotheses must have constrained anticipated experience. Maybe they are “meaningless” by some definition of the word (although not any with which I am familiar), but that is not the way Eliezer is using “meaningless”.
Eliezer uses “meaningless” in the context of belief networks. I know that. I have read most of the sequences.
See this
Says who? Even if your multiversal theory is right, that doesn’t follow. Physics doesn’t prove anything about the meaning of the word “meaning”.
Would a powerful AI, from the moment “run_ai” is pressed on the command line till it knows practically everything, ever give a significant probability to violation of conservation of energy?
Humans are really amazingly bad at thinking about physics (Aristotle is a notable example; he practically formalized intuitive physics, which is dead wrong), but what if you aren’t?
I am nearly certain there exists some multiverse branch where humans study the avian migration patterns of the wild hog, but I too am nearly certain there is no multiverse branch within this mutiversal causal closure where even one electron spontaneously appears out of nothing and then goes on its merry way.
I agree this is a different viewpoint than a purely epistemological one, and that any epistemological agent can only approximate the function
(defun exists-in-mutiverse-p...)
, but if you want to be stringent, physics is the way. Furthermore, it pattern-matches against my concept of how Tegmark invented his eponymous hypotheses: finding a basic premise and wondering if it is necessary. Do we really need brains to talk about meaningful hypotheses, or do we just need a big universe?
I don’t see how that addresses my comment. A sentence is meaningful or not because of the laws of language, not the laws of physics.