DDTT. Don’t study words as if they had meanings that you could discover by examining your intuitions about how to use them. Don’t draw maps without looking out of the window.
BS. For example, Eliezer’s take on logical positivism in the most recent Sequence is interesting. But logical positivism has substantial difficulties—identified by competing philosophical schools—that Eliezer has only partially resolved.
Aristotle tried to say insightful things merely by examining etymology, but the best of modern philosophy has learned better.
I only see objections to traditional strains of positivism. It doesn’t seem they even apply to what EY’s been doing. In particular, the problems in objections 1, 3C1, 3C2, and 3F2 have been avoided by being more careful about what is not said. Meanwhile, 2 and 3F1 seem incoherent to me.
3C1: The correspondence relation must be some sort of resemblance relation. But truthbearers do not resemble anything in the world except other truthbearers—echoing Berkeley’s “an idea can be like nothing but an idea”.
I don’t see how Eliezer could dodge this objection, or why he would want to. Very colloquially, Eliezer thinks there is an arrow leading to “Snow is white” from the fact that snow is white. Labeling that arrow “causal” does nothing to explain what that arrow is. If you don’t explain what the arrow is, how do you know that (1) you’ve said something rigorous or (2) that the causal arrows are the same thing as what we want to mean by “true”?
Objection 1: Definitions like (1) or (2) are too broad; although they apply to truths from some domains of discourse, e.g., the domain of science, they fail for others, e.g. the domain of morality: there are no moral facts.
As stated, this objection is too strong (because it assumes moral anti-realism is true). The correspondence theory can be agnostic in the dispute between moral realism and moral anti-realism. But moral realists intend to use the word “true” in exactly the same way that scientists use the word. Thus, a correspondence-theory moral realist needs to be able to identify what corresponds to any particular moral truth—otherwise, moral anti-realism is the correct moral epistemology.
Most people are moral realists, so if your theory of truth is inconsistent with moral realism, they will take that as evidence that your theory of truth is not correct.
Look, no one but a total idiot believes Mark’s epistemic theory. There is an external world, with sufficient regularity that our physical predictions will be accurate within the limits of our knowledge and computational power. The issue is whether that can be stated more rigorously—and the different specifications are where logical positivists, physical pragmatists, Kuhn, and other theorists disagree.
I do agree that objections 2 and 3F2 are not particularly compelling (as I understand them).
3C1: The correspondence relation must be some sort of resemblance relation. But truthbearers do not resemble anything in the world except other truthbearers—echoing Berkeley’s “an idea can be like nothing but an idea”.
This is actually a very easy one to respond to. Truthbearers do resemble non-truthbearers. What must ultimately be truth-bearing, if anything really is, is some component of the world—a brain-state, an utterance, or what-have-you. These truth-bearing parts of the world can resemble their referents, in the sense that a relatively simple and systematic transformation on one would yield some of the properties of the other. For instance, a literal map clearly resembles its territory; eliminating most of the territory’s properties, and transforming the ones that remain in a principled way, could produce the map. But sentences also resemble the territories they describe, e.g., through temporal and spatial correlation. Even Berkeley’s argument clearly fails for this reason; an immaterial idea can systematically share properties with a non-idea, if only temporal ones.
Eliezer thinks there is an arrow leading to “Snow is white” from the fact that snow is white.
Language use is a natural phenomenon. Hence, reference is also a natural phenomenon, and one we should try to explain as part of our project of accounting for the patterns of human behavior. Here, we’re trying to understand why humans assert “Snow is white” in the particular patterns they do, and why they assign truth-values to that sentence in the patterns they do. The simplest adequate hypothesis will note that usage of “snow” correlates with brain-states that in turn resemble (heavily transformed) snow, and that “white” correlates with brain-states resembling transformed white light, and that “Snow is white” expresses a relationship between these two phenomena such that white light is reflected off of snow. When normal English language users think white light reflects off of snow, they call the sentence “snow is white” true; and when they think the opposite, they call “snow is white” false. So, there is a physical relationship between the linguistic behavior of this community and the apparent properties of snow.
Most people are moral realists, so if your theory of truth is inconsistent with moral realism, they will take that as evidence that your theory of truth is not correct.
Yes, but is our goal to convince everyone that we’re correct, or to be correct? The unpopularity of moral anti-realism counts against the rhetorical persuasiveness of a correspondence theory combined with a conventional scientific world-view. But it will only count against the plausibility of this conjunction if we have reason to think that moral statements are true in the same basic way that statements about the whiteness of snow are true.
one we should try to explain as part of our project of accounting for the patterns of human behavior.
In brief, I disagree that we are trying to explain human behavior. We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
Regarding moral facts, I agree that our goal is true philosophy, not comforting philosophy. I’m a moral anti-realist independent of theory-of-truth considerations. But most people seem to feel that their moral senses are facts (yes, I’m well aware of the irony of appealing to universal intuitions in a post that urges rejection of appeals to universal intuitions).
The widespread nature of belief in values-as-truths cries out for explanation, and the only family of theories I’m aware of that even try to provide such an explanation is wildly controversial and unpopular in the scientific community.
We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
I’m not sure ‘agent’ is a natural kind. ‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties. So I spoke in terms of concrete human behaviors in order to maintain agnosticism about how generalizable these properties are. If they do turn out to be generalizable, then great. I don’t think any part of my account precludes that possibility.
The widespread nature of belief in values-as-truths cries out for explanation
Yes. My explanation is that our mental models do treat values as though they were real properties of things. Similarly, our mental models treat chairs as discrete solid objects, treat mathematical objects as mind-independent reals, treat animals as having desires and purposes, and treat possibility and necessity as worldly facts. In all of these cases, our evidence for the metaphysical category actually occurring is much weaker than our apparent confidence in the category’s reality. So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties.
If so, rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
ETA
So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
I go for the third option.
If so, rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result. ‘True’ is just a word. Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble. Sometimes it’s worthwhile to actively criticize a use of ‘truth;’ sometimes it’s worthwhile to participate in the gerrymandering ourselves; and sometimes it’s worthwhile just to avoid getting involved in the kerfuffle. For instance, criticizing people for calling ‘Sherlock Holmes is a detective’ true is both less useful and less philosophically interesting than criticizing people for calling ‘there is exactly one empty set’ true.
Also, it’s important to remember that there are two different respects in which ‘truth’ might be gerrymandered. First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems. One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result
People try to do that, but rationalists don’t have to regard it as legitimate, and can object. However, if a notion of truth is adopted that is pluralistic and has no constraint on its pluralism—Anything Goes—rationalists could no longer object to, e.g., Astrological Truth.
‘True’ is just a word.
Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble.
So you say. Most rationalists are engaged in some sort of wider debate.
sometimes it’s worthwhile to participate in the gerrymandering
Even if it is intellectually dishonest to do so?
First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems.
I think you may have confused truth with states-of-mind-having-content-about-truth. Electrons are simple, thoughts about them aren’t.
One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
Something’s not being a natural kind is not justification for arbitrarily changing its definition. I don’t get to redefine the taste of chocolate as a kind of pain.
No one on this thread, up till now, has mentioned an arbitrarily changing or anything-goes model of truth. Perhaps you misunderstood what I meant by ‘gerrymandered.’ All I meant was that the referent of ‘truth’ in physical or biological terms may be an extremely complicated and ugly array of truth-bearing states. Conceding that doesn’t mean that we should allow ‘truth’ (or any word) to be used completely anarchically.
It might be. Then philosophers would be correct to look for a sense that all those referents have in common.
I don’t see how Eliezer could dodge this objection, or why he would want to.
I would phrase that as: he has recast it so that it is non-objectionable.
A lot of the other objections are of the nature “how do you know?” And generally he lets the answer be, “we don’t know that to a degree of certainty that—it has been correctly pointed out—would be philosophically objectionable.”
Well, that moves much closer to making objection 2 meaningful. If all that the correspondence theory of truth can do is reassure us that our colloquial usage of “truth” gestures at a unified and meaningful philosophical concept, then it isn’t much use. It is not like anyone seriously doubts that “empirically true” is a real thing.
And I say that as a post-modernist.
I still don’t understand this ‘usefulness’ objection. If the correspondence theory of truth is a justification for colloquial notions of truth, its primary utility does lie in our not worrying too much about things we don’t actually need to worry about. There are other uses such as molding the way one approaches knowledge under uncertainty. The lemmas needed to produce the final “everything’s basically OK” result provide significant value.
There are many concepts where the precise contours of the correct position make no practical difference to most people. Examples include (1) Newtonian mechanics vs. relativity and QM, (2) the meaning of infinity, or (3) persistence of identity. Many of the folk versions of those types of concepts are inadequate in dealing with edge cases (e.g. the folk theory of infinity is hopelessly broken). The concept of “truth” is probably in this no-practical-implications category. As I said, there’s no particular reason to doubt truth exists, whether the correspondence theory is correct or not.
Anyway, edge cases don’t tend to come up in ordinary life, so there’s no good reason for most people to be worried. If one isn’t worried, then the whole correspondence-theory-of-truth project is pointless to you. Without worry, reassurance is irrelevant. By contrast, if you are worried, the correspondence theory is insufficient to reassure you. Your weaker interpretation is vacuous; Eliezer’s stronger version has flaws.
None of this says that one should worry about what “truth” is, but having taken on the question, I think Eliezer has come up short in answering it.
I haven’t communicated clearly. There are two understandings of useful—practical-useful and philosophy-useful. Arguments aimed at philosophy-use are generally irrelevant to practical-use (aka “Without worry, reassurance is irrelevant”).
In particular, the correspondence theory of truth has essentially no practical-use. The interpretation you advocate here removes philosophical-use.
“Everything’s basically ok.” is a practical-use issue. Therefore, it’s off-topic in a philosophical-use discussion.
I don’t see where it’s coming up short in the first two examples you gave.
I mentioned the examples to try to explain the distinction between practical-use and philosophical-use. Believing the correspondence theory of truth won’t help with any of the examples I gave. Ockham’s Razor is not implied by the correspondence theory. Nor is Bayes’ Theorem. Correspondence theory implies physical realism, but physical realism does not imply correspondence theory.
I think it is important to note that what we’ve been calling theories of truth are actually aimed at being theories of meaningfulness. As lukeprog implicitly asserts, there are whole areas of philosophy where we aren’t sure there is anything substantive at all. If we could figure out the correct theory of meaningfulness, we could figure out which areas of philosophy could be discarded entirely without close examination.
For example, Carnap and other logical positivists thought Heidegger’s assertion that “Das Nichts nichtet” was meaningless nonsense. I’m not sure I agree, but figuring out questions like that is the purpose of a theory of meaning / truth.
I see, so you aren’t really concerned with practical-use applications; you’re more interested in figuring out which areas of philosophy are meaningful. That makes sense, but, on the other hand, can an area of philosophy with a well-established practical use still be meaningless?
It sure would be surprising if that happened. But meaningfulness is not the only criterion one could apply to a theory. No one thinks Newtonian physics is meaningless, even though everyone thinks Newtonian physics is wrong (i.e. less right than relativity and QM).
In other words, one result of a viable theory of truth would be a formal articulation of “wronger than wrong.”
No one thinks Newtonian physics is meaningless, even though everyone thinks Newtonian physics is wrong (i.e. less right than relativity and QM).
That’s not the same as “wrong”, though. It’s just “less right”, but it’s still good enough to predict the orbit of Venus (though not Mercury), launch a satellite (though not a GPS satellite), or simply lob cannonballs at an enemy fortress, if you are so inclined.
From what I’ve seen, philosophy is more concerned with logical proofs and boolean truth values. If this is true, then perhaps that is the reason why philosophy is so riddled with deep-sounding yet ultimately useless propositions? We’d be in deep trouble if we couldn’t use Newtonian mechanics just because it’s not as accurate as QM, even though we’re dealing with macro-sized cannonballs moving slower than sound.
As far as I can tell, we’re in the middle of a definitional dispute—and I can’t figure out how to get out.
My point remains that Eliezer’s reboot of logical positivism does no better (and no worse) than the best of other logical positivist philosophies. A theory of truth needs to be able to explain why certain propositions are meaningful. Using “correspondence” as a semantic stop sign does not achieve this goal.
Abandoning the attempt to divide the meaningful from the non-meaningful avoids many of the objections to Eliezer’s point, at the expense of failing to achieve a major purpose of the sequence.
It’s not so much a definitional dispute as I have no idea what you’re talking about.
Suggesting that there’s something out there which our ideas can accurately model isn’t a semantic stop sign at all. It suggests we use modeling language, which does, contra your statement elsewhere, suggest using Bayesian inference. It gives sufficient criteria for success and failure (test the models’ predictions). It puts sane epistemic limits on the knowable.
That seems neither impractical nor philosophically vacuous.
The philosophical problem has always been the apparent arbitrariness of the rules. You can say that “meaningful” sentences are empirically verifiable ones. But why should anyone believe that? The sentence “the only meaningful sentences are the empirically verifiable ones” isn’t obviously empirically verifiable. You have over-valued clarity and under-valued plausibility.
They need to be meaningful. If your definition of meaningfulness asserts its own meaninglessness, you have a problem. If you are asserting that there is truth-by-stipulation as well as truth-by-correspondence, you have a problem.
What about mathematics, then? Does it correspond to something “out there”? If so, what/where is it? If not, does this mean that math is not meaningful?
Math is how you connect inferences. The results of mathematics are of the form ‘if X, Y, and Z, then A’… so, find cases where X, Y, and Z hold, and then check A.
It doesn’t even need to be a practical problem. Every time you construct an example, that counts.
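To make the ‘if X, Y, and Z, then A’ shape concrete, here is a minimal sketch; the particular theorem and the Python phrasing are mine, purely for illustration. Take ‘if a and b are odd integers, then a + b is even’ and check it against a batch of constructed cases.

```python
# A toy instance of "if X, Y, and Z, then A":
# theorem: if a is odd and b is odd, then a + b is even.
# Constructing cases where the hypotheses hold and checking the
# conclusion is one way the mathematics gets connected to something.
def is_odd(n):
    return n % 2 == 1

def is_even(n):
    return n % 2 == 0

for a in range(-50, 51):
    for b in range(-50, 51):
        if is_odd(a) and is_odd(b):   # the hypotheses
            assert is_even(a + b)     # the conclusion

print("every constructed case with odd a and odd b gave an even sum")
```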
I don’t see how that addresses the problem. You have said that there is one kind of truth/meaningfulness, based on modelling reality, and then you describe mathematical truth in a form that doesn’t match that.
If any domain can have its own standards of truth, then astrologers can say their merchandise is “astrologically true”. You have anything goes.
This stuff is a tricky, typically philosophical problem because the obvious answers all have problems. Saying that all truth is correspondence means that either mathematical Platonism holds—mathematical truths correspond to the status quo in Plato’s heaven—or maths isn’t meaningful/true at all. Or truth isn’t correspondence, and it’s anything goes.
I don’t think those problems are irresolvable, and EY has in fact suggested (but not originated) what I think is a promising approach.
How does it not match? Take the 4 color problem. It says you’re not going to be able to construct a minimally-5-color flat map. Go ahead. Try.
That’s the kind of example I’m talking about here. The examples are artificial, but by constructing them you are connecting the math back to reality. Artificial things are real.
If any domain can have its own standards of truth, then astrologers can say their merchandise is “astrologically true”.
What? How is holding everything to the standard of ‘predict accurately or you’re wrong’ the same as ‘anything goes’?
I mean, if astrology just wants to be a closed system that never ever says anything about the outside world… I’m not interested in it, but it suddenly ceases to be false.
How does it not match? Take the 4 color problem. It says you’re not going to be able to construct a minimally-5-color flat map. Go ahead. Try.
That doesn’t match reality because it would still be true in universes with different laws of physics.
‘predict accurately or you’re wrong’ the same as ‘anything goes’?
It isn’t. It’s a standard of truth that is too narrow to include much of maths.
I mean, if astrology just wants to be a closed system that never ever says anything about the outside world
That doesn’t follow. Astrologers can say their merchandise is about the world, and true, but not true in a way that has anything to do with correspondence or prediction.
That doesn’t match reality because it would still be true in universes with different laws of physics.
If you’re in a different universe with different laws of physics, your implementation of the 4 color problem will have to be different. Your failure to correctly map between math and reality isn’t math’s problem. Math, as noted above, is of the form ‘if X and Y and Z, then A’ - and you can definitely arrange formal equivalents to X, Y, and Z by virtue of being able to express the math in the first place.
That doesn’t follow. Astrologers can say their merchandise is about the world, and true, but not true in a way that has anything to do with correspondence or prediction.
It’s about the world but it doesn’t correspond to anything in the world? Then the correspondence model of truth has just said they’re full of shit. Victoreeee!
(note: above ‘victory’ claim is in reference to astrologers, not you)
If you’re in a different universe with different laws of physics, your implementation of the 4 color problem will have to be different.
I don’t have to implement it at all to see its truth. Maths is not just applied maths.
Math, as noted above, is of the form ‘if X and Y and Z, then A’ - and you can definitely arrange formal equivalents to X, Y, and Z by virtue of being able to express the math in the first place.
I don’t see what you mean. (Non-applied) maths is just formal, period, AFAIAC.
It’s about the world but it doesn’t correspond to anything in the world? Then the correspondence model of truth has just said they’re full of shit. Victoreeee!
And Astrologers can just say that the CToT is shit and they have a better ToT.
People who have different ‘theories’ of truth really have different definitions of the word ‘truth.’ Taboo that word away, and correspondence theorists are really criticizing astrologists for failing to describe the world accurately, not for asserting coherentist ‘falsehoods.’ Every reasonable disputant can agree that it is possible to describe the world accurately or inaccurately; correspondence theorists are just insisting that the activity of world-describing is important, and that it counts against astrologists that they fail to describe the world.
(P.S. Real astrologists are correspondence theorists. They think their doctrines are true because they are correctly describing the influence celestial bodies have on human behavior. Even idealists at least partly believe in correspondence theory; my claims about ideas in my head can still be true or false based on whether they accurately describe what I’m thinking.)
People who have different ‘theories’ of truth really have different definitions of the word ‘truth.’
That is not at all obvious. Let “that which should be believed” be the definition of truth. Then a correspondence theorist and a coherence theorist still have plenty to disagree about, even if they both hold to the definition.
Agreed. However, it’s still the right view, as well as being the most useful one, since tabooing lets us figure out why people care about which ‘theory’ of ‘truth’ is.… (is what? true?). The real debate is over whether correspondence to the world is important in various discussions, not over whether everyone means the same thing (‘correspondence’) by a certain word (‘truth’).
Let “that which should be believed” be the definition of truth.
You can stipulate whatever you want, but “that which should be believed” simply isn’t a credible definition for that word. First, just about everyone thinks it’s possible that, in certain circumstances, one ought to believe a falsehood. Second, propositional ‘belief’ itself is the conviction that something is true; we can’t understand belief until we first understand what truth is, or in what sense ‘truth’ is being used when we talk about believing something. Truth is a more basic concept than belief.
If your branch of mathematics is so unapplied that you can’t even represent it in our universe, I suspect it’s no longer math.
Any maths can be represented the way it usually is, by writing down some essentially arbitrary symbols. That does not indicate anything about “correspondence” to reality. The problem is the “arbitrary” in arbitrary symbol.
Let’s say space is three dimensional. You can write down a formula for 17-dimensional space, but that doesn’t mean you have a chunk of 17-dimensional space for the maths to correspond to. You just have chalk on a blackboard.
Sure. And yet, you can implement vectors in 17 dimensional spaces by writing down 17-dimensional vectors in row notation. Math predicts the outcome of operations on these entities.
Show me a 17-vector. And what is being predicted? The only way to get at the behaviour is to do the math, and the only way to do the predictions is… to do the math. I think meaningful prediction requires some non-identity between predictee and predictor.
The predictions of the mathematics of 17-dimensional space would, yes, depend on the outcome of other operations such as addition and multiplication—operations we can implement more directly in matter.
I have personally relied on the geometry of 11-dimensional spaces for a particular curve-fitting model to produce reliable results. If, say, the Pythagorean theorem suddenly stopped applying above 3 dimensions, it simply would not have worked.
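For what it’s worth, the claim that the Pythagorean relation keeps applying above 3 dimensions is easy to spot-check numerically. The sketch below is only illustrative (it is not the curve-fitting model referred to above): for orthogonal vectors u and v in 11 dimensions, |u + v|² should equal |u|² + |v|².

```python
# Spot-check the Pythagorean relation in 11 dimensions:
# for orthogonal u and v, |u + v|^2 = |u|^2 + |v|^2.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=11)
v = rng.normal(size=11)
v -= (v @ u) / (u @ u) * u            # project out the component along u

lhs = (u + v) @ (u + v)               # |u + v|^2
rhs = u @ u + v @ v                   # |u|^2 + |v|^2
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```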
I’m seeing pixels on a 2d screen. I’m not seeing an existing 17-dimensional thing.
The mathematics of 17d space predict the mathematics of 17d space. They couldn’t fail to. Which means no real prediction is happening at all. 1.0 is not a probability.
There are things we can model as 17-dimensional spaces, and when we do, the behavior comes out the way we were hoping. This is because of formal equivalence: the behavior of the numbers in a 17-dimensional vector space precisely corresponds to the geometric behavior of a counterfactual 17-dimensional euclidean space. You talk about one, you’re also saying something about the other.
There are things we can model as 17-dimensional spaces,
But they are not 17-dimensional spaces. They have different physics. Treating them as 17-dimensional isn’t modelling them, because it isn’t representing them as they are.
To be concrete, suppose we have a robotic arm with 17 degrees of freedom of movement. Its current state can and should be represented as a 17-dimensional vector, to which you should do 17-dimensional math to figure out things like “Where is the robotic arm’s index finger pointing?” or “Can the robotic arm touch its own elbow?”
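As a toy version of that calculation (the planar arm of 17 unit-length links is my simplification for illustration, not a real robot): the arm’s state is the 17-vector of joint angles, and the fingertip position falls out of doing math on that vector.

```python
# Toy forward kinematics: a planar chain of 17 unit-length links.
# The state is a 17-dimensional vector of joint angles; "where is
# the index finger pointing?" is computed from math on that vector.
import numpy as np

def fingertip(joint_angles):
    absolute = np.cumsum(joint_angles)    # absolute angle of each link
    x = np.sum(np.cos(absolute))          # unit-length links, summed
    y = np.sum(np.sin(absolute))
    return x, y

state = np.full(17, 0.1)                  # one particular 17-dimensional state
print(fingertip(state))
```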
Not obvious. It would just be a redundant way of representing a 3d object in 3 space.
The point of contention is the claim that for any maths, there is something in reality for it to represent. Now, we can model a system of 10 particles as 1 particle in 30-dimensional space, but that doesn’t prove that 30d maths has something in reality to represent, since in reality there are 10 particles. It was our decision, not reality’s, to treat it as 1 particle in a higher-d space.
Past a certain degree of complexity, there are lots of decisions about representing objects that are “ours, not reality’s”. For example, even if you represent the 10 particles as 10 vectors in 3D space, you still choose an origin, a scale, and a basis for 3D space, and all of these are arbitrary.
The 30-dimensional particle makes correct predictions of the behavior of the 10 particles. That should be enough.
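A minimal sketch of that equivalence (free, non-interacting particles only, and the reshaping is mine, just to make the point concrete): step ten 3-D particles forward directly, then step the same system as a single point in a 30-dimensional configuration space, and compare.

```python
# Ten free particles stepped forward in 3-D, versus the same system
# treated as a single point in a 30-dimensional configuration space.
# The two descriptions make identical predictions.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(size=(10, 3))        # ten particles, 3-D positions
vel = rng.normal(size=(10, 3))        # ten velocities
dt, steps = 0.01, 100

a = pos.copy()                         # (a) ten particles in 3-D
for _ in range(steps):
    a = a + vel * dt

b = pos.reshape(30).copy()             # (b) one point in 30-D
v30 = vel.reshape(30)
for _ in range(steps):
    b = b + v30 * dt

assert np.allclose(a.reshape(30), b)
print("same trajectory either way")
```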
Treating a mathematical formula as something that cranks out predictions is treating it instrumentally, which is treating it unrealistically. But you cannot have a coherent notion of modeling or representation if there is no real territory being modeled or represented.
To argue that all maths is representational, you either have to claim we are living in Tegmark’s Level IV, or you have to stretch the meaning of “representation” to meaninglessness. Kindly and Luke Sommers seem to be heading down the second route.
A correct 30D formula will make correct predictions. Mathematical space also contains an infinity of formulations that are incorrect. Surely it is obvious that you can’t claim everything in maths correctly models or predicts something in reality.
Everything in maths either (1) predicts something that could happen in reality (e.g. we’re not rejecting math with 2+2=4 apples just because I only have 3 apples in my kitchen), or (2) is an abstraction of other mathematical ideas.
Do you claim that (2) is no longer modeling something in reality? It is arguably still predicting things about reality once you unpack all the layers of abstraction—hopefully at least it has consequences relevant to math that does model something.
Or do you think that I’ve missed a category in my description of math?
It is arguably still predicting things about reality once you unpack all the layers of abstraction
I don’t see what abstraction has to do with it. The Standard Model has about 18 parameters. Vary those, and it will mispredict. I don’t think all the infinity of incorrect variations of the SM are more abstract.
As a physicist, I can say with a moderate degree of authority: no.
I have seen mathematical equations used to describe population genetics. That was not physics. I have seen mathematical equations used to describe supply and demand curves. That was not physics. Etc.
If you’re using math to model something, or even could so use it, that is sufficient for it to have a correspondent for purposes of the correspondence theory of truth.
If you’re using math to model something, or even could so use it, that is sufficient for it to have a correspondent for purposes of the correspondence theory of truth.
But that is not sufficient to show that all maths models.
… okay, you were confusing before, but now you’re exceptionally confusing. You’re saying that the standard model of particle physics is an example of math that doesn’t model anything?
Well, it doesn’t model our universe. And the Standard Model is awfully complicated for someone to build a condensed matter system implementing a randomized variant of it. But it’s still a quantum mechanical system, so I wouldn’t bet strongly against it.
And of course if someone decided for some reason to run a quantum simulation using this H-sm-random, then anything you mathematically proved about H-sm-random would be proved about the results of that simulation. The correspondence would be there between the symbols you put in and the symbols you got back, by way of the process used to generate those outputs. It just would be modeling something less cosmically grand than the universe itself, just stuff going on inside a computer. It wouldn’t be worth while to do… but it still corresponds to a relationship that would hold if you were dumb enough to go out of your way to bring it about.
The thing about the correspondence theory of truth is that once something has been reached as corresponding to something and thus being eligible to be true, it serves as a stepping-stone to other things. You don’t need to work your way all the way down to ‘ground floor’ in one leap. You’re allowed to take general cases, not all of which need to be instantiated. Correspondence to patterns instead of instances is a thing.
And of course if someone decided for some reason to run a quantum simulation using this H-sm-random, then anything you mathematically proved about H-sm-random would be proved about the results of that simulation.
Which, as in your other examples, is a case of a model modeling a model. You can build something physical that simulates a universe where electrons have twice the mass, and you can predict the virtual behaviour of the simulation with an SM where the electron mass parameter is doubled, but the simulation will be made of electrons with standard mass.
The correspondence would be there between the symbols you put in and the symbols you got back, by way of the process used to generate those outputs. It just would be modeling something less cosmically grand than the universe itself, just stuff going on inside a computer.
It wouldn’t be modelling reality.
The thing about the correspondence theory of truth
…is that it is a poor fit for mathematical truth. You are making mathematical theorems correspondence-true by giving them something artificial to correspond to. Before the creation of a simulation at time T, there is nothing for them to correspond to. This is a mismatch with the intuition that mathematical truths are timelessly true.
is that once something has been reached as corresponding to something and thus being eligible to be true, it serves as a stepping-stone to other things. You don’t need to work your way all the way down to ‘ground floor’ in one leap. You’re allowed to take general cases, not all of which need to be instantiated. Correspondence to patterns instead of instances is a thing.
You can gerrymander CToT into something that works, however inelegantly, for maths, or you can abandon it in favour of something that doesn’t need gerrymandering.
Physics uses a subset of maths, so the rest would be examples of valid (I am substituting that for “meaningful”, which I am not sure how to apply here) maths that doesn’t correspond to anything external, absent Platonism.
The word “True” is overloaded in the ordinary vernacular. Eliezer’s answer is to set up a separate standard for empirical and mathematical propositions.
Empirical assertions use the label “true” when they correspond to reality. Mathematical assertions use the label “valid” when the theorem follows from the axioms.
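To illustrate the “valid” half of that distinction with a toy example (this is just an illustration of what following-from-axioms means, not anything taken from the Sequence): in a proof assistant, calling a statement valid is exactly the claim that it follows from the axioms and inference rules, with no appeal to how the world happens to be.

```lean
-- Illustrative only: "valid" here means the statement follows from
-- the axioms and rules alone, regardless of what the world is like.
theorem two_plus_two : 2 + 2 = 4 := rfl

theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```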
Eliezer’s answer is to set up a separate standard for empirical and mathematical propositions.
I don’t think it is, and that’s a bad answer anyway. To say that two unrelated approaches are both truth allows anything to join the truth club, since there are no longer criteria for membership.
Well, I don’t think that Eliezer would call mathematically valid propositions “true.” I don’t find that answer any more satisfying than you do. But (as your link suggests), I don’t think he can do better without abandoning the correspondence theory.
Suggesting that there’s something out there which our ideas can accurately model . . .
Simply put, there’s no one who disagrees with this point. And the correspondence theory cannot demonstrate it, even if there were a dispute.
Let me make an analogy to decision theory: In decision theory, the hard part is not figuring out the right answer in a particular problem. No one disputes that one-boxing in Newcomb’s problem has the best payoff. The difficulty in decision theory is rigorously describing a decision theory that comes up with the right answer on all the problems.
To make the parallel explicit, the existence of the external world is not the hard problem. The hard problem is what “true” means. For example, this comment is a sophisticated argument that “true” (or “meaningful”) are not natural kinds. Even if he’s right, that doesn’t conflict with the idea of an external world.
If I understood you correctly, then Berkeley-style Idealists would be an example. However, I have a strong suspicion that I’ve misunderstood you, so there’s that...
Solipsists, by some meanings of “out there”. More generally, skeptics. Various strong forms of relativism, though you might have to give them an inappropriately modernist interpretation to draw that out. My mother-in-law.
I have read a lot of philosophy, and I don’t think EY is doing it particularly well. His occasional cross-disciplinary insights keep me going (I’m cross-disciplinary too; I started in science and work in IT).
But he often fails to communicate clearly (I still don’t know whether he thinks numbers exist) and argues vaguely.
If you’ll pardon the pun, I leave you with “Why I Stopped Worrying About the Definition of Life, and Why You Should as Well”.
I don’t see your point. For one thing, I’m not on the philosophy “side” in some sense exclusive of being on the science or CS side or whatever. For another, there are always plenty of philosophers who are against GOFCA (Good Old Fashioned Conceptual Analysis). The collective noun for philosophers is “a disagreement”. That’s another of my catchphrases.
Eliezer often fails to communicate clearly (I still don’t know whether he thinks numbers exist) and argues vaguely.
Agree! Very frustrating. What I had in mind was, for example, his advice about dissolving the question, which is not the same advice you’d get from logical positivists or (most) contemporary naturalists.
I don’t see your point.
Sorry, I should have been clearer that I wasn’t trying to make much of a point by sending you the Machery article. I just wanted to send you a bit of snark. :)
What I had in mind was, for example, his advice about dissolving the question, which is not the same advice you’d get from logical positivists or (most) contemporary naturalists.
I skimmed the paper. Dennett’s project is a dissolving one, though he does less to explain why we think we have qualia than Yudkowsky did with regard to why we think we have free will. But perhaps Dennett wrote something later which more explicitly sets out to explain why we think we have qualia?
I need to know positively how to answer typical philosophical questions such as the meaning of life.
Only if the question is meaningful. Of course, just saying “Don’t do that then” doesn’t tell you how to resolve whether that’s the case or not, but expecting an answer rather than a dissolution is not necessarily correct.
I absolutely loathe the way you phrased that question for a variety of reasons (and I suspect analytic philosophers would as well), so I’m going to replace “meaning of life” with something more sensible like “solve metaethics” or “solve the hard problem of consciousness.” In which case, yes. I think computer science is more likely to solve metaethics and other philosophical problems because the field of philosophy isn’t founded on a program and incentive structure of continual improvement through feedback from reality. Oh, and computer science works on those kinds of problems (so do other areas of science, though).
I don’t think you have phrased “the question” differently and better; I think you have substituted two different questions. Well, maybe you think the MoL is a ragbag of different questions, not one big one. Maybe it is. Maybe it isn’t. That would be a philosophical question. I don’t see how empiricism could help. Speaking of which...
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
I’ve substituted problems that philosophy is actually working on (metaethics and consciousness) for one that analytic philosophy isn’t (meaning of life). Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple. How could computer science or science dissolve this problem? (1) By not working on it because it’s unanswerable by the only methods by which we can be said to have answered something, or (2) making the problem answerable by operationalizing it or by reforming the intent of the question into another, answerable, question.
Through the process of science, we gain enough knowledge to dissolve philosophical questions or make the answer obvious and solved (even though science might not say “the meaning of life is X” but instead show that we evolved, what mind is, and how the universe likely came into being—in which case you can answer the question yourself without any need for a philosophy department).
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
If I want to know what’s happening in a brain, I have to understand the physical/biological/computational nature of the brain. If I can’t do that, then I can’t really explain qualia or such. You might say we can’t understand qualia through its physical/biological/computational nature. Maybe, but it seems very unlikely, and if we can’t understand the brain through science, then we’ll have discovered something very surprising and can then move in another direction with good reason.
I’ve substituted problems that philosophy is actually working on (metaethics and consciousness) for one that analytic philosophy isn’t (meaning of life).
Unless it is. Maybe the MoL breaks down into many of the other topics studied by philosophers. Maybe philosophy is in the process of reducing it.
Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple.
How could computer science or science dissolve this problem? (1) By not working on it because it’s unanswerable by the only methods by which we can be said to have answered something,
You say it is “unanswerable” timelessly. How do you know that? It’s unanswered up to present. As are a number of scientific questions.
or (2) making the problem answerable by operationalizing it or by reforming the intent of the question into another, answerable, question.
Maybe. But checking that you have correctly identified the intent, and not changed the subject, is just the sort of armchair conceptual analysis philosophers do.
Through the process of science, we gain enough knowledge to dissolve philosophical questions or make the answer obvious and solved
You say that timelessly, but at the time of writing we have done so where we have, and we haven’t where we haven’t.
(even though science might not say “the meaning of life is X” but instead show that we evolved, what mind is, and how the universe likely came into being—in which case you can answer the question yourself without any need for a philosophy department).
But unless science can relate that back to the initial question, there is no need to consider it answered.
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
If I want to know what’s happening in a brain, I have to understand the physical/biological/computational nature of the brain.
That’s necessary, sure. But if it were sufficient, would we have a Hard Problem of Consciousness?
If I can’t do that, then I can’t really explain qualia or such.
But I am not suggesting that science be shut down, and the funds transferred to philosophy.
You might say we can’t understand qualia through its physical/biological/computational nature. Maybe, but it seems very unlikely,
It seems actual to me. We don’t have such an understanding at present. I don’t know what that means for the future, and I don’t know how you are computing your confident statement of unlikelihood. One doesn’t even have to believe in some kind of non-physicalism to think that we might never. The philosopher Colin McGinn argues that we have good reason to believe both that consciousness is physical, and that we will never understand it.
and if we can’t understand the brain through science,
We can’t understand qualia through science now. How long does that have to continue before you give up? What’s the harm in allowing philosophy to continue when it is so cheap compared to science?
PS. I would be interested in hearing of a scientific theory of ethics that doesn’t just ignore the is-ought problem.
Even though the wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can’t answer that. Is the color blue the best color? We can’t answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front me? No. Is the color blue the most preferred color? I don’t know, but it can be well answered through reported preferences. I don’t know if these currently unanswerable questions will always be unanswerable, but given what I know I can only say that they will almost certainly remain unanswerable (because it’s unfeasible or because it’s a nonsensical question).
Wouldn’t science need to do conceptual analysis? Not really, though it could appear that way. Philosophy has “free will”, science has “volition.” Free will is a label for a continually argued concept. Volition is a label for an axiom that’s been nailed in stone. Science doesn’t really care about concepts; it just wants to ask questions such that it can answer them definitively.
Even though science might provide all the knowledge necessary to easily answer a question, it doesn’t actually answer it, right? My answer: so what? Science doesn’t answer a lot of trivial questions like what I exactly should eat for breakfast, even though the answer is perfectly obvious (healthy food as discovered by science if I want to remain healthy).
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand. Give it another century or so. We’ve barely explored the brain.
What if consciousness isn’t explainable by science? When we get to that point, we’ll be much better prepared to understand what direction we need to go to understand the brain. As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy, unless there is a reasonable expectation that it will solve a problem that can be more likely solved by something else.
A scientific theory of ethics? It wouldn’t have any “you ought to do X because X is good,” but would be more of the form of “science says X,Y,Z are healthy for you” and then you would think “hey, I want to be healthy, so I’m going to eat X,Y,Z.” This is actually how philosophy works now. You get a whole bunch of argumentation as evidence, and then you must enact it personally through hypothetical injunctions like “if I want to maximize well being, then I should act as a utilitarian.”
Even though the wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
Providing you ignore the enormous amount of substructure hanging off each option.
How do we know if something is answerable?
We generally perform some sort of armchair conceptual analysis.
Wouldn’t science need to do conceptual analysis? Not really,
Why not? Doesn’t it need to decide which questions it can answer?
Volition is a label for an axiom that’s been nailed in stone.
First I’ve heard of it. Who did that? Where was it published?
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand.
Or impossible, or the brain isn’t solely responsible, or something else. It would have helped to have argued for your preferred option.
Give it another century or so. We’ve barely explored the brain.
As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy, unless there is a reasonable expectation that it will solve a problem that can be more likely solved by something else.
Philosophy generally can’t solve scientific problems, and science generally can’t solve philosophical ones.
A scientific theory of ethics? It wouldn’t have any “you ought to do X because X is good,” but would be more of the form of “science says X,Y,Z are healthy for you” and then you would think “hey, I want to be healthy, so I’m going to eat X,Y,Z.”
And what about my interactions with others? Am I entitled to snatch an orange from a starving man because I need a few extra milligrams of vitamin C?
You have precisely identified the fundamental problem with philosophy.
And your better alternative is...?
DDTT. Don’t study words as if they had meanings that you could discover by examining your intuitions about how to use them. Don’t draw maps without looking out of the window.
Positively, they could always start here.
BS. For example, Eliezer’s take on logical positivism in the most recent Sequence is interesting. But logical positivism has substantial difficulties—identified by competing philosophical schools—that Eliezer has only partially resolved.
Aristotle tried to say insightful things merely by examining etymology, but the best of modern philosophy has learned better.
I only see objections to traditional strains of positivism. It doesn’t seem they even apply to what EY’s been doing. In particular, the problems in objections 1, 3C1, 3C2, and 3F2 have been avoided by being more careful about what is not said. Meanwhile, 2 and 3F1 seem incoherent to me.
I don’t see how Eliezer could dodge this objection, or why he would want to. Very colloquially, Eliezer thinks there is an arrow leading to “Snow is white” from the fact that snow is white. Labeling that arrow “causal” does nothing to explain what that arrow is. If you don’t explain what the arrow is, how do you know that (1) you’ve said something rigorous or (2) that the causal arrows are the same thing as what we want to mean by “true”?
As stated, this objection is too strong (because it assumes moral anti-realism is true). The correspondence theory can be agnostic in the dispute between moral realism and moral anti-realism. But moral realists intend to use the word “true” in exactly the same way that scientists use the word. Thus, a correspondence-theory moral realist needs to be able to identify what corresponds to any particular moral truth—otherwise, moral anti-realism is the correct moral epistemology.
Most people are moral realists, so if your theory of truth is inconsistent with moral realism, they will take that as evidence that your theory of truth is not correct.
Look, no one but a total idiot believes Mark’s epistemic theory. There is an external world, with sufficient regularity that our physical predictions will be accurate within the limits of our knowledge and computational power. The issue is whether that can be stated more rigorously—and the different specifications are where logical positivists, physical pragmitists, Kunn and other theorists disagree.
I do agree that objections 2 and 3F2 are not particularly compelling (as I understand them).
This is actually a very easy one to respond to. Truthbearers do resemble non-truthbearers. What must ultimately be truth-bearing, if anything really is, is some component of the world—a brain-state, an utterance, or what-have-you. These truth-bearing parts of the world can resemble their referents, in the sense that a relatively simple and systematic transformation on one would yield some of the properties of the other. For instance, a literal map clearly resembles its territory; eliminating most of the territory’s properties, and transforming the ones that remain in a principled way, could produce the map. But sentences also resemble the territories they describe, e.g., through temporal and spatial correlation. Even Berkeley’s argument clearly fails for this reason; an immaterial idea can systematically share properties with a non-idea, if only temporal ones.
Language use is a natural phenomenon. Hence, reference is also a natural phenomenon, and one we should try to explain as part of our project of accounting for the patterns of human behavior. Here, we’re trying to understand why humans assert “Snow is white” in the particular patterns they do, and why they assign truth-values to that sentence in the patterns they do. The simplest adequate hypothesis will note that usage of “snow” correlates with brain-states that in turn resemble (heavily transformed) snow, and that “white” correlates with brain-states resembling transformed white light, and that “Snow is white” expresses a relationship between these two phenomena such that white light is reflected off of snow. When normal English language users think white light reflects off of snow, they call the sentence “snow is white” true; and when they think the opposite, they call “snow is white” false. So, there is a physical relationship between the linguistic behavior of this community and the apparent properties of snow.
Yes, but is our goal to convince everyone that we’re correct, or to be correct? The unpopularity of moral anti-realism counts against the rhetorical persuasiveness of a correspondence theory combined with a conventional scientific world-view. But it will only count against the plausibility of this conjunction if we have reason to think that moral statements are true in the same basic way that statements about the whiteness of snow are true.
In brief, I disagree that we are trying to explain human behavior. We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
Regarding moral facts, I agree that our goal is true philosophy, not comforting philosophy. I’m a moral anti-realist independent of theory-of-truth considerations. But most people seem to feel that their moral senses are facts (yes, I’m well aware of the irony of appealing to universal intuitions in a post that urges rejection of appeals to universal intuitions).
The widespread nature of belief in values-as-truths cries out for explanation, and the only family of theories I’m aware of that even try to provide such an explanation is wildly controversial and unpopular in the scientific community.
I’m not sure ‘agent’ is a natural kind. ‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties. So I spoke in terms of concrete human behaviors in order to maintain agnosticism about how generalizable these properties are. If they do turn out to be generalizable, then great. I don’t think any part of my account precludes that possibility.
Yes. My explanation is that our mental models do treat values as though they were real properties of things. Similarly, our mental models treat chairs as discrete solid objects, treat mathematical objects as mind-independent reals, treat animals as having desires and purposes, and treat possibility and necessity as worldly facts. In all of these cases, our evidence for the metaphysical category actually occurring is much weaker than our apparent confidence in the category’s reality. So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
If so. rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
ETA
I go for the third option.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result. ‘True’ is just a word. Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble. Sometimes it’s worthwhile to actively criticize a use of ‘truth;’ sometimes it’s worthwhile to participate in the gerrymandering ourselves; and sometimes it’s worthwhile just to avoid getting involved in the kerfuffle. For instance, criticizing people for calling ‘Sherlock Holmes is a detective’ true is both less useful and less philosophically interesting than criticizing people for calling ‘there is exactly one empty set’ true.
Also, it’s important to remember that there are two different respects in which ‘truth’ might be gerrymandered. First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems. One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
People try to do that, but rationalists don’t have to regard it as legitimate, and can object. However, if a notion of truth is adopted that is pluralistic and has no constraint on its pluralism—Anything Goes—rationalists could no longer object to, e.g., Astrological Truth.
So you say. Most rationalists are engaged in some sort of wider debate.
Even if it is intellectually dishonest to do so?
I think you may have confused truth with states-of-mind-having-content-about-truth. Electrons are simple; thoughts about them aren’t.
Something’s not being a natural kind is not justification for arbitrarily changing its definition. I don’t get to redefine the taste of chocolate as a kind of pain.
No one on this thread, up till now, has mentioned an arbitrarily changing or anything goes model of truth. Perhaps you misunderstood what I meant by ‘gerrymandered.’ All I meant was that the referent of ‘truth’ in physical or biological terms may be an extremely complicated and ugly array of truth-bearing states. Conceding that doesn’t mean that we should allow ‘truth’ (or any word) to be used completely anarchically.
It might be. Then philosophers would be correct to look for a sense that all those referents have in common.
I would phrase that as that he has recast it so it is non-objectionable.
A lot of the other objections are of the nature “how do you know?” And generally he lets the answer be, “we don’t know that to a degree of certainty that—it has been correctly pointed out—would be philosophically objectionable.”
Well, that moves much closer to making objection 2 meaningful. If all that the correspondence theory of truth can do is reassure us that our colloquial usage of “truth” gestures at a unified and meaningful philosophical concept, then it isn’t much use. It is not like anyone seriously doubts that “empirically true” is a real thing.
And I say that as a post-modernist.
I still don’t understand this ‘usefulness’ objection. If the correspondence theory of truth is a justification for colloquial notions of truth, its primary utility does lie in our not worrying too much about things we don’t actually need to worry about. There are other uses such as molding the way one approaches knowledge under uncertainty. The lemmas needed to produce the final “everything’s basically OK” result provide significant value.
There are many concepts where the precise contours of the correct position make no practical difference to most people. Examples include (1) Newtonian mechanics vs. relativity and QM, (2) the meaning of infinity, and (3) persistence of identity. Many of the folk versions of those types of concepts are inadequate in dealing with edge cases (e.g. the folk theory of infinity is hopelessly broken). The concept of “truth” is probably in this no-practical-implications category. As I said, there’s no particular reason to doubt truth exists, whether the correspondence theory is correct or not.
Anyway, edge cases don’t tend to come up in ordinary life, so there’s no good reason for most people to be worried. If one isn’t worried, then the whole correspondence-theory-of-truth project is pointless to you. Without worry, reassurance is irrelevant. By contrast, if you are worried, the correspondence theory is insufficient to reassure you. Your weaker interpretation is vacuous; Eliezer’s stronger version has flaws.
None of this says that one should worry about what “truth” is, but having taken on the question, I think Eliezer has come up short in answering.
I don’t see where it’s coming up short in the first two examples you gave. What else would you want from it?
As far as the third, well, I don’t know that the meaning of truth is directly applicable to this problem.
I haven’t communicated clearly. There are two understandings of useful—practical-useful and philosophy-useful. Arguments aimed at philosophy-use are generally irrelevant to practical-use (aka “Without worry, reassurance is irrelevant”).
In particular, the correspondence theory of truth has essentially no practical-use. The interpretation you advocate here removes philosophical-use.
“Everything’s basically ok.” is a practical-use issue. Therefore, it’s off-topic in a philosophical-use discussion.
I mentioned the examples to try to explain the distinction between practical-use and philosophical-use. Believing the correspondence theory of truth won’t help with any of the examples I gave. Ockham’s Razor is not implied by the correspondence theory. Nor is Bayes’ Theorem. Correspondence theory implies physical realism, but physical realism does not imply correspondence theory.
Out of curiosity, which theory of truth does have a practical use?
I think it is important to note that what we’ve been calling theories of truth are actually aimed at being theories of meaningfulness. As lukeprog implicitly asserts, there are whole areas of philosophy where we aren’t sure there is anything substantive at all. If we could figure out the correct theory of meaningfulness, we could figure out which areas of philosophy could be discarded entirely without close examination.
For example, Carnap and other logical positivists thought Heidegger’s assertion that “Das Nichts nichtet” was meaningless nonsense. I’m not sure I agree, but figuring out questions like that is the purpose of a theory of meaning / truth.
I see, so you aren’t really concerned with practical-use applications; you’re more interested in figuring out which areas of philosophy are meaningful. That makes sense, but, on the other hand, can an area of philosophy with a well-established practical use still be meaningless?
It sure would be surprising if that happened. But meaningfulness is not the only criterion one could apply to a theory. No one thinks Newtonian physics is meaningless, even though everyone thinks Newtonian physics is wrong (i.e. less right than relativity and QM).
In other words, one result of a viable theory of truth would be a formal articulation of “wronger than wrong.”
That’s not the same as “wrong”, though. It’s just “less right”, but it’s still good enough to predict the orbit of Venus (though not Mercury), launch a satellite (though not a GPS satellite), or simply lob cannonballs at an enemy fortress, if you are so inclined.
From what I’ve seen, philosophy is more concerned with logical proofs and boolean truth values. If this is true, then perhaps that is the reason why philosophy is so riddled with deep-sounding yet ultimately useless propositions? We’d be in deep trouble if we couldn’t use Newtonian mechanics just because it’s not as accurate as QM, even though we’re dealing with macro-sized cannonballs moving slower than sound.
… except, as described below, to discard volumes worth of overthinking the matter.
As far as I can tell, we’re in the middle of a definitional dispute—and I can’t figure out how to get out.
My point remains that Eliezer’s reboot of logical positivism does no better (and no worse) than the best of other logical positivist philosophies. A theory of truth needs to be able to explain why certain propositions are meaningful. Using “correspondence” as a semantic stop sign does not achieve this goal.
Abandoning the attempt to divide the meaningful from the non-meaningful avoids many of the objections to Eliezer’s point, at the expense of failing to achieve a major purpose of the sequence.
It’s not so much a definitional dispute as I have no idea what you’re talking about.
Suggesting that there’s something out there which our ideas can accurately model isn’t a semantic stop sign at all. It suggests we use modeling language, which does, contra your statement elsewhere, suggest using Bayesian inference. It gives sufficient criteria for success and failure (test the models’ predictions). It puts sane epistemic limits on the knowable.
That seems neither impractical nor philosophically vacuous.
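To make the “test the models’ predictions” point concrete, here is a minimal sketch (a toy coin example with made-up priors and likelihoods, not anything claimed in the original comment) of how Bayesian inference scores competing models by how well they predicted what was actually observed:

```python
# Toy Bayesian model comparison: two hypothetical models of a coin.
# All numbers are illustrative assumptions, not data from this discussion.
models = {"fair": 0.5, "biased": 0.8}        # P(heads) under each model
prior = {"fair": 0.5, "biased": 0.5}         # prior credence in each model

observation = "heads"

# Bayes' theorem: posterior is proportional to prior times likelihood.
likelihood = {m: p if observation == "heads" else 1 - p
              for m, p in models.items()}
unnormalized = {m: prior[m] * likelihood[m] for m in models}
total = sum(unnormalized.values())
posterior = {m: w / total for m, w in unnormalized.items()}

print(posterior)  # the model that predicted the observation better gains credence
```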
The philosophical problem has always been the apparent arbitrariness of the rules. You can say that “meaningful” sentences are empirically verifiable ones. But why should anyone believe that? The sentence “the only meaningful sentences are the empirically verifiable ones” isn’t obviously empirically verifiable. You have over-valued clarity and under-valued plausibility.
Definitions don’t need to be empirically verifiable. How could they be?
They need to be meaningful. If your definition of meaningfulness asserts its own meaninglessness, you have a problem. If you are asserting that there is truth-by-stipulation as well as truth-by-correspondence, you have a problem.
Clarity cannot be over-valued; plausibility, however, can be under-valued.
If you believe that, I have two units of clarity to sell you, for ten billion dollars.
Before posting, you should have spent a year thinking up ways to make that comment clearer.
What about mathematics, then? Does it correspond to something “out there”? If so, what/where is it? If not, does this mean that math is not meaningful?
Math is how you connect inferences. The results of mathematics are of the form ‘if X, Y, and Z, then A’… so, find cases where X, Y, and Z, and then check A.
It doesn’t even need to be a practical problem. Every time you construct an example, that counts.
I don’t see how that addresses the problem. You have said that there is one kind of truth/meaningfulness, based on modelling reality, and then you describe mathematical truth in a form that doesn’t match that. If any domain can have its own standards of truth, then astrologers can say their merchandise is “astrologically true”. You have anything goes.
This stuff is a tricky, typically philosophical problem because the obvious answers all have problems. Saying that all truth is correspondence means that either mathematical Platonism holds—mathematical truths correspond to the status quo in Plato’s heaven—or maths isn’t meaningful/true at all. Or truth isn’t correspondence; it’s anything goes.
I don’t think those problems are irresolvable, and EY has in fact suggested (but not originated) what I think is a promising approach.
How does it not match? Take the 4 color problem. It says you’re not going to be able to construct a minimally-5-color flat map. Go ahead. Try.
That’s the kind of example I’m talking about here. The examples are artificial, but by constructing them you are connecting the math back to reality. Artificial things are real.
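Here is a minimal sketch of that “construct cases and check” move for the four-colour claim, on a hypothetical five-region map (an instance check on made-up adjacencies, not a proof of the theorem):

```python
from itertools import product

# A small, made-up planar map: regions and which pairs share a border.
regions = ["A", "B", "C", "D", "E"]
borders = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
           ("B", "E"), ("C", "D"), ("C", "E"), ("D", "E")]

def four_colorable(regions, borders):
    """Brute-force check: does some assignment of 4 colors give no two
    bordering regions the same color?"""
    for colors in product(range(4), repeat=len(regions)):
        assignment = dict(zip(regions, colors))
        if all(assignment[x] != assignment[y] for x, y in borders):
            return True
    return False

print(four_colorable(regions, borders))  # True, as the theorem predicts
```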
What? How is holding everything to the standard of ‘predict accurately or you’re wrong’ the same as ‘anything goes’?
I mean, if astrology just wants to be a closed system that never ever says anything about the outside world… I’m not interested in it, but it suddenly ceases to be false.
That doesn’t match reality, because it would still be true in universes with different laws of physics.
It isn’t. It’s a standard of truth that is too narrow to include much of maths.
That doesn’t follow. Astrologers can say their merchandise is about the world, and true, but not true in a way that has anything to do with correspondence or prediction.
If you’re in a different universe with different laws of physics, your implementation of the 4 color problem will have to be different. Your failure to correctly map between math and reality isn’t math’s problem. Math, as noted above, is of the form ‘if X and Y and Z, then A’ - and you can definitely arrange formal equivalents to X, Y, and Z by virtue of being able to express the math in the first place.
It’s about the world but it doesn’t correspond to anything in the world? Then the correspondence model of truth has just said they’re full of shit. Victoreeee!
(note: above ‘victory’ claim is in reference to astrologers, not you)
I don’t have to implement it at all to see its truth. Maths is not just applied maths.
I don’t see what you mean. (Non-applied) maths is just formal, period, AFAIAC.
And Astrologers can just say that the CToT is shit and they have a better ToT.
People who have different ‘theories’ of truth really have different definitions of the word ‘truth.’ Taboo that word away, and correspondence theorists are really criticizing astrologists for failing to describe the world accurately, not for asserting coherentist ‘falsehoods.’ Every reasonable disputant can agree that it is possible to describe the world accurately or inaccurately; correspondence theorists are just insisting that the activity of world-describing is important, and that it counts against astrologists that they fail to describe the world.
(P.S. Real astrologists are correspondence theorists. They think their doctrines are true because they are correctly describing the influence celestial bodies have on human behavior. Even idealists at least partly believe in correspondence theory; my claims about ideas in my head can still be true or false based on whether they accurately describe what I’m thinking.)
That is not at all obvious. Let “that which should be believed” be the definition of truth. Then a correspondence theorist and a coherence theorist still have plenty to disagree about, even if they both hold to the definition.
Agreed. However, it’s still the right view, as well as being the most useful one, since tabooing lets us figure out why people care about which ‘theory’ of ‘truth’ is… (is what? true?). The real debate is over whether correspondence to the world is important in various discussions, not over whether everyone means the same thing (‘correspondence’) by a certain word (‘truth’).
You can stipulate whatever you want, but “that which should be believed” simply isn’t a credible definition for that word. First, just about everyone thinks it’s possible, in certain circumstances, that one ought to believe a falsehood. Second, propositional ‘belief’ itself is the conviction that something is true; we can’t understand belief until we first understand what truth is, or in what sense ‘truth’ is being used when we talk about believing something. Truth is a more basic concept than belief.
At the very least, you can make something formally equivalent if you’re capable of talking about it.
If your branch of mathematics is so unapplied that you can’t even represent it in our universe, I suspect it’s no longer math.
Any maths can be represented the way it usually is, by writing down some essentially arbitrary symbols. That does not indicate anything about “correspondence” to reality. The problem is the “arbitrary” in arbitrary symbol.
Let’s say space is three-dimensional. You can write down a formula for 17-dimensional space, but that doesn’t mean you have a chunk of 17-dimensional space for the maths to correspond to. You just have chalk on a blackboard.
Sure. And yet, you can implement vectors in 17 dimensional spaces by writing down 17-dimensional vectors in row notation. Math predicts the outcome of operations on these entities.
Show me a 17-vector. And what is being predicted? The only way to get at the behaviour is to do the math, and the only way to do the predictions is... to do the math. I think meaningful prediction requires some non-identity between predictee and predictor.
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6) is a perfectly valid 17-vector.
The predictions of the mathematics of 17-dimensional space would, yes, depend on the outcome of other operations such as addition and multiplication—operations we can implement more directly in matter.
I have personally relied on the geometry of 11-dimensional spaces for a particular curve-fitting model to produce reliable results. If, say, the Pythagorean theorem suddenly stopped applying above 3 dimensions, it simply would not have worked.
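For what it’s worth, the “math predicts the outcome of operations on these entities” claim can be cashed out in a few lines; this is only an illustrative sketch using the 17-vector quoted above, not the curve-fitting model referred to:

```python
import numpy as np

# The 17-dimensional vector quoted earlier in the thread.
v = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6], dtype=float)

# Generalized Pythagorean theorem: length is the root of the sum of squares.
by_hand = np.sqrt(np.sum(v ** 2))

# The library norm implements the same rule, so the two must agree.
assert np.isclose(by_hand, np.linalg.norm(v))
print(by_hand)
```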
I’m seeing pixels on a 2d screen. I’m not seeing an existing 17-dimensional thing.
The mathematics of 17d space predict the mathematics of 17d space. They couldn’t fail to. Which means no real prediction is happening at all. 1.0 is not a probability.
There are things we can model as 17-dimensional spaces, and when we do, the behavior comes out the way we were hoping. This is because of formal equivalence: the behavior of the numbers in a 17-dimensional vector space precisely corresponds to the geometric behavior of a counterfactual 17-dimensional euclidean space. You talk about one, you’re also saying something about the other.
Is this point confusing to you?
But they are not 17-dimensional spaces. They have different physics. Treating them as 17-dimensional isn’t modelling them, because it isn’t representing them as they are.
To be concrete, suppose we have a robotic arm with 17 degrees of freedom of movement. Its current state can and should be represented as a 17-dimensional vector, to which you should do 17-dimensional math to figure out things like “Where is the robotic arm’s index finger pointing?” or “Can the robotic arm touch its own elbow?”
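A toy, planar version of such an arm (all joint angles and link lengths below are made-up numbers) shows why the 17-dimensional representation is the natural one: the fingertip position is simply a function of the full 17-component state vector.

```python
import numpy as np

# Hypothetical planar arm with 17 revolute joints; numbers are illustrative.
angles = np.linspace(0.05, 0.85, 17)   # one joint angle (radians) per joint
lengths = np.full(17, 0.1)             # each link 0.1 units long

# Forward kinematics: sum up each link's contribution to the tip position.
cum_angles = np.cumsum(angles)         # absolute orientation of each link
tip = np.array([np.sum(lengths * np.cos(cum_angles)),
                np.sum(lengths * np.sin(cum_angles))])
print("fingertip position:", tip)
```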
Not obvious. It would just be a redundant way of representing a 3d object in 3 space.
The point of contention is the claim that for any maths, there is something in reality for it to represent. Now, we can model a system of 10 particles as 1 particle in 30-dimensional space, but that doesn’t prove that 30d maths has something in reality to represent, since in reality there are 10 particles. It was our decision, not reality’s, to treat it as 1 particle in a higher-d space.
Past a certain degree of complexity, there are lots of decisions about representing objects that are “ours, not reality’s”. For example, even if you represent the 10 particles as 10 vectors in 3D space, you still choose an origin, a scale, and a basis for 3D space, and all of these are arbitrary.
The 30-dimensional particle makes correct predictions of the behavior of the 10 particles. That should be enough.
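(As an illustrative sketch of the equivalence being asserted, with made-up coordinates: the distance between two 10-particle configurations comes out the same whether you treat them as ten points in 3D or as a single point in 30 dimensions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up configurations of 10 particles, each particle a 3D position.
a = rng.normal(size=(10, 3))
b = rng.normal(size=(10, 3))

# Distance computed from the per-particle 3D displacements...
d_3d = np.sqrt(np.sum((a - b) ** 2))

# ...and as one point moving in 30-dimensional configuration space.
d_30d = np.linalg.norm(a.reshape(30) - b.reshape(30))

assert np.isclose(d_3d, d_30d)
```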
Treating a mathematical formula as something that cranks out predictions is treating it instrumentally, which is treating it unrealistically. But you cannot have a coherent notion of modelling or representation if there is no real territory being modelled or represented.
To argue that all maths is representational, you either have to claim we are living in Tegmark’s Level IV, or you have to stretch the meaning of “representation” to meaninglessness. Kindly and Luke Sommers seem to be heading down the second route.
A correct 30D formula will make correct predictions. Mathematical space also contains an infinity of formulations that are incorrect. Surely it is obvious that you can’t claim everything in maths correctly models or predicts something in reality.
I’d say that math either
1. Predicts something that could happen in reality (e.g., we’re not rejecting 2+2=4 apples just because I only have 3 apples in my kitchen), or
2. Is an abstraction of other mathematical ideas.
Do you claim that (2) is no longer modeling something in reality? It is arguably still predicting things about reality once you unpack all the layers of abstraction—hopefully at least it has consequences relevant to math that does model something.
Or do you think that I’ve missed a category in my description of math?
I don’t see what abstraction has to do with it. The Standard Model has about 18 parameters. Vary those, and it will mispredict. I don’t think all the infinity of incorrect variations of the SM are more abstract.
Who said vector spaces have anything to do with physics? That’s not math anymore, that’s physics.
Using math to model reality is physics. Physics doesn’t use all of math, so some math doesn’t model anything real.
As a physicist, I can say with a moderate degree of authority: no.
I have seen mathematical equations to describe population genetics. That was not physics. I have seen mathematical equations used to describe supply and demand curves. That was not physics. Etc.
If you’re using math to model something, or even could so use it, that is sufficient for it to have a correspondent for purposes of the correspondence theory of truth.
But that is not sufficient to show that all maths models.
Well, you can use math for something other than modeling, sure. Can you give a more concrete example of some math you claim doesn’t model anything?
The Standard Model with its 18 parameters set to random values.
… okay, you were confusing before, but now you’re exceptionally confusing. You’re saying that the standard model of particle physics is an example of math that doesn’t model anything?
No, I am saying a mutated, deviant form doesn’t model anything—“with its 18 parameters set to random values”.
Well, it doesn’t model our universe. And the Standard Model is awfully complicated for someone to build a condensed matter system implementing a randomized variant of it. But it’s still a quantum mechanical system, so I wouldn’t bet strongly against it.
And of course if someone decided for some reason to run a quantum simulation using this H-sm-random, then anything you mathematically proved about H-sm-random would be proved about the results of that simulation. The correspondence would be there between the symbols you put in and the symbols you got back, by way of the process used to generate those outputs. It just would be modeling something less cosmically grand than the universe itself, just stuff going on inside a computer. It wouldn’t be worth while to do… but it still corresponds to a relationship that would hold if you were dumb enough to go out of your way to bring it about.
The thing about the correspondence theory of truth is that once something has been reached as corresponding to something and thus being eligible to be true, it serves as a stepping-stone to other things. You don’t need to work your way all the way down to ‘ground floor’ in one leap. You’re allowed to take general cases, not all of which need to be instantiated. Correspondence to patterns instead of instances is a thing.
Which, as in your other examples, is a case of a model modeling a model. You can build something physical that simulates a universe where electrons have twice the mass, and you can predict the virtual behaviour of the simulation with an SM where the electron mass parameter is doubled, but the simulation will be made of electrons with standard mass.
It wouldn’t be modelling reality.
...is that it is a poor fit for mathematical truth. You are making mathematical theorems correspondence-true by giving them something artificial to correspond to. Before the creation of a simulation at time T, there is nothing for them to correspond to. This is a mismatch with the intuition that mathematical truths are timelessly true.
You can gerrymander CToT into something that works, however inelegantly, for maths, or you can abandon it in favour of something that doesn’t need gerrymandering.
It’s not gerrymandering. What you are doing is gerrymandering. Picking and choosing which parts of the territory we are and aren’t allowed to model.
The territory includes the map.
But not as a map. Maphood is in the eye of the beholder.
The eye of the beholder is part of the territory too. It is a matter of fact that it takes that part of the territory to be a map.
Maphood is still not a matter of fact about maps.
Right, but as Peterdjones said, in this case you have a meaningful system that does not correspond to anything besides, possibly, itself.
Example, please?
Physics uses a subset of maths, so the rest would be examples of valid (I am substituting that for “meaningful”, which I am not sure how to apply here) maths that doesn’t correspond to anything external, absent Platonism.
But you can BUILD something that corresponds to that thing.
Which thing, and why does that matter?
The word “True” is overloaded in the ordinary vernacular. Eliezer’s answer is to set up a separate standard for empirical and mathematical propositions.
Empirical assertions use the label “true” when they correspond to reality. Mathematical assertions use the label “valid” when the theorem follows from the axioms.
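To illustrate the “valid” half with a minimal (assumption-laden) example: in a proof assistant, calling a mathematical assertion valid just means the derivation from the axioms and definitions checks out, as in this trivial Lean snippet.

```lean
-- "Valid" here means: the checker accepts a derivation from the definitions.
-- 2 + 2 reduces to 4 by computation, so reflexivity closes the goal.
example : 2 + 2 = 4 := rfl
```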
I don’t think it is, and that’s a bad answer anyway. To say that two unrelated approaches are both truth allows anything to join the truth club, since there are no longer criteria for membership.
However, there is an approach that allows pluralism, AKA “overloading”, but avoids Anything Goes.
Well, I don’t think that Eliezer would call mathematically valid propositions “true.” I don’t find that answer any more satisfying than you do. But (as your link suggests), I don’t think he can do better without abandoning the correspondence theory.
Simply put, there’s no one who disagrees with this point. And the correspondence theory cannot demonstrate it, even if there were a dispute.
Let me make an analogy to decision theory: In decision theory, the hard part is not figuring out the right answer in a particular problem. No one disputes that one-boxing in Newcomb’s problem has the best payoff. The difficulty in decision theory is rigorously describing a decision theory that comes up with the right answer on all the problems.
To make the parallel explicit, the existence of the external world is not the hard problem. The hard problem is what “true” means. For example, this comment is a sophisticated argument that “true” (or “meaningful”) are not natural kinds. Even if he’s right, that doesn’t conflict with the idea of an external world.
I’m trying and failing to figure out for what reference class this is supposed to be true.
Who thinks that there isn’t something out there which our ideas can model?
If I understood you correctly, then Berkeley-style Idealists would be an example. However, I have a strong suspicion that I’ve misunderstood you, so there’s that...
Solipsists, by some meanings of “out there”. More generally, skeptics. Various strong forms of relativism, though you might have to give them an inappropriately modernist interpretation to draw that out. My mother-in-law.
I need to know positively how to answer typical philosophical questions such as the meaning of life.
That’s a re-invention of LP, which has problems well known to philosophers.
Eliezer has written quite a bit about how to do philosophy well, and I intend to do so in the future.
If you’ll pardon the pun, I leave you with “Why I Stopped Worrying About the Definition of Life, and Why You Should as Well”.
I have read a lot of philosophy, and I don’t think EY is doing it particularly well. His occasional cross-disciplinary insights keep me going (I’m cross-disciplinary too; I started in science and work in I.T.). But he often fails to communicate clearly (I still don’t know whether he thinks numbers exist) and argues vaguely.
I don’t see your point. For one thing, I’m not on the philosophy “side” in some sense exclusive of being on the science or CS side or whatever. For another, there are always plenty of philosophers who are agin GOCFA (Good Old Fashioned Conceptual Analysis). The collective noun for philosophers is “a disagreement”. That’s another of my catchphrases.
Agree! Very frustrating. What I had in mind was, for example, his advice about dissolving the question, which is not the same advice you’d get from logical positivists or (most) contemporary naturalists.
Sorry, I should have been clearer that I wasn’t trying to make much of a point by sending you the Machery article. I just wanted to send you a bit of snark. :)
I don’t see the significance of that. You definitely get it from some notable naturalists.
I skimmed the paper. Dennett’s project is a dissolving one, though he does less to explain why we think we have qualia than Yudkowsky did with regard to why we think we have free will. But perhaps Dennett wrote something later which more explicitly sets out to explain why we think we have qualia?
Only if the question is meaningful. Of course, just saying “Don’t do that then” doesn’t tell you how to resolve whether that’s the case or not, but necessarily expecting an answer rather than a dissolution is not necessarily correct.
Defund philosophy departments to the benefit of computer science departments?
And the CS departments are going to tell us what the meaning of life is?
If you have to give up on even trying to answer the questions, you don’t actually have a better alternative.
I absolutely loathe the way you phrased that question for a variety of reasons (and I suspect analytic philosophers would as well), so I’m going to replace “meaning of life” with something more sensible like “solve metaethics” or “solve the hard problem of consciousness.” In which case, yes. I think computer science is more likely to solve metaethics and other philosophical problems because the field of philosophy isn’t founded on a program and incentive structure of continual improvement through feedback from reality. Oh, and computer science works on those kinds of problems (so do other areas of science, though).
I don’t think you have phrased “the question” differently and better; I think you have substituted two different questions. Well, maybe you think the MoL is a ragbag of different questions, not one big one. Maybe it is. Maybe it isn’t. That would be a philosophical question. I don’t see how empiricism could help. Speaking of which...
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
I’ve substituted problems that philosophy is actually working on (metaethics and consciousness) for one that analytic philosophy isn’t (the meaning of life). Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple. How could computer science or science dissolve this problem? (1) By not working on it, because it’s unanswerable by the only methods by which we can be said to have answered anything, or (2) by making the problem answerable by operationalizing it or by reforming the intent of the question into another, answerable, question.
Through the process of science, we gain enough knowledge to dissolve philosophical questions or make the answer obvious and solved (even though science might not say “the meaning of life is X” but instead show that we evolved, what mind is, and how the universe likely came into being—in which case you can answer the question yourself without any need for a philosophy department).
If I want to know what’s happening in a brain, I have to understand the physical/biological/computational nature of the brain. If I can’t do that, then I can’t really explain qualia or such. You might say we can’t understand qualia through its physical/biological/computational nature. Maybe, but it seems very unlikely, and if we can’t understand the brain through science, then we’ll have discovered something very surprising and can then move in another direction with good reason.
Unless it is. Maybe the MoL breaks down into many of the other topics studied by philosophers. Maybe philosophy is in the process of reducing it.
No, not simple
You say it is “unanswerable” timelessly. How do you know that? It’s unanswered up to the present. As are a number of scientific questions.
Maybe. But checking that you have correctly identified the intent, and not changed the subject, is just the sort of armchair conceptual analysis philosophers do.
You say that timelessly, but at the time of writing we have done so where we have, and we haven’t where we haven’t.
But unless science can relate that back to the initial question, there is no need to consider it answered.
That’s necessary, sure. But if it were sufficient, would we have a Hard Problem of Consciousness?
But I am not suggesting that science be shut down, and the funds transferred to philosophy.
It seems actual to me. We don’t have such an understanding at present. I don’t know what that means for the future, and I don’t know how you are computing your confident statement of unlikelihood. One doesn’t even have to believe in some kind of non-physicalism to think that we might never understand it. The philosopher Colin McGinn argues that we have good reason to believe both that consciousness is physical, and that we will never understand it.
We can’t understand qualia through science now. How long does that have to continue before you give up? What’s the harm in allowing philosophy to continue when it is so cheap compared to science?
PS. I would be interested in hearing of a scientific theory of ethics that doesn’t just ignore the is-ought problem.
Even though the wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can’t answer that. Is the color blue the best color? We can’t answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front me? No. Is the color blue the most preferred color? I don’t know, but it can be well answered through reported preferences. I don’t know if these currently unanswerable questions will always be unanswerable, but given what I know I can only say that they will almost certainly remain unanswerable (because it’s unfeasible or because it’s a nonsensical question).
Wouldn’t science need to do conceptual analysis? Not really, though it could appear that way. Philosophy has “free will”; science has “volition.” Free will is a label for a continually argued-over concept. Volition is a label for an axiom that’s been nailed down. Science doesn’t really care about concepts; it just wants to ask questions such that it can answer them definitively.
Even though science might provide all the knowledge necessary to easily answer a question, it doesn’t actually answer it, right? My answer: so what? Science doesn’t answer a lot of trivial questions like what I exactly should eat for breakfast, even though the answer is perfectly obvious (healthy food as discovered by science if I want to remain healthy).
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand. Give another century or so. We’ve barely explored the brain.
What if consciousness isn’t explainable by science? When we get to that point, we’ll be much better prepared to understand what direction we need to go to understand the brain. As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy, unless there is a reasonable expectation that it will solve a problem that can be more likely solved by something else.
A scientific theory of ethics? It wouldn’t have any “you ought to do X because X is good,” but would be more of the form of “science says X,Y,Z are healthy for you” and then you would think “hey, I want to be healthy, so I’m going to eat X,Y,Z.” This is actually how philosophy works now. You get a whole bunch of argumentation as evidence, and then you must enact it personally through hypothetical injunctions like “if I want to maximize well being, then I should act as a utilitarian.”
Providing you ignore the enormous amount of substructure hanging off each option.
We generally perform some sort of armchair conceptual analysis.
Why not? Doesn’t it need to decide which questions it can answer?
First I’ve heard of it. Who did that? Where was it published?
Or impossible, or the brain isn’t solely responsible, or something else. It would have helped to have argued for your preferred option.
Give another century or so. We’ve barely explored the brain.
Philosophy generally can’t solve scientific problems, and science generally can’t solve philosophical ones.
And what about my interactions with others? Am I entitled to snatch an orange from a starving man because I need a few extra milligrams of vitamin C?