You would be surprised to learn how often I talk to Less Wrongers who have been corrupted by a few philosophy classes and therefore engage in the kind of philosophical analysis which assumes that their intuitions are generally shared.
Despite this comment being downvoted, I think Eliezer is generally right that reading too much mainstream philosophy — even “naturalistic” analytic philosophy — is somewhat likely to “teach very bad habits of thought that will lead people to be unable to do real work.”
Oh I doubt I’d be surprised, but that’s more a problem of the people coming out of Philosophy 101 than the discipline itself. Frege and Bertrand Russell put most of the metaphysical extravagances to bed (in the Anglo-American tradition at least) with the turn towards formal logic and language, and the modern-day analytic tradition hasn’t ever looked back.
As it stands the field has about as much to do with mind-body dualism or idealism (or their respective toolkits) as theoretical physics does. This goes for ethics and meta-ethics, and no serious writer on those topics would entertain Cartesian dualism or Kantian deontology or any other such position in a trivial form. The idea of contingent, historical, contextually-sensitive ethics is widely recognized and is indeed a topic of lively discussion.
Oh I doubt I’d be surprised, but that’s more a problem of the people coming out of Philosophy 101 than the discipline itself.
No, seriously: the assumption that others will share one’s philosophical intuitions is rampant in contemporary philosophy. Go read all the angry papers written in response to the work of experimental philosophers, or the works of the staunch intuitionists like George Bealer and Ernest Sosa.
The field as a whole (or rather, some within it, to be more accurate) takes these issues seriously as a matter of debate, yes, but arguing over controversial claims is the entire point of philosophy so that’s no mark against it. It’s also a radically different position from the strong claim you’ve advanced here that the field itself is broken, which is nonsense to anyone familiar with modern moral philosophy and ethics/meta-ethics and is dangerously close to a strawman argument.
To say the problem is “rampant” is to admit to a limited knowledge of the field and the debates within it.
DDTT. Don’t study words as if they had meanings that you could discover by examining your intuitions about how to use them. Don’t draw maps without looking out of the window.
BS. For example, Eliezer’s take on logical positivism in the most recent Sequence is interesting. But logical positivism has substantial difficulties—identified by competing philosophical schools—that Eliezer has only partially resolved.
Aristotle tried to say insightful things merely by examining etymology, but the best of modern philosophy has learned better.
I only see objections to traditional strains of positivism. It doesn’t seem they even apply to what EY’s been doing. In particular, the problems in objections 1, 3C1, 3C2, and 3F2 have been avoided by being more careful about what is not said. Meanwhile, 2 and 3F1 seem incoherent to me.
3C1: The correspondence relation must be some sort of resemblance relation. But truthbearers do not resemble anything in the world except other truthbearers—echoing Berkeley’s “an idea can be like nothing but an idea”.
I don’t see how Eliezer could dodge this objection, or why he would want to. Very colloquially, Eliezer thinks there is an arrow leading to “Snow is white” from the fact that snow is white. Labeling that arrow “causal” does nothing to explain what that arrow is. If you don’t explain what the arrow is, how do you know that (1) you’ve said something rigorous or (2) that the causal arrows are the same thing as what we want to mean by “true”?
Objection 1: Definitions like (1) or (2) are too broad; although they apply to truths from some domains of discourse, e.g., the domain of science, they fail for others, e.g. the domain of morality: there are no moral facts.
As stated, this objection is too strong (because it assumes moral anti-realism is true). The correspondence theory can be agnostic in the dispute between moral realism and moral anti-realism. But moral realists intend to use the word “true” in exactly the same way that scientists use the word. Thus, a correspondence-theory moral realist needs to be able to identify what corresponds to any particular moral truth—otherwise, moral anti-realism is the correct moral epistemology.
Most people are moral realists, so if your theory of truth is inconsistent with moral realism, they will take that as evidence that your theory of truth is not correct.
Look, no one but a total idiot believes Mark’s epistemic theory. There is an external world, with sufficient regularity that our physical predictions will be accurate within the limits of our knowledge and computational power. The issue is whether that can be stated more rigorously—and the different specifications are where logical positivists, physical pragmatists, Kuhn, and other theorists disagree.
I do agree that objections 2 and 3F2 are not particularly compelling (as I understand them).
3C1: The correspondence relation must be some sort of resemblance relation. But truthbearers do not resemble anything in the world except other truthbearers—echoing Berkeley’s “an idea can be like nothing but an idea”.
This is actually a very easy one to respond to. Truthbearers do resemble non-truthbearers. What must ultimately be truth-bearing, if anything really is, is some component of the world—a brain-state, an utterance, or what-have-you. These truth-bearing parts of the world can resemble their referents, in the sense that a relatively simple and systematic transformation on one would yield some of the properties of the other. For instance, a literal map clearly resembles its territory; eliminating most of the territory’s properties, and transforming the ones that remain in a principled way, could produce the map. But sentences also resemble the territories they describe, e.g., through temporal and spatial correlation. Even Berkeley’s argument clearly fails for this reason; an immaterial idea can systematically share properties with a non-idea, if only temporal ones.
Eliezer thinks there is an arrow leading to “Snow is white” from the fact that snow is white.
Language use is a natural phenomenon. Hence, reference is also a natural phenomenon, and one we should try to explain as part of our project of accounting for the patterns of human behavior. Here, we’re trying to understand why humans assert “Snow is white” in the particular patterns they do, and why they assign truth-values to that sentence in the patterns they do. The simplest adequate hypothesis will note that usage of “snow” correlates with brain-states that in turn resemble (heavily transformed) snow, and that “white” correlates with brain-states resembling transformed white light, and that “Snow is white” expresses a relationship between these two phenomena such that white light is reflected off of snow. When normal English language users think white light reflects off of snow, they call the sentence “snow is white” true; and when they think the opposite, they call “snow is white” false. So, there is a physical relationship between the linguistic behavior of this community and the apparent properties of snow.
Most people are moral realists, so if your theory of truth is inconsistent with moral realism, they will take that as evidence that your theory of truth is not correct.
Yes, but is our goal to convince everyone that we’re correct, or to be correct? The unpopularity of moral anti-realism counts against the rhetorical persuasiveness of a correspondence theory combined with a conventional scientific world-view. But it will only count against the plausibility of this conjunction if we have reason to think that moral statements are true in the same basic way that statements about the whiteness of snow are true.
one we should try to explain as part of our project of accounting for the patterns of human behavior.
In brief, I disagree that we are trying to explain human behavior. We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
Regarding moral facts, I agree that our goal is true philosophy, not comforting philosophy. I’m a moral anti-realist independent of theory-of-truth considerations. But most people seem to feel that their moral senses are facts (yes, I’m well aware of the irony of appealing to universal intuitions in a post that urges rejection of appeals to universal intuitions).
The widespread nature of belief in values-as-truths cries out for explanation, and the only family of theories I’m aware of that even try to provide such an explanation is wildly controversial and unpopular in the scientific community.
We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
I’m not sure ‘agent’ is a natural kind. ‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties. So I spoke in terms of concrete human behaviors in order to maintain agnosticism about how generalizable these properties are. If they do turn out to be generalizable, then great. I don’t think any part of my account precludes that possibility.
The widespread nature of belief in values-as-truths cries out for explanation
Yes. My explanation is that our mental models do treat values as though they were real properties of things. Similarly, our mental models treat chairs as discrete solid objects, treat mathematical objects as mind-independent reals, treat animals as having desires and purposes, and treat possibility and necessity as worldly facts. In all of these cases, our evidence for the metaphysical category actually occurring is much weaker than our apparent confidence in the category’s reality. So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties.
If so, rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
ETA
So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
If so, rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result. ‘True’ is just a word. Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble. Sometimes it’s worthwhile to actively criticize a use of ‘truth;’ sometimes it’s worthwhile to participate in the gerrymandering ourselves; and sometimes it’s worthwhile just to avoid getting involved in the kerfuffle. For instance, criticizing people for calling ‘Sherlock Holmes is a detective’ true is both less useful and less philosophically interesting than criticizing people for calling ‘there is exactly one empty set’ true.
Also, it’s important to remember that there are two different respects in which ‘truth’ might be gerrymandered. First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems. One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result
People try to do that, but rationalists don’t have to regard it as legitimate, and can object. However, if a notion of truth is adopted that is pluralistic and has no constraint on its pluralism—Anything Goes—rationalists could no longer object to, e.g., Astrological Truth.
‘True’ is just a word.
Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble.
So you say. Most rationalists are engaged in some sort of wider debate.
sometimes it’s worthwhile to participate in the gerrymandering
Even if it is intellectually dishonest to do so?
First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems.
I think you may have confused truth with states-of-mind-having-content-about-truth. Electrons are simple; thoughts about them aren’t.
One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
Something’s not being a natural kind is not a justification for arbitrarily changing its definition. I don’t get to redefine the taste of chocolate as a kind of pain.
No one on this thread, up till now, has mentioned an arbitrarily changing or anything goes model of truth. Perhaps you misunderstood what I meant by ‘gerrymandered.’ All I meant was that the referent of ‘truth’ in physical or biological terms may be an extremely complicated and ugly array of truth-bearing states. Conceding that doesn’t mean that we should allow ‘truth’ (or any word) to be used completely anarchically.
I don’t see how Eliezer could dodge this objection, or why he would want to.
I would phrase that as that he has recast it so it is non-objectionable.
A lot of the other objections are of the nature “how do you know?” And generally he lets the answer be, “we don’t know that to a degree of certainty that—it has been correctly pointed out—would be philosophically objectionable.”
Well, that moves much closer to making objection 2 meaningful. If all that the correspondence theory of truth can do is reassure us that our colloquial usage of “truth” gestures at a unified and meaningful philosophical concept, then it isn’t much use. It is not like anyone seriously doubts that “empirically true” is a real thing.
I still don’t understand this ‘usefulness’ objection. If the correspondence theory of truth is a justification for colloquial notions of truth, its primary utility does lie in our not worrying too much about things we don’t actually need to worry about. There are other uses such as molding the way one approaches knowledge under uncertainty. The lemmas needed to produce the final “everything’s basically OK” result provide significant value.
There are many concepts where the precise contours of the correct position make no practical difference to most people. Examples include (1) Newtonian mechanics vs. relativity and QM, (2) the meaning of infinity, or (3) persistence of identity. Many of the folk versions of those types of concepts are inadequate for dealing with edge cases (e.g. the folk theory of infinity is hopelessly broken). The concept of “truth” is probably in this no-practical-implications category. As I said, there’s no particular reason to doubt truth exists, whether the correspondence theory is correct or not.
Anyway, edge cases don’t tend to come up in ordinary life, so there’s no good reason for most people to be worried. If one isn’t worried, then the whole correspondence-theory-of-truth project is pointless to you. Without worry, reassurance is irrelevant. By contrast, if you are worried, the correspondence theory is insufficient to reassure you. Your weaker interpretation is vacuous; Eliezer’s stronger version has flaws.
None of this says that one should worry about what “truth” is, but having taken on the question, I think Eliezer has come up short in answering.
I haven’t communicated clearly. There are two understandings of useful—practical-useful and philosophy-useful. Arguments aimed at philosophy-use are generally irrelevant to practical-use (aka “Without worry, reassurance is irrelevant”).
In particular, the correspondence theory of truth has essentially no practical-use. The interpretation you advocate here removes philosophical-use.
“Everything’s basically ok.” is a practical-use issue. Therefore, it’s off-topic in a philosophical-use discussion.
I don’t see where it’s coming up short in the first two examples you gave.
I mentioned the examples to try to explain the distinction between practical-use and philosophical-use. Believing the correspondence theory of truth won’t help with any of the examples I gave. Ockham’s Razor is not implied by the correspondence theory. Nor is Bayes’ Theorem. Correspondence theory implies physical realism, but physical realism does not imply correspondence theory.
I think it is important to note that what we’ve been calling theories of truth are actually aimed at being theories of meaningfulness. As lukeprog implicitly asserts, there are whole areas of philosophy where we aren’t sure there is anything substantive at all. If we could figure out the correct theory of meaningfulness, we could figure out which areas of philosophy could be discarded entirely without close examination.
For example, Carnap and other logical positivists thought Heidegger’s assertion that “Das Nichts nichtet” was meaningless nonsense. I’m not sure I agree, but figuring out questions like that is the purpose of a theory of meaning / truth.
I see, so you aren’t really concerned with practical-use applications; you’re more interested in figuring out which areas of philosophy are meaningful. That makes sense, but, on the other hand, can an area of philosophy with a well-established practical use still be meaningless?
It sure would be surprising if that happened. But meaningfulness is not the only criterion one could apply to a theory. No one thinks Newtonian physics is meaningless, even though everyone thinks Newtonian physics is wrong (i.e. less right than relativity and QM).
In other words, one result of a viable theory of truth would be a formal articulation of “wronger than wrong.”
No one thinks Newtonian physics is meaningless, even though everyone thinks Newtonian physics is wrong (i.e. less right than relativity and QM).
That’s not the same as “wrong”, though. It’s just “less right”, but it’s still good enough to predict the orbit of Venus (though not Mercury), launch a satellite (though not a GPS satellite), or simply lob cannonballs at an enemy fortress, if you are so inclined.
From what I’ve seen, philosophy is more concerned with logical proofs and boolean truth values. If this is true, then perhaps that is the reason why philosophy is so riddled with deep-sounding yet ultimately useless propositions? We’d be in deep trouble if we couldn’t use Newtonian mechanics just because it’s not as accurate as QM, even though we’re dealing with macro-sized cannonballs moving slower than sound.
As far as I can tell, we’re in the middle of a definitional dispute—and I can’t figure out how to get out.
My point remains that Eliezer’s reboot of logical positivism does no better (and no worse) than the best of other logical positivist philosophies. A theory of truth needs to be able to explain why certain propositions are meaningful. Using “correspondence” as a semantic stop sign does not achieve this goal.
Abandoning the attempt to divide the meaningful from the non-meaningful avoids many of the objections to Eliezer’s point, at the expense of failing to achieve a major purpose of the sequence.
It’s not so much a definitional dispute as I have no idea what you’re talking about.
Suggesting that there’s something out there which our ideas can accurately model isn’t a semantic stop sign at all. It suggests we use modeling language, which does, contra your statement elsewhere, suggest using Bayesian inference. It gives sufficient criteria for success and failure (test the models’ predictions). It puts sane epistemic limits on the knowable.
That seems neither impractical nor philosophically vacuous.
The philosophical problem has always been the apparent arbitrariness of the rules. You can say that “meaningful” sentences are the empirically verifiable ones. But why should anyone believe that? The sentence “the only meaningful sentences are the empirically verifiable ones” isn’t obviously empirically verifiable. You have over-valued clarity and under-valued plausibility.
They need to be meaningful. If your definition of meaningfulness asserts its own meaninglessness, you have a problem. If you are asserting that there is truth-by-stipulation as well as truth-by-correspondence, you have a problem.
What about mathematics, then? Does it correspond to something “out there”? If so, what/where is it? If not, does this mean that math is not meaningful?
Math is how you connect inferences. The results of mathematics are of the form ‘if X, Y, and Z, then A’… so, find cases where X, Y, and Z, and then check A.
It doesn’t even need to be a practical problem. Every time you construct an example, that counts.
I don’t see how that addresses the problem. You have said that there is one kind of truth/meaningfulness, based on modelling reality, and then you describe mathematical truth in a form that doesn’t match that.
If any domain can have its own standards of truth, then astrologers can say their merchandise is “astrologically true”. You have anything goes.
This stuff is a tricky, typically philosophical problem because the obvious answers all have problems. Saying that all truth is correspondence means that either mathematical Platonism holds—mathematical truths correspond to the status quo in Plato’s heaven—or maths isn’t meaningful/true at all. Or truth isn’t correspondence, and it’s anything goes.
I don’t think those problems are irresolvable, and EY has in fact suggested (but not originated) what I think is a promising approach.
How does it not match? Take the 4 color problem. It says you’re not going to be able to construct a minimally-5-color flat map. Go ahead. Try.
That’s the kind of example I’m talking about here. The examples are artificial, but by constructing them you are connecting the math back to reality. Artificial things are real.
If any domain can have its own standards of truth, then astrologers can say their merchandise is “astrologically true”.
What? How is holding everything to the standard of ‘predict accurately or you’re wrong’ the same as ‘anything goes’?
I mean, if astrology just wants to be a closed system that never ever says anything about the outside world… I’m not interested in it, but it suddenly ceases to be false.
How does it not match? Take the 4 color problem. It says you’re not going to be able to construct a minimally-5-color flat map. Go ahead. Try.
That doesn’t match reality because it would still be true in universes with different laws of physics.
‘predict accurately or you’re wrong’ the same as ‘anything goes’?
It isn’t. It’s a standard of truth that is too narrow to include much of maths.
I mean, if astrology just wants to be a closed system that never ever says anything about the outside world
That doesn’t follow. Astrologers can say their merchandise is about the world, and true, but not true in a way that has anything to do with correspondence or prediction.
That doesn’t match reality because it would still be true in universes with different laws of physics.
If you’re in a different universe with different laws of physics, your implementation of the 4 color problem will have to be different. Your failure to correctly map between math and reality isn’t math’s problem. Math, as noted above, is of the form ‘if X and Y and Z, then A’ - and you can definitely arrange formal equivalents to X, Y, and Z by virtue of being able to express the math in the first place.
That doesn’t follow. Astrologers can say their merchandise is about the world, and true, but not true in a way that has anything to do with correspondence or prediction.
It’s about the world but it doesn’t correspond to anything in the world? Then the correspondence model of truth has just said they’re full of shit. Victoreeee!
(note: above ‘victory’ claim is in reference to astrologers, not you)
If you’re in a different universe with different laws of physics, your implementation of the 4 color problem will have to be different.
I don’t have to implement it at all to see its truth. Maths is not just applied maths.
Math, as noted above, is of the form ‘if X and Y and Z, then A’ - and you can definitely arrange formal equivalents to X, Y, and Z by virtue of being able to express the math in the first place.
I don’t see what you mean. (Non-applied) maths is just formal, period, AFAIAC.
It’s about the world but it doesn’t correspond to anything in the world? Then the correspondence model of truth has just said they’re full of shit. Victoreeee!
And Astrologers can just say that the CToT is shit and they have a better ToT.
People who have different ‘theories’ of truth really have different definitions of the word ‘truth.’ Taboo that word away, and correspondence theorists are really criticizing astrologists for failing to describe the world accurately, not for asserting coherentist ‘falsehoods.’ Every reasonable disputant can agree that it is possible to describe the world accurately or inaccurately; correspondence theorists are just insisting that the activity of world-describing is important, and that it counts against astrologists that they fail to describe the world.
(P.S. Real astrologists are correspondence theorists. They think their doctrines are true because they are correctly describing the influence celestial bodies have on human behavior. Even idealists at least partly believe in correspondence theory; my claims about ideas in my head can still be true or false based on whether they accurately describe what I’m thinking.)
People who have different ‘theories’ of truth really have different definitions of the word ‘truth.’
That is not at all obvious. Let “that which should be believed” be the definition of truth. Then a correspondence theorist and a coherence theorist still have plenty to disagree about, even if they both hold to the definition.
Agreed. However, it’s still the right view, as well as being the most useful one, since tabooing lets us figure out why people care about which ‘theory’ of ‘truth’ is.… (is what? true?). The real debate is over whether correspondence to the world is important in various discussions, not over whether everyone means the same thing (‘correspondence’) by a certain word (‘truth’).
Let “that which should be believed” be the definition of truth.
You can stipulate whatever you want, but “that which should be believed” simply isn’t a credible definition for that word. First, just about everyone thinks it’s possible, in certain circumstances, that one ought to believe a falsehood. Second, propositional ‘belief’ itself is the conviction that something is true; we can’t understand belief until we first understand what truth is, or in what sense ‘truth’ is being used when we talk about believing something. Truth is a more basic concept than belief.
If your branch of mathematics is so unapplied that you can’t even represent it in our universe, I suspect it’s no longer math.
Any maths can be represented the way it usually is, by writing down some essentially arbitrary symbols. That does not indicate anything about “correspondence” to reality. The problem is the “arbitrary” in arbitrary symbol. Let’s say space is three dimensional. You can write down a formula for 17-dimensional space, but that doesn’t mean you have a chunk of 17-dimensional space for the maths to correspond to. You just have chalk on a blackboard.
Sure. And yet, you can implement vectors in 17 dimensional spaces by writing down 17-dimensional vectors in row notation. Math predicts the outcome of operations on these entities.
Show me a 17-vector. And what is being predicted? The only way to get at the behaviour is to do the math, and the only way to do the predictions is... to do the math. I think meaningful prediction requires some non-identity between predictee and predictor.
The predictions of the mathematics of 17-dimensional space would, yes, depend on the outcome of other operations such as addition and multiplication—operations we can implement more directly in matter.
I have personally relied on the geometry of 11-dimensional spaces for a particular curve-fitting model to produce reliable results. If, say, the Pythagorean theorem suddenly stopped applying above 3 dimensions, it simply would not have worked.
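To make the higher-dimensional Pythagoras point concrete, here is a toy sketch (my own illustration, not the actual curve-fitting model mentioned above): the n-dimensional Euclidean norm is just the 2-D Pythagorean theorem applied one coordinate at a time.

```python
import math

def norm_by_repeated_pythagoras(v):
    # Fold the 2-D Pythagorean theorem over the coordinates, one at a time.
    h = 0.0
    for x in v:
        h = math.hypot(h, x)  # 2-D Pythagoras: sqrt(h**2 + x**2)
    return h

v = [3.0] * 16 + [4.0]  # a 17-dimensional vector
direct = math.sqrt(sum(x * x for x in v))  # the usual n-D formula
assert abs(norm_by_repeated_pythagoras(v) - direct) < 1e-9
```

If the theorem somehow “stopped applying above 3 dimensions,” the two computations would disagree; they don’t.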
I’m seeing pixels on a 2-D screen. I’m not seeing an existing 17-dimensional thing.
The mathematics of 17d space predict the mathematics of 17d space. They couldn’t fail to. Which means no real prediction is happening at all. 1.0 is not a probability.
There are things we can model as 17-dimensional spaces, and when we do, the behavior comes out the way we were hoping. This is because of formal equivalence: the behavior of the numbers in a 17-dimensional vector space precisely corresponds to the geometric behavior of a counterfactual 17-dimensional euclidean space. You talk about one, you’re also saying something about the other.
There are things we can model as 17-dimensional spaces,
But they are not 17-dimensional spaces. They have different physics. Treating them as 17-dimensional isn’t modelling them, because it isn’t representing them as they are.
To be concrete, suppose we have a robotic arm with 17 degrees of freedom of movement. Its current state can and should be represented as a 17-dimensional vector, to which you should do 17-dimensional math to figure out things like “Where is the robotic arm’s index finger pointing?” or “Can the robotic arm touch its own elbow?”
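A minimal sketch of the idea, assuming a planar arm with 17 revolute joints (a hypothetical example, not any particular robot): the arm’s configuration is a 17-dimensional vector of joint angles, even though the fingertip itself lives in ordinary space.

```python
import math

def fingertip(angles, lengths):
    # Forward kinematics for a planar chain: each joint rotates everything
    # downstream of it, so headings accumulate along the arm.
    x = y = heading = 0.0
    for a, l in zip(angles, lengths):
        heading += a
        x += l * math.cos(heading)
        y += l * math.sin(heading)
    return x, y

angles = [0.0] * 17   # the 17-dimensional configuration vector
lengths = [1.0] * 17  # unit-length segments
assert fingertip(angles, lengths) == (17.0, 0.0)  # straight arm points along x
```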
Not obvious. It would just be a redundant way of representing a 3-D object in 3-space.
The point of contention is the claim that for any maths, there is something in reality for it to represent. Now, we can model a system of 10 particles as 1 particle in 30-dimensional space, but that doesn’t prove that 30-D maths has something in reality to represent, since in reality there are 10 particles. It was our decision, not reality’s, to treat it as 1 particle in a higher-dimensional space.
Past a certain degree of complexity, there are lots of decisions about representing objects that are “ours, not reality’s”. For example, even if you represent the 10 particles as 10 vectors in 3D space, you still choose an origin, a scale, and a basis for 3D space, and all of these are arbitrary.
The 30-dimensional particle makes correct predictions of the behavior of the 10 particles. That should be enough.
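Here is a toy illustration of that equivalence (my own construction, using free particles only): flattening ten 3-D particles into one 30-component state vector and stepping either description forward gives identical predictions.

```python
def step_particles(positions, velocities, dt):
    # positions/velocities: ten (x, y, z) tuples; free motion, no forces
    return [tuple(p + v * dt for p, v in zip(pos, vel))
            for pos, vel in zip(positions, velocities)]

def step_flat(q, v, dt):
    # q, v: 30-component vectors for the single "30-D particle"
    return [qi + vi * dt for qi, vi in zip(q, v)]

positions = [(float(i), 0.0, 0.0) for i in range(10)]
velocities = [(0.0, 1.0, 0.0)] * 10
flat_q = [c for p in positions for c in p]
flat_v = [c for v3 in velocities for c in v3]

a = step_particles(positions, velocities, 0.5)
b = step_flat(flat_q, flat_v, 0.5)
assert [c for p in a for c in p] == b  # identical predictions either way
```

Whether the 30-D description thereby “represents” anything over and above the ten particles is exactly the point in dispute; the code only shows the predictions agree.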
Treating a mathematical formula as something that cranks out predictions is treating it instrumentally, which is treating it unrealistically. But you can’t have a coherent notion of modelling or representation if there is no real territory being modelled or represented.
To argue that all maths is representational, you either have to claim we are living in Tegmark’s Level IV, or you have to stretch the meaning of “representation” to meaninglessness. Kindly and Luke Sommers seem to be heading down the second route.
A correct 30-D formula will make correct predictions. Mathematical space also contains an infinity of formulations that are incorrect. Surely it is obvious that you can’t claim everything in maths correctly models or predicts something in reality.
1. Predicts something that could happen in reality (e.g. we’re not rejecting math with 2+2=4 apples just because I only have 3 apples in my kitchen), or
2. Is an abstraction of other mathematical ideas.
Do you claim that (2) is no longer modeling something in reality? It is arguably still predicting things about reality once you unpack all the layers of abstraction—hopefully at least it has consequences relevant to math that does model something.
Or do you think that I’ve missed a category in my description of math?
It is arguably still predicting things about reality once you unpack all the layers of abstraction
I don’t see what abstraction has to do with it. The Standard Model has about 18 parameters. Vary those, and it will mispredict. I don’t think all the infinity of incorrect variations of the SM are more abstract.
As a physicist, I can say with a moderate degree of authority: no.
I have seen mathematical equations to describe population genetics. That was not physics. I have seen mathematical equations used to describe supply and demand curves. That was not physics. Etc.
If you’re using math to model something, or even could so use it, that is sufficient for it to have a correspondent for purposes of the correspondence theory of truth.
If you’re using math to model something, or even could so use it, that is sufficient for it to have a correspondent for purposes of the correspondence theory of truth.
But that is not sufficient to show that all maths models.
… okay, you were confusing before, but now you’re exceptionally confusing. You’re saying that the standard model of particle physics is an example of math that doesn’t model anything?
Well, it doesn’t model our universe. And the Standard Model is awfully complicated for someone to build a condensed matter system implementing a randomized variant of it. But it’s still a quantum mechanical system, so I wouldn’t bet strongly against it.
And of course if someone decided for some reason to run a quantum simulation using this H-sm-random, then anything you mathematically proved about H-sm-random would be proved about the results of that simulation. The correspondence would be there between the symbols you put in and the symbols you got back, by way of the process used to generate those outputs. It just would be modeling something less cosmically grand than the universe itself, just stuff going on inside a computer. It wouldn’t be worthwhile to do… but it still corresponds to a relationship that would hold if you were dumb enough to go out of your way to bring it about.
The thing about the correspondence theory of truth is that once something has been reached as corresponding to something and thus being eligible to be true, it serves as a stepping-stone to other things. You don’t need to work your way all the way down to ‘ground floor’ in one leap. You’re allowed to take general cases, not all of which need to be instantiated. Correspondence to patterns instead of instances is a thing.
And of course if someone decided for some reason to run a quantum simulation using this H-sm-random, then anything you mathematically proved about H-sm-random would be proved about the results of that simulation.
Which, as in your other examples, is a case of a model modeling a model. You can build something physical that simulates a universe where electrons have twice the mass, and you can predict the virtual behaviour of the simulation with an SM where the electron mass parameter is doubled, but the simulation will be made of electrons with standard mass.
The correspondence would be there between the symbols you put in and the symbols you got back, by way of the process used to generate those outputs. It just would be modeling something less cosmically grand than the universe itself, just stuff going on inside a computer.
It wouldn’t be modeling reality.
The thing about the correspondence theory of truth
...is that it is a poor fit for mathematical truth. You are making mathematical theorems correspondence-true by giving them something artificial to correspond to. Before the creation of a simulation at time T, there is nothing for them to correspond to. This is a mismatch with the intuition that mathematical truths are timelessly true.
is that once something has been reached as corresponding to something and thus being eligible to be true, it serves as a stepping-stone to other things. You don’t need to work your way all the way down to ‘ground floor’ in one leap. You’re allowed to take general cases, not all of which need to be instantiated. Correspondence to patterns instead of instances is a thing.
You can gerrymander CToT into something that works, however inelegantly, for maths, or you can abandon it in favour of something that doesn’t need gerrymandering.
Physics uses a subset of maths, so the rest would be examples of valid (I am substituting that for “meaningful”, which I am not sure how to apply here) maths that doesn’t correspond to anything external, absent Platonism.
The word “True” is overloaded in the ordinary vernacular. Eliezer’s answer is to set up a separate standard for empirical and mathematical propositions.
Empirical assertions use the label “true” when they correspond to reality. Mathematical assertions use the label “valid” when the theorem follows from the axioms.
Eliezer’s answer is to set up a separate standard for empirical and mathematical propositions.
I don’t think it is, and that’s a bad answer anyway. To say that two unrelated approaches are both truth allows anything to join the truth club, since there are no longer criteria for membership.
Well, I don’t think that Eliezer would call mathematically valid propositions “true.” I don’t find that answer any more satisfying than you do. But (as your link suggests), I don’t think he can do better without abandoning the correspondence theory.
Suggesting that there’s something out there which our ideas can accurately model . . .
Simply put, there’s no one who disagrees with this point. And the correspondence theory cannot demonstrate it, even if there were a dispute.
Let me make an analogy to decision theory: In decision theory, the hard part is not figuring out the right answer in a particular problem. No one disputes that one-boxing in Newcomb’s problem has the best payoff. The difficulty in decision theory is rigorously describing a decision theory that comes up with the right answer on all the problems.
To make the parallel explicit, the existence of the external world is not the hard problem. The hard problem is what “true” means. For example, this comment is a sophisticated argument that “true” (or “meaningful”) are not natural kinds. Even if he’s right, that doesn’t conflict with the idea of an external world.
If I understood you correctly, then Berkeley-style Idealists would be an example. However, I have a strong suspicion that I’ve misunderstood you, so there’s that...
Solipsists, by some meanings of “out there”. More generally, skeptics. Various strong forms of relativism, though you might have to give them an inappropriately modernist interpretation to draw that out. My mother-in-law.
I have read a lot of philosophy, and I don’t think EY is doing it particularly well. His occasional cross-disciplinary insights keep me going (I’m cross-disciplinary too; I started in science and work in IT).
But he often fails to communicate clearly (I still don’t know whether he thinks numbers exist) and argues vaguely.
If you’ll pardon the pun, I leave you with “Why I Stopped Worrying About the Definition of Life, and Why You Should as Well”.
I don’t see your point. For one thing, I’m not on the philosophy “side” in some sense exclusive of being on the science or CS side or whatever. For another, there are always plenty of philosophers who are against GOFCA (Good Old-Fashioned Conceptual Analysis). The collective noun for philosophers is “a disagreement”. That’s another of my catchphrases.
Eliezer often fails to communicate clearly (I still don’t know whether he thinks numbers exist) and argues vaguely.
Agree! Very frustrating. What I had in mind was, for example, his advice about dissolving the question, which is not the same advice you’d get from logical positivists or (most) contemporary naturalists.
I don’t see your point.
Sorry, I should have been clearer that I wasn’t trying to make much of a point by sending you the Machery article. I just wanted to send you a bit of snark. :)
What I had in mind was, for example, his advice about dissolving the question, which is not the same advice you’d get from logical positivists or (most) contemporary naturalists
I skimmed the paper. Dennett’s project is a dissolving one, though he does less to explain why we think we have qualia than Yudkowsky did with regard to why we think we have free will. But perhaps Dennett wrote something later which more explicitly sets out to explain why we think we have qualia?
I need to know positively how to answer typical philosophical questions such as the meaning of life.
Only if the question is meaningful. Of course, just saying “Don’t do that then” doesn’t tell you how to resolve whether that’s the case, but expecting an answer rather than a dissolution is not necessarily correct.
I absolutely loathe the way you phrased that question for a variety of reasons (and I suspect analytic philosophers would as well), so I’m going to replace “meaning of life” with something more sensible like “solve metaethics” or “solve the hard problem of consciousness.” In which case, yes. I think computer science is more likely to solve metaethics and other philosophical problems because the field of philosophy isn’t founded on a program and incentive structure of continual improvement through feedback from reality. Oh, and computer science works on those kinds of problems (so do other areas of science, though).
I don’t think you have phrased “the question” differently and better; I think you have substituted two different questions. Well, maybe you think the MoL is a ragbag of different questions, not one big one. Maybe it is. Maybe it isn’t. That would be a philosophical question. I don’t see how empiricism could help. Speaking of which...
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
I’ve substituted problems that philosophy is actually working on (metaethics and consciousness) for one that analytic philosophy isn’t (meaning of life). Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple. How could computer science or science dissolve this problem? (1) By not working on it, because it’s unanswerable by the only methods by which we can be said to have answered anything, or (2) by making the problem answerable by operationalizing it, or by reforming the intent of the question into another, answerable, question.
Through the process of science, we gain enough knowledge to dissolve philosophical questions or make the answer obvious and solved (even though science might not say “the meaning of life is X” but instead show that we evolved, what mind is, and how the universe likely came into being—in which case you can answer the question yourself without any need for a philosophy department).
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
If I want to know what’s happening in a brain, I have to understand the physical/biological/computational nature of the brain. If I can’t do that, then I can’t really explain qualia or such. You might say we can’t understand qualia through its physical/biological/computational nature. Maybe, but it seems very unlikely, and if we can’t understand the brain through science, then we’ll have discovered something very surprising and can then move in another direction with good reason.
I’ve substituted problems that philosophy is actually working on (metaethics and consciousness) for one that analytic philosophy isn’t (meaning of life).
Unless it is. Maybe the MoL breaks down into many of the other topics studied by philosophers. Maybe philosophy is in the process of reducing it.
Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple
How could computer science or science dissolve this problem? (1) By not working on it, because it’s unanswerable by the only methods by which we can be said to have answered anything,
You say it is “unanswerable” timelessly. How do you know that? It’s unanswered up to present. As are a number of scientific questions.
or (2) making the problem answerable by operationalizing it or by reforming the intent of the question into another, answerable, question.
Maybe. But checking that you have correctly identified the intent, and not changed the subject, is just the sort of
armchair conceptual analysis philosophers do.
Through the process of science, we gain enough knowledge to dissolve philosophical questions or make the answer obvious and solved
You say that timelessly, but at the time of writing we have done so where we have, and we haven’t where we haven’t.
(even though science might not say “the meaning of life is X” but instead show that we evolved, what mind is, and how the universe likely came into being—in which case you can answer the question yourself without any need for a philosophy department).
But unless science can relate that back to the initial question, there is no need to consider it answered.
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
If I want to know what’s happening in a brain, I have to understand the physical/biological/computational nature of the brain.
That’s necessary, sure. But if it were sufficient, would we have a Hard Problem of Consciousness?
If I can’t do that, then I can’t really explain qualia or such.
But I am not suggesting that science be shut down, and the funds transferred to philosophy.
You might say we can’t understand qualia through its physical/biological/computational nature. Maybe, but it seems very unlikely,
It seems actual to me. We don’t have such an understanding at present. I don’t know what that means for the future, and I don’t know how you are computing your confident statement of unlikelihood. One doesn’t even have to believe in some kind of non-physicalism to think that we might never. The philosopher Colin McGinn argues that we have good reason to believe both that consciousness is physical, and that we will never understand it.
and if we can’t understand the brain through science,
We can’t understand qualia through science now. How long does that have to continue before you give up? What’s the harm in allowing philosophy to continue when it is so cheap compared to science?
PS. I would be interested in hearing of a scientific theory of ethics that doesn’t just ignore the is-ought problem.
Even though the wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can’t answer that. Is the color blue the best color? We can’t answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front me? No. Is the color blue the most preferred color? I don’t know, but it can be well answered through reported preferences. I don’t know if these currently unanswerable questions will always be unanswerable, but given what I know I can only say that they will almost certainly remain unanswerable (because it’s unfeasible or because it’s a nonsensical question).
Wouldn’t science need to do conceptual analysis? Not really, though it could appear that way. Philosophy has “free will”, science has “volition.” Free will is a label for a continually argued concept. Volition is a label for an axiom that’s been nailed in stone. Science doesn’t really care about concepts, it just wants to ask questions such that it can answer them definitely.
Even though science might provide all the knowledge necessary to easily answer a question, it doesn’t actually answer it, right? My answer: so what? Science doesn’t answer a lot of trivial questions like what I exactly should eat for breakfast, even though the answer is perfectly obvious (healthy food as discovered by science if I want to remain healthy).
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand. Give it another century or so. We’ve barely explored the brain.
What if consciousness isn’t explainable by science? When we get to that point, we’ll be much better prepared to understand what direction we need to go to understand the brain. As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy, unless there is a reasonable expectation that it will solve a problem that can be more likely solved by something else.
A scientific theory of ethics? It wouldn’t have any “you ought to do X because X is good,” but would be more of the form of “science says X,Y,Z are healthy for you” and then you would think “hey, I want to be healthy, so I’m going to eat X,Y,Z.” This is actually how philosophy works now. You get a whole bunch of argumentation as evidence, and then you must enact it personally through hypothetical injunctions like “if I want to maximize well being, then I should act as a utilitarian.”
Even though the wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
Providing you ignore the enormous amount of substructure hanging off each option.
How do we know if something is answerable?
We generally perform some sort of armchair conceptual analysis.
Wouldn’t science need to do conceptual analysis? Not really,
Why not? Doesn’t it need to decide which questions it can answer?
Volition is a label for an axiom that’s been nailed in stone.
First I’ve heard of it. Who did that? Where was it published?
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand.
Or impossible, or the brain isn’t solely responsible, or something else. It would have helped to have argued for your preferred option.
Give it another century or so. We’ve barely explored the brain.
As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy, unless there is a reasonable expectation that it will solve a problem that can be more likely solved by something else.
Philosophy generally can’t solve scientific problems, and science generally can’t solve philosophical ones.
A scientific theory of ethics? It wouldn’t have any “you ought to do X because X is good,” but would be more of the form of “science says X,Y,Z are healthy for you” and then you would think “hey, I want to be healthy, so I’m going to eat X,Y,Z.”
And what about my interactions with others? Am I entitled to snatch an orange from a starving man because I need a few extra milligrams of vitamin C?
Maybe you can somehow show that the problem isn’t rampant.
Sure. Should I go about showing there are no unicorns and leprechauns while I’m at it?
PS: when a restricted set of statements is used as the exemplar of a very wide and very deep field whose entire point is to discuss ideas and their implications, the proper response to criticism is not “oh yeah, well prove it’s not true”
You’re both arguing over your impressions of philosophy. I’m more inclined to agree with Lukeprog’s impression unless you have some way of showing that your impression is more accurate. Like, for example, show me three papers in meta-ethics from the last year that you think highlight what is representative of that area of philosophy.
From my reading of philosophy, the most well known philosophers (who I’d assume are representative of the top 10% of the field) do keep intuitions and conceptual analysis in their toolbox. But when they bring it out of the toolbox, they dress it up so that it’s not prima facie stupid (and then you get a fractal mess of philosophers publishing how the intuition is wrong where their intuition isn’t, or how they shouldn’t be using intuitions, or how intuitions are useful, and so on with no resolution). If I were to take a step back and look at what philosophy accomplishes, I think I’d have to say “confusion.”
You can say this is just the way things are in philosophy, but then why should we fund philosophy?
You can say this is just the way things are in philosophy, but then why should we fund philosophy?
Because some of us realize that there are types of inquiry which are valuable and useful despite the confusion they offer to hyper-systemizing brains who can’t accept any view of reality outside a broken conception of radically reductive materialism.
How is philosophy going to get us the correct conception of reality? How will we know it when it happens? (I think science will progress us to the point where philosophy can answer the question, but by then anyone could)
Bertrand Russell put most of the metaphysical extravagances to bed (in the Anglo-American tradition at least) with the turn towards formal logic and language
Amusing in light of Russell’s rather exotic metaphysical views.
You can understand the difference between being a rough progenitor of a historical tradition in thought, on the one hand, and the views held by an individual, correct?
Honestly I’d expected a little better than the strategy of circling the wagons and defending the group on the site of Pure Rationality, where we correct biased thinking. Turns out LW is like every other internet forum, and the focus on “rationality” makes no difference to the degree of bias underpinning the arguments?
Show me three of your favorite papers from the last year in ethics or meta-ethics that highlight the kind of philosophy you think is representative of the field. (And if you’ve been following Lukeprog’s posts for any length of time, you’d see that he’s probably read more philosophy than most philosophers. His gestalt impression of the field is probably accurate.)
I think Eliezer is generally right that reading too much mainstream philosophy — even “naturalistic” analytic philosophy — is somewhat likely to “teach very bad habits of thought that will lead people to be unable to do real work.”
Also could you expand on this as I didn’t catch it before the edit?
It’s not obvious what the “bad habits” might be, and what they are bad relative to. This reads as a claim that would be very hard to defend at face value, and without clarification it reads like a throwaway attack not to be taken seriously.
It’s not obvious what the “bad habits” might be, and what they are bad relative to.
Examples of bad habits often picked up from reading too much philosophy: arguing endlessly about definitions, or using one’s own intuitions as strong evidence about how the external world works. These are bad habits relative to, you know, not arguing endlessly about definitions, and using science to figure out how the world works.
Is the problem the arguing, or the arguing endlessly? In science, there is little need to argue about definitions because Someone Somewhere has settled the issue, often by stipulation. In philosophy, there is no Someone Somewhere who conveniently does this for you. Philosophy deals with non-empirical questions (or it would be science), which means it deals with concepts, and since we access concepts with words, it deals with definitions. So the criticism that philosophers shouldn’t argue definitions is tantamount to criticising philosophy for being philosophy. Unless the problem was the “endlessly”.
using one’s own intuitions as strong evidence about how the external world works.
Who does that? (ETA: at least for the past one hundred years) None of your examples work that way. Questions like “what is knowledge” and “what is the right thing to do” are not about the EW.
The problem is “arguing” as compared to “investigating”.
If there’s a disagreement about how human minds implement certain ideas, then it’s more productive to do experimental psychology than to discuss it abstractly, for the usual scientific reasons: nailing it down to a prediction makes sure that the idea in question is actually coherent, and also there are a lot of potential pitfalls when humans try to use their own brains to examine their own brains.
Though on the other hand, coming up with good experiments for this stuff is really tricky. As Suryc mentions above, you can’t just ask people what they mean by “intentional” or whatever; you’ll get garbage results. Just like how if you ask somebody with no linguistics knowledge to explain English grammar to you, you’ll get nonsense back, even if that person is quite capable at actually writing in English.
Also: Who says that concepts are non-empirical? Doesn’t it come down to something like a scientific investigation into the operations of the human brain?
arguing endlessly about definitions, or using one’s own intuitions as strong evidence about how the external world works.
So this comes down to what you said previously about not liking people who came out of Philosophy 101, e.g., it’s an argument against a philosophical tradition that does not actually exist.
These are bad habits relative to, you know, not arguing endlessly about definitions, and using science to figure out how the world works.
You mention naturalism as a “bad habit” for using science to understand the world?
Do you actually understand what naturalism is and what relationship it has with science?
You mention naturalism as a “bad habit” for using science to understand the world?
No, he doesn’t (which is why I downvoted this comment, BTW). Luke says that even naturalistic philosophers exhibit these bad habits. He does not say that naturalism is a bad habit, or that it’s a bad habit because it uses science to understand the world.
Luke says that even naturalistic philosophers exhibit these bad habits. He does not say that naturalism is a bad habit, or that it’s a bad habit because it uses science to understand the world.
Not quite:
reading too much mainstream philosophy … is somewhat likely to “teach very bad habits of thought that will lead people to be unable to do real work.”
“Teach” implies that engaging one’s self with “too much” mainstream philosophy will cause bad habits to arise (and make one unable to do ‘real work’, whatever that might be).
Unexamined presuppositions make a wonderful basis for discourse.
I don’t think that’s what lukeprog meant. That said, thinking ‘naturalism’ is a unitary concept that the members of some relevant linguistic community or intellectual elite share is itself a startlingly good example of the sort of practice lukeprog’s ‘intuitions aren’t shared’ meme is warning about.
The Stanford Encyclopedia article on naturalism itself begins, amusingly enough:
“The term ‘naturalism’ has no very precise meaning in contemporary philosophy. [...‘N]aturalism’ is not a particularly informative term as applied to contemporary philosophers. The great majority of contemporary philosophers would happily accept naturalism[...]—that is, they would both reject ‘supernatural’ entities, and allow that science is a possible route (if not necessarily the only one) to important truths about the ‘human spirit’.
Even so, this entry will not aim to pin down any more informative definition of ‘naturalism’. It would be fruitless to try to adjudicate some official way of understanding the term. Different contemporary philosophers interpret ‘naturalism’ differently. This disagreement about usage is no accident. For better or worse, ‘naturalism’ is widely viewed as a positive term in philosophical circles—few active philosophers nowadays are happy to announce themselves as ‘non-naturalists’. This inevitably leads to a divergence in understanding the requirements of ‘naturalism’. Those philosophers with relatively weak naturalist commitments are inclined to understand ‘naturalism’ in a unrestrictive way, in order not to disqualify themselves as ‘naturalists’, while those who uphold stronger naturalist doctrines are happy to set the bar for ‘naturalism’ higher.”
Thinking ‘naturalism’ is a unitary concept that the members of some relevant linguistic community or intellectual elite share is itself a startlingly good example of the ‘intuitions aren’t shared’ corrective lukeprog was making.
But calling it a “bad habit” with no justification or qualification is exempt from being an equally good (better, in fact, given that I’d not at all expanded on naturalism and certainly not with a dismissive one-liner) example of the “corrective”?
PS—the Stanford Encyclopedia is as good a “proof” as posting a link from Wikipedia. There is (of course) debate in philosophy, but to claim that “naturalism” encourages “bad habits” is just plain sloppy thinking and a strawman built against equally sloppy philosophy undergrads.
If intuitions aren’t reliable, then this entire line of thought is unreliable :-)
To be frank, although I speak for myself and not lukeprog, framing the scientific method or world-view in terms of ‘naturalism,’ or in terms of a nature/‘supernature’ dichotomy, is a bad habit. I can’t say much more than that until you explain what you personally mean by ‘naturalism.’
the Stanford Encyclopedia is as good a “proof” as posting a link from Wikipedia.
I don’t follow. A Stanford Encyclopedia is much better evidence for the professional consensus of philosophers than is a Wikipedia article.
If intuitions aren’t reliable, then this entire line of thought is unreliable :-)
Are you alluding to the fact that we all rely on intuitions in our everyday reason? If so, this is an important point. The take-away message from philosophy’s excesses is not ‘Avoid all intuitions.’ It’s ‘Scrutinize intuitions to determine which ones we have reason to expect to match the contours of the territory.’ The successes of philosophy—successes like ‘science’ and ‘mathematics’ and ‘logic’—are formalized and heavily scrutinized networks of intuitions, intuitions that we have good empirical reason to think happen to be of a rare sort that correspond to the large-scale structure of reality. Most of our intuitions aren’t like that, though they may still be useful and interesting in other respects.
To be frank, although I speak for myself and not lukeprog, framing the scientific method or world-view in terms of ‘naturalism,’ or in terms of a nature/‘supernature’ dichotomy, is a bad habit. I can’t say much more than that until you explain what you personally mean by ‘naturalism.’
I’m thinking of naturalism as broadly accepted by modern analytic philosophy, in Quine’s terms and in more modern constructions which emphasize i) that the natural world is the “only” world (this is not to be confused with a dualistic opposition to anything “supernatural”; the supernatural is simply ruled out as an option) and ii) that science is a preferred means of obtaining knowledge about said world.
I realize that’s less clear than you may want, but the vagueness of the term is part of why I found it objectionable to treat it as instilling “bad habits”.
Are you alluding to the fact that we all rely on intuitions in our everyday reason?
Well, indirectly, but the specific point was that the argument presented here is an intuition about what goes on in philosophy, what constitutes the current trends and debates within the discipline, and so on, and it appears to me that it is more strawman than a rigorous reply to those activities.
Given that it’s an intuition underpinning an article about the unreliability of intuitions, well...you can appreciate the meta-humor I found there.
It’s ‘Scrutinize intuitions to determine which ones we have reason to expect to match the contours of the territory.’
Of course, and as I’ve relayed in other comments, this is no insight to philosophers—philosophers already do this. We could of course point out instances where the philosopher’s argument is predicated on validating intuitions, but even there you are guaranteed to see a more nuanced position than the uncritical acceptance of common-sense intuitions, and as such even those positions mandate more than a sweeping dismissal.
The successes of philosophy—successes like ‘science’ and ‘mathematics’ and ‘logic’—are formalized and heavily scrutinized networks of intuitions, intuitions that we have good empirical reason to think happen to be of a rare sort that correspond to the large-scale structure of reality.
And ethics/meta-ethics, moral theory, social theory, aesthetics...all of these are, at least in part, beyond the realm of the empirical, and it is a philosophical stance you have taken which puts them in the realm of the physical and empirical or else excludes their reality (if you go the eliminativist route).
These domains are arguably as successful at what they do as math and logic have been in their respective domains, and frankly they don’t operate anything like what you’ve described (re: empirically-discovered relations to the large scale of reality). This is part of why we need naturalistic philosophy, because without it you wind up with unabashed scientism like this, which sits right on the precipice of “ethical” choices which can be monstrous.
Personally I think even other forms of philosophy are not only useful, but what have been called “bad habits” by Eliezer et al. are actually central components of a lived human life. I wouldn’t be so hasty to get rid of them, and certainly not with such a sweeping set of dismissals about the primacy of science.
Define “natural world” so that it’s clearer how the above is non-tautological.
(this is not to be confused with a dualistic opposition to anything “supernatural”;
If you aren’t denying or opposing anything, then what work is “only” doing in the sense “the natural world is the only world”?
the supernatural is simply ruled out as an option)
What does it mean in this context to ‘rule out as an option’ something? How does this differ from ‘opposing’ an option?
and ii) that science is a preferred means of obtaining knowledge about said world.
Define ‘science,’ while you’re at it. Is looking out the window science? Is logical deduction science? Is logical deduction science when your premises are ‘about the world’? Same question for mathematical reasoning. I’d think most scientists in their daily lives would actually consider logical or mathematical reasoning stronger than, ‘preferred’ over, any scientific observation or theory.
I realize that’s less clear than you may want, but the vagueness of the term is part of why I found it objectionable to treat it as instilling “bad habits”.
The vagueness of the term ‘naturalism’ is the primary reason it’s a bad habit to define your methods or world-view in terms of it.
And ethics/meta-ethics, moral theory, social theory, aesthetics...all of these are, at least in part, beyond the realm of the empirical
I don’t know what you mean by ‘beyond the realm of the empirical.’ Plenty of logic and mathematics also transcends the observable. I think we’d get a lot further in this discussion if we started defining or tabooing ‘science,’ ‘philosophy,’ ‘empirical,’ ‘natural,’ etc.
This is part of why we need naturalistic philosophy, because without it you wind up with unabashed scientism like this, which sits right on the precipice of “ethical” choices which can be monstrous.
To be honest, this sentence here pretty much sums up what I think is wrong with modern philosophy. There is virtually no content to ‘naturalism’ or ‘scientism,’ beyond the fact that both are associated with science and the former has a positive connotation, while the latter has a negative connotation. Thus we see much of the modern philosophical (and pop-philosophical) discourse consumed in hand-wringing over whether something is ‘naturalistic’ (goodscience! happy face!) or whether something is ‘scientistic’ (badscience! frowny face!), and the whole framing does nothing but obscure what’s actually under debate. Any non-trivial definition of ‘naturalism’ and ‘scientism’ will allow that a reasonable scientist might be forced to forsake naturalism, or adopt scientism, in at least some circumstances; and any circular or otherwise trivial one is not worth discussing.
If you aren’t denying or opposing anything, then what work is “only” doing in the sense “the natural world is the only world”?
“Only” in the sense of “no more than”: in ontological terms, there are no other fundamental categories of being. I don’t have to explicitly deny that unicorns exist in order to rule them out of any taxonomy of equine animals.
If you’ve presupposed a worldview that allows for “supernatural” or “mystical” or Cartesian mind-substance or what have you, then of course the opposition seems obvious, but modern analytical naturalism as it stands makes no such allowance. This is why we cannot take our presuppositions for granted.
Define ‘science,’ while you’re at it.
You don’t have the space on this forum for that debate. However, for pragmatic purposes, let’s (roughly) call it the social activity of institutionalized formal empirical inquiry, inclusive of the error-correcting norms and structures meant to filter out systematic errors.
The vagueness of the term ‘naturalism’ is the primary reason it’s a bad habit to define your methods or world-view in terms of it.
Maybe if you didn’t take flippant comments and run with them you wouldn’t encounter this problem. I brought up naturalism because I found it hilarious that “even modern analytic philosophy” teaches these laughably vague “bad habits”—which you still seem surprisingly unconcerned with, given the far more serious issues there—and contemporary naturalism as practiced by many philosophers in the English-speaking world is as pro-science a set of ideas as you’ll find.
Spiraling it out into this protracted debate about whether we can accurately define naturalism—on your terms, no less—is not the point of the exercise (and I suspect it’s only happened to take the focus off the matter at hand: that there is no adequate account of these “bad habits” and we’re seeing an interference play to keep eyes off it).
There is virtually no content to ‘naturalism’ or ‘scientism,’ beyond the fact that both are associated with science and the former has a positive connotation, while the latter has a negative connotation.
Yes I’m well aware of the dislike of anything intrinsically opposed to the formal and computable around these parts, and I also find that position to be laughable (and a shining example of why you folks need to engage with philosophy rather than jumping head-first into troubling [and equally laughable] moral-ethical positions).
But, as per the thread, there is a more interesting and proximate criticism: your intuitions on such are unreliable, by your own lights, so you’ll pardon me if I am hardly persuaded by your fiat declaration that i) there is “no content” to a whole wide-ranging debate (with which you seem barely familiar, at that, given your introduction of yet another nonsensical opposition that might as well be fiction for all it reflects the actual process*) and ii) that we should—again by decree—paint as “useless” the tools and methods used to engage in the debate.
We are only fortunate that the actual intellectual world doesn’t conduct itself like a message board.
PS There is no serious debate “between” naturalism and scientism. The latter isn’t even a “position” as such, even less so than naturalism could be.
So this comes down to what you said previously about not liking people who came out of Philosophy 101, i.e., it’s an argument against a philosophical tradition that does not actually exist.
No. It’s an argument against a philosophical tradition that does exist.
In this “Philosophy by Humans” sub-sequence, it seems like the most common response I get is, “No, philosophers can’t actually be that stupid,” even though my post went to the trouble of quoting philosophers saying “Yes, this thing here is our standard practice.”
I’ll say it again: by “intuition” they might mean “shared intuition”, in which case they are doing nothing wrong so long as there are some, and so long as they rejected purported intuitions which aren’t shared.
In this “Philosophy by Humans” sub-sequence, it seems like the most common response I get is, “No, philosophers can’t actually be that stupid,” even though my post went to the trouble of quoting philosophers saying “Yes, this thing here is our standard practice.”
So? I can quote scientists saying all manner of stupid, bizarre, unintuitive things...but my selection of course sets up the terms of the discussion. If I choose a sampling that only confirms my existing bias against scientists, then my “quotes” are going to lead to the foregone conclusion. I don’t see why “quoting” a few names is considered evidence of anything besides a pre-existing bias against philosophy.
On a second and more important point, you’ve yet to elaborate on why having a debate about ethics is problematic in the first place. Your appeal to Eliezer and his vague handwaving about “bad habits” and “real work” (which range from “too vague” to “nonsensical” depending on how charitable you want to be) is not persuasive, so I’d ask again: what is wrong with philosophy doing what it is supposed to do, i.e., examine ideas?
I realize that declaring it “wrong” by fiat seems to be the rule around here, if the comments are any indication, but from the philosophical standpoint that’s a laughable argument to make, and it’s not persuasive to anyone who doesn’t already share your presuppositions.
If I choose a sampling that only confirms my existing bias against scientists, then my “quotes” are going to lead to the foregone conclusion. I don’t see why “quoting” a few names is considered evidence of anything besides a pre-existing bias against philosophy.
So you’re worried about the problem of filtered evidence. Throughout this sequence, I’ve given lots of citations and direct quotes of philosophers doing things — and saying that they’re doing things — which don’t make sense given certain pieces of scientific evidence. Can you, then, provide citations or quotes of philosophers saying “No, we aren’t really appealing to intuitions in this way?” I’ll bet you can find a few, but I don’t think they’ll say that their own approach is the standard one.
You’re asking me to do all the work, here. I’ve provided examples and evidence, and you’ve just flatly denied my examples and evidence without providing any counterexamples or counterevidence. That’s logically rude.
you’ve yet to elaborate on why having a debate about ethics is problematic in the first place… what is wrong with philosophy doing what it is supposed to do, i.e., examine ideas?
Here, you managed to straw man me twice in a single paragraph. I never said that debates about ethics are problematic, and I never said there’s something wrong with philosophy examining ideas. I’ve only ever said that specific, particular ways of examining ideas or having philosophical debates are problematic, and I’ve explained in detail why those specific, particular methods are problematic. You’re just ignoring what I’ve actually said, and what I have not said.
I realize that declaring it “wrong” by fiat seems to be the rule around here, if the comments are any indication, but from the philosophical standpoint that’s a laughable argument to make, and it’s not persuasive to anyone who doesn’t already share your presuppositions.
Again, I’m the one who bothered to provide examples and evidence for my position. You’re the one who keeps declaring things wrong without providing any examples and evidence to support your own view. Declaring something wrong without providing reason or evidence is against the cultural norm around here, and you are the one who is violating it.
You’re asking me to do all the work, here. I’ve provided examples and evidence, and you’ve just flatly denied my examples and evidence without providing any counterexamples or counterevidence.
All I’ve asked you to do is at least pretend you have some familiarity with the field’s content, and how that content relates to its raison d’etre. As before, I don’t have to provide “counterevidence” that science doesn’t take luminiferous ether seriously as a hypothesis; anyone familiar with the field would already know this.
I never said that debates about ethics are problematic, and I never said there’s something wrong with philosophy examining ideas.
Of course you didn’t say it, because that would be stupid, but it’s implicit in the points you’ve repeatedly made, viz. “philosophers are stupid, if they only paid attention to science....” Well, they do pay attention to science; in fact there is a whole realm of philosophers who pay attention to science and make that a centerpiece of their discussion. And given philosophy’s purpose as “engagement with ideas,” it is implicit that, wonder of wonders, some philosophers will take positions that disagree with the claim you’ve put forth.
That latter statement is the issue, as you said in your article that, since some philosophers accept intuitions as valid (a claim you never bothered to unpack or examine in any detail), therefore we should consider philosophy a primitive and useless artifact of Cartesian thinking.
You’ve taken it for granted without outright saying it. Maybe if you read more philosophy you wouldn’t make these kinds of errors.
Again, I’m the one who bothered to provide examples and evidence for my position. You’re the one who keeps declaring things wrong without providing any examples and evidence to support your own view. Declaring something wrong without providing reason or evidence is against the cultural norm around here, and you are the one who is violating it.
I see, so the cultural norm is to take unfavorable samples of a field you don’t like, present them as exemplars, use them as grounds to justify a giant-sized strawman against said field, complain when people don’t accept that position without criticism, and then hide behind conveniently linked rules meant to fortify your pre-existing groupthink.
Sounds far more rational than every other web forum ever.
To expand on your point, philosophers like Thomas Kuhn and Paul Feyerabend provide a vision of what sophisticated modern philosophy can do to improve the scientist’s perspective.
Sounds far more rational than every other web forum ever.
It’s so much fun to write that. Still, please don’t. Your point is well made in the previous paragraph—this sentence only detracts from your persuasiveness.
All I’ve asked you to do is at least pretend you have some familiarity with the field’s content, and how that content relates to its raison d’etre.
I don’t understand. Certainly, I’m at least “pretending” to have “some familiarity” with the field’s content, and how that content relates to its raison d’etre, by way of citing hundreds of works in the field, quoting philosophers, hosting a podcast for which I interviewed dozens of philosophers for hours on end, etc.
it’s implicit in the points you’ve repeatedly made, viz. “philosophers are stupid, if they only paid attention to science....” Well, they do pay attention to science, in fact there is a whole realm of philosophers who pay attention to science and make that a centerpiece of their discussion
Of course many philosophers pay attention to science. When Eliezer wrote, “If there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it,” I replied (earlier in this sequence):
When I read that I thought: What? That’s Quinean naturalism! That’s Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
Again: you’re straw-manning me. I’ve said specific things about the ways in which many philosophers are ignoring scientific results, but I’m quite aware that they pay attention to other parts of science, and of course that many of them (e.g. the experimental philosophers) pay attention to the kinds of evidence that I’m accusing others of ignoring.
you said in your article that, since some philosophers accept intuitions as valid… therefore we should consider philosophy an artifact of Cartesian thinking.
Straw man number… 5? 6? I’ve lost count. Where did I say that?
You’ve taken it for granted without outright saying it.
Wait, first you claim that “you said in your article that...” and in the very next paragraph you claim that I’ve “taken it for granted without outright saying it”? I’m very confused.
I see, so the cultural norm is to take unfavorable samples of a field you don’t like, present them as exemplars, complain when people don’t accept that position without criticism, and then hide behind rules meant to fortify your pre-existing groupthink.
No. I complain when I do all the work of presenting arguments, examples, and evidence, and you simply deny it all without presenting any arguments, examples, and evidence of your own.
Certainly, I’m at least “pretending” to have “some familiarity” with the field’s content, and how that content relates to its raison d’etre, by way of citing hundreds of works in the field, quoting philosophers, hosting a podcast for which I interviewed dozens of philosophers for hours on end, etc.
You’d think if this were the case you’d be able to make a more honest assessment of the field.
I’ve said specific things about the ways in which many philosophers are ignoring scientific results, but I’m quite aware that they pay attention to other parts of science, and of course that many of them (e.g. the experimental philosophers) pay attention to the kinds of evidence that I’m accusing others of ignoring.
Alright, I’ll grant you this. You’ve still made the point that the field of philosophy has not acknowledged the unreliability of intuitions, as if this were a novel insight rather than something taken very seriously in (at least) modern-day debates, and that this is a fundamental flaw in the discipline itself.
Where did I say that?
Right here:
What would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning?
The implication being that Cartesian views of mind and reason are in any way relevant to modern philosophy. This isn’t even true for Continental philosophy and hasn’t been for a long time.
Wait, first you claim that “you said in your article that...” and in the very next paragraph you claim that I’ve “taken it for granted without outright saying it”? I’m very confused.
I agree, you are, so let’s slow down and look at my actual criticism again.
What you wrote was that philosophers accept intuitions at face value, uncritically...which isn’t true, and I responded accordingly.
What you implied, in that it follows necessarily from your explicitly-made argument, is that since some philosophers accept intuitions as valid, therefore the discipline-as-a-whole is broken. But that isn’t true; the entire point is to discuss disparate, conflicting, and even dubious ideas; this is no black mark as you’ve construed it.
No. I complain when I do all the work of presenting arguments, examples, and evidence, and you simply deny it all without presenting any arguments, examples, and evidence of your own.
A convenient way to hide behind your biases, I suppose, but I’m not sure what it accomplishes otherwise. Even the Stanford Encyclopedia’s entries on moral theory and ethics don’t back up your “unique” assessment of the field.
I don’t think this is going anywhere useful. You’re still straw-manning me and failing to provide exact counterexamples and counter-evidence. I’m moving on to more productive activities.
So? I can quote scientists saying all manner of stupid, bizarre, unintuitive things...but my selection of course sets up the terms of the discussion. If I choose a sampling that only confirms my existing bias against scientists, then my “quotes” are going to lead to the foregone conclusion. I don’t see why “quoting” a few names is considered evidence of anything besides a pre-existing bias against philosophy.
Improving upon this: why care about what the worst of a field has to say? It’s the 10% (Sturgeon’s law) that aren’t crap that we should care about. The best material scientists give us incremental improvements in our materials technology, and the worst write papers that are never read or do research that is never used. But what do the best philosophers of meta-ethics give us? More well examined ideas? How would you measure such a thing? How can those best philosophers know they’re making progress? How can they improve the tools they use? Why should we fund philosophy departments?
The best ethical philosophers give us the foundations of utility calculation, clarify when we can (and can’t) derive facts and values from each other, generate heuristics and frameworks within which to do politics and resolve disputes over goals and priorities. The best metaphysicians give us scientific reasoning, novel interpretations of quantum mechanics, warnings of scientists becoming overreliant on some component of common sense, and new empirical research programs (Einstein’s most important work consisted of metaphysical thought experiments). The best logicians and linguistic philosophers give us the propositional calculus, knowledge of valid and invalid forms, etc., etc. Even if you think the modalists and dialetheists are crazy, you can be very thankful to them for developing modal and paraconsistent logics that have valuable applications outside of traditional philosophical disputes.
And, of course, philosophy in general is useful for testing the tools of our trade. We can be more confident of and skilled in our reasoning in specific domains, like physics and electrical engineering and differential calculus, when those tools have been put to the test in foundational disputes. A bad Philosophy 101 class can lead to hyperskepticism or metaphysical dogmatism, but a good Philosophy 101 class can lead to a healthy skepticism mixed with intellectual curiosity and dynamism. Ultimately, the reason to fund ‘philosophy’ departments is that there is no such thing as ‘philosophy;’ what the departments in question are really teaching is how to think carefully about the most difficult questions. The actual questions have nothing especially in common, beyond their difficulty, their intractability before our ordinary methods.
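To make the modal-logic point above concrete: “necessarily” (□) and “possibly” (◇) have a precise possible-worlds semantics, precise enough to run as code. The sketch below is written for illustration only (the `Model` class and the tuple encoding of formulas are invented here, not anything from this thread); it evaluates modal formulas over a small Kripke model.

```python
class Model:
    """A Kripke model: a set of worlds, an accessibility relation,
    and a valuation saying which atoms are true at which world."""

    def __init__(self, worlds, access, valuation):
        self.worlds = worlds          # e.g. {"w1", "w2", "w3"}
        self.access = access          # set of (world, world) pairs
        self.valuation = valuation    # world -> set of true atomic sentences

    def successors(self, w):
        """Worlds accessible from w."""
        return {v for (u, v) in self.access if u == w}

    def holds(self, w, formula):
        """Evaluate a formula at world w.
        Formulas: an atom (string), ("not", f), ("and", f, g),
        ("box", f) for 'necessarily', ("dia", f) for 'possibly'."""
        if isinstance(formula, str):
            return formula in self.valuation[w]
        op = formula[0]
        if op == "not":
            return not self.holds(w, formula[1])
        if op == "and":
            return self.holds(w, formula[1]) and self.holds(w, formula[2])
        if op == "box":   # true in every accessible world
            return all(self.holds(v, formula[1]) for v in self.successors(w))
        if op == "dia":   # true in some accessible world
            return any(self.holds(v, formula[1]) for v in self.successors(w))
        raise ValueError(f"unknown operator: {op}")

m = Model(
    worlds={"w1", "w2", "w3"},
    access={("w1", "w2"), ("w1", "w3"), ("w2", "w2")},
    valuation={"w1": {"p"}, "w2": {"p"}, "w3": set()},
)
print(m.holds("w1", ("dia", "p")))  # p holds at w2, accessible from w1
print(m.holds("w1", ("box", "p")))  # p fails at w3, also accessible from w1
```

The same machinery, with the valuation allowing gluts (atoms both true and false), is roughly how paraconsistent semantics get implemented, which is part of why these logics found applications in databases and verification rather than staying confined to metaphysics seminars.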
I’m a bit worried that your conception of philosophy is riding on the coattails of long-past philosophy, where the distinctions between philosophy, math, and science were much more blurred than they are now. Being generous, do you have any examples from the last few decades (that I can read about)?
I’ll agree with you that having some philosophical training is better than none in that it can be useful in getting a solid footing in basic critical thinking skills, but then if that’s a philosophy department’s purpose then it doesn’t need to be funded beyond that.
Could you taboo/define ‘philosophy,’ ‘math,’ and ‘science’ for me in a way that clarifies exactly how they don’t overlap? It’d be very helpful. Is there any principled reason, for example, that theoretical physics cannot be philosophy? Or is some theoretical physics philosophy, and some not? Is there a sharp line, or a continuum between the two kinds of theoretical physics?
if that’s a philosophy department’s purpose then it doesn’t need to be funded beyond that.
If that’s a philosophy department’s purpose, and nothing else can fulfill the same purpose, then philosophy departments are vastly underfunded as it stands. (Though I agree the current funding could be better managed.)
But the real flaw is that we think of philosophy as a college thing. Philosophical training should be fully integrated into quite early-age education in logical, scientific, mathematical, moral, and other forms of reasoning.
I didn’t say they don’t overlap. I said the distinctions have become less blurred (I think because of the need for increased specialization in all intellectual endeavours as we accumulate more knowledge). I define philosophy, math, and science by their professions. That is, their university departments, their journals, their majors, their textbooks, and so on.
Hence, I think the best way to ask if “philosophy” is a worthwhile endeavour is to asked “why should we fund philosophy departments?” A better way to ask that question is “why should we fund philosophy research and professional philosophers (as opposed to teachers of basic philosophy)?”
And while I think basic philosophy can be helpful in getting a footing in critical thinking, I also think CFAR is considerably better at teaching critical thinking.
I don’t see any principled reason for why we can’t all be generalists without labels. Practical reasons, yes.
I thought you were saying that the distinctions have become less blurred? Now I’m confused.
I define philosophy, math, and science by their professions.
That’s fine for some everyday purposes. But if we want to distinguish the useful behaviors in each profession from the useless ones, and promote the best behaviors both among laypeople and among professionals, we need more fine-grained categories than just ‘everything that people who publish in journals seen as philosophy journals do.’ I think it would be useful to distinguish Professional Philosophy, Professional Science, and Professional Mathematics from the basic human practices of philosophizing, doing science, or reasoning mathematically. Something in the neighborhood of these ideas would be quite useful:
mathematics: carefully and systematically reasoning about quantity, or (more loosely) about the quantitative properties and relationships of things.
philosophy: carefully reasoning about generalizations, via ‘internal’ reflection (phenomenology, thought experiments, conceptual analysis, etc.), in a moderately (more than shamanic storytelling, less than math or logic) systematic way.
science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance.
Do you think these would be useful fast-and-ready definitions for everyday promotion of scientific, philosophical, and mathematical literacy? Would you modify any of them?
I thought you were saying that the distinctions have become less blurred?
Yup, my bad. You caught me before my edit.
Do you think these would be useful fast-and-ready definitions for everyday promotion of scientific, philosophical, and mathematical literacy? Would you modify any of them?
I think you’re reifying abstraction and doing so will introduce pitfalls when discussing them. Math, science, and philosophy are the abstracted output of their respective professions. If you take away science’s competitive incentive structure or change its mechanism of output (journal articles) then you’re modifying science. If you install a self-improving recursive feedback cycle with reality in philosophy, then I think you’ve recreated math and science within philosophy (because science is fundamentally concrete reasoning while math is abstract reasoning and philosophy carries both).
If I’m going to promote something to laypeople, it’s that a mechanism of recursive self-improvement is desirable. There’s plenty to unpack there, though. Like you need a measure of improvement that contacts reality.
I think you’re reifying abstraction and doing so will introduce pitfalls when discussing them.
I think your definitions are more abstract than mine. For me, mathematics, philosophy, and science are embodied brain behaviors — modes of reasoning. For you, if I’m understanding you right, they’re professions, institutions, social groups, population-wide behaviors. Sociology is generally considered more abstract or high-level than psychology.
(Of course, I don’t reject your definitions on that account; denying the existence of philosophizing or of professional philosophy because one or the other is ‘abstract’ would be as silly as denying the existence of abstractions like debt, difficulty, truth, or natural selection. I just think your abstraction is of somewhat more limited utility than mine, when our goal is to spread good philosophizing, science, and mathematics rather than to treat the good qualities of those disciplines as the special property of a prestigious intellectual elite belonging to a specific network of organizations.)
Feedback cycles are great, but we don’t need to build them into our definition of ‘science’ in order to praise science for happening to possess them; if we put each scientist on a separate island, their work might suffer as a result, but it’s not clear to me that they would lose all ability to do anything scientific, or that we should fail to clearly distinguish the scientifically-minded desert-islander for his unusual behaviors.
Also, it’s not clear in what sense mathematics has a self-improving recursive feedback cycle with reality. Actually, mathematics and philosophy seem to function very analogously in terms of their relationship to reality and to science.
If I’m going to promote something to laypeople, it’s that a mechanism of recursive self-improvement is desirable.
I’m not sure that’s the best approach. Telling people to find a recursively self-improving method is not likely to be as effective as giving them concrete reasoning skills (like how to perform thought experiments, or how to devise empirical hypotheses, or how to multiply quantities) and then letting intelligent society-wide behaviors emerge via the marketplace of ideas (or via top-down societal structuring, if necessary). Don’t fixate first and foremost on telling people about what our abstract models suggest makes science on a societal scale so effective; fixate first and foremost on making them good scientists in their daily lives, in every concrete action.
For you, if I’m understanding you right, they’re professions, institutions, social groups, population-wide behaviors. Sociology is generally considered more abstract or high-level than psychology.
You’re kind of understanding me. Abstractly, bee hives produce honey. Concretely, this bee hive in front of me is producing honey. Abstractly, science is the product of professions, institutions, etc. Concretely, science is the product of people on our planet doing stuff.
I’m literally trying to not talk about abstractions or concepts but science as it actually is. And of course, science as it actually is does things that we can then categorize into abstractions like feedback cycles. But when you say science is a bunch of abstractions (like I think your definitions are), then you’re missing out on what it actually is.
Feedback cycles are great, but we don’t need to build them into our definition of ‘science’ in order to praise science for happening to possess them; if we put each scientist on a separate island, their work might suffer as a result, but it’s not clear to me that they would lose all ability to do anything scientific, or that we should fail to clearly distinguish the scientifically-minded desert-islander for his unusual behaviors.
This is exactly why I want to avoid defining science with abstractions. It literally does not make sense if you think of science as it is. “Scientific” imports essentialism.
Also, it’s not clear in what sense mathematics has a self-improving recursive feedback cycle with reality.
Mathematics is self-improving while at the same time hinging on reality. This is tricky to explain so I might come back to it tomorrow when I’m more well rested (i.e., not drunk).
I’m not sure that’s the best approach. Telling people to find a recursively self-improving method is not likely to be as effective as giving them concrete reasoning skills (like how to perform thought experiments, or how to devise empirical hypotheses, or how to multiply quantities) and then letting intelligent society-wide behaviors emerge via the marketplace of ideas (or via top-down societal structuring, if necessary). Don’t fixate first and foremost on telling people about what our abstract models suggest makes science on a societal scale so effective; fixate first and foremost on making them good scientists in their daily lives, in every concrete action.
No, I think that kernel of thought (and we are speaking in the context of “fast-and-ready”) is really the most important thing to convey. Speaking abstractly, even science doesn’t take that kernel seriously enough. It doesn’t question how it should allocate its limited resources or improve its function. This is costing millions of lives, untold suffering, and perhaps our species’ continued existence. But it does employ a self-improving feedback cycle on reality which is just enough for it to uncover reality. It needs to install a self-improving feedback cycle on itself. And then we need a self-improving feedback cycle on feedback cycles. I can’t think of any abstraction more important in making progress with something.
Abstractly, bee hives produce honey. Concretely, this bee hive in front of me is producing honey. Abstractly, science is the product of professions, institutions, etc. Concretely, science is the product of people on our planet doing stuff.
It sounds like you’re conflating abstract/concrete with general/particular. But a universal generalization might just be the conjunction of a lot of particulars. I prefer to think of ‘abstract’ as ‘not spatially extended or localized.’ Societies are generally considered more abstract than mental states because mental states are intuitively treated as more localized. But ‘lots of mental states’ is not more abstract than ‘just one mental state,’ in the same way that thousands of bees (or ‘all the bees,’ in your example) can be just as concrete as a single bee.
But when you say science is a bunch of abstractions (like I think your definitions are)
We’re back at square one. I still don’t see why reasoning is more abstract than professions, institutions, etc. We agree that it all reduces to human behaviors on some level. But the ‘abstract vs. concrete’ discussion is a complete tangent. What’s relevant is whether it’s useful to have separate concepts of ‘the practice of science’ vs. ‘professional science,’ the former being something even laypeople can participate in by adopting certain methodological standards. I think both concepts are useful. You seem to think that only ‘professional science’ is a useful concept, at least in most cases. Is that a fair summary?
This is exactly why I want to avoid defining science with abstractions. It literally does not make sense if you think of science as it is. “Scientific” imports essentialism.
Counterfactuals don’t make sense if you think of things as they are? I don’t think that’s true in any nontrivial sense....
‘Scientific’ is not any more guilty of essentializing than are any of our other fuzzy, ordinary-language terms. There are salient properties associated with being a scientist; I’m suggesting that many of those clustered properties, in particular many of the ones we most care about when we promote and praise things like ‘science’ and ‘naturalism,’ can occur in isolated individuals. If you don’t like calling what I’m talking about ‘scientific,’ then coin a different word for it; but we need some word. We need to be able to denote our exemplary decision procedures, just to win the war of ideas.
‘Professional science’ is not an exemplary decision procedure, any more than ‘the buildings and faculty at MIT’ is an exemplary decision procedure. It’s just an especially effective instantiation thereof.
I can’t think of any abstraction more important in making progress with something.
Maybe we’re just not approaching the problem at the same levels. When I ask about what the optimal way is to define our concepts, I’m trying to define them in a way that allows us to consistently and usefully explain them (in any number of paraphrased forms) to 8th-graders, to congressmen, to literary theorists, such that we can promote the best techniques we associate with scientists, philosophers, and mathematicians. I’m imagining how we would design a scientific+philosophical+mathematical+etc. literacy pamphlet that would teach people how to win at life. It sounds like you’re instead trying to think of a single sentence that summarizes what winning at life is, at its most abstract. ‘Adopt a self-improving feedback cycle linking you to reality’ is just a fancy way of saying ‘Behave in a way that predictably makes you better and better at doing good stuff.’ Which is great, but not especially contentful as yet. I only care about people understanding how winning works insofar as this understanding helps them actually win.
I prefer to think of ‘abstract’ as ‘not spatially extended or localized.’
I prefer to think of it as anything existing at least partly in mind, and then we can say we have an abstraction of an abstraction, or that something is more abstract (something from category theory being a pure abstraction, while something like the category “dog” is less abstract because it connects with a pattern of atoms in reality). By their nature, abstractions are also universals, but things that actually exist, like the bee hive in front of me, aren’t particulars at the concrete level. The specific bee hive in my mind that I’m imagining is a particular, or the “bee hive” that I’m seeing and interpreting into a bee hive in front of me is also a particular, but the bee hive itself is just a “pattern” of atoms.
What’s relevant is whether it’s useful to have separate concepts of ‘the practice of science’ vs. ‘professional science,’ the former being something even laypeople can participate in by adopting certain methodological standards. I think both concepts are useful. You seem to think that only ‘professional science’ is a useful concept, at least in most cases. Is that a fair summary?
Framing those concepts in terms of usefulness isn’t helpful, I think. I’d simply say the laypeople are doing something different unless they’re contributing to our body of knowledge. In which case, science as it is requires that those laypeople interact with science as it is (journals and such).
Counterfactuals don’t make sense if you think of things as they are?
No, I mean thinking of someone as being scientific doesn’t make sense if you think of science as it is, because e.g. the sixth grader at the science fair that we all call “scientific” isn’t interacting with science as it is. We’re taking some essential properties we pattern match in science as it is, and then we abstract them, and then we apply them by pattern matching.
I’m suggesting that many of those clustered properties, in particular many of the ones we most care about when we promote and praise things like ‘science’ and ‘naturalism,’ can occur in isolated individuals.
We can imagine an immortal human being on another planet replicating everything science has done on Earth thus far. So, yes, I think it can occur in isolated individuals, but that’s only because the individual has taken on everything that science is, and not something like “carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance.”
If I’m going to apply an abstraction to what I praise in science to individuals, it’s not “being scientific” or “doing science”, it’s “working with feedback.” It’s what programmers do, it’s what engineers do, it’s what mathematicians do, it’s what scientists do, it’s what people that effectively lose weight do, and so on. It’s the kernel of thought most conducive to progress in any area.
Maybe we’re just not approaching the problem at the same levels. When I ask about what the optimal way is to define our concepts, I’m trying to define them in a way that allows us to consistently ..
I think we are approaching the problem at the same level. I think I have optimally defined the concepts, and I think “behave in a way that predictably makes you better and better at doing good stuff” is what needs to be communicated and not “science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance.” If we’re going to add more content, then we should talk about how to effectively measure self-improvement, how to get solid feedback and so on. With that knowledge, I think a bunch of kids working together could rebuild science from the ground up.
If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance… -- Feynman
I’d pass on how important “behave in a way that predictably makes you better and better at doing good stuff” is.
I prefer to think of it as anything existing at least partly in mind
That’s problematic, first, because it leaves mind itself in a strange position. And second because, if mathematical platonism (for example) were true, then there would exist abstract objects that are mind-independent.
We’re taking some essential properties we pattern match in science as it is, and then we abstract them, and then we apply them by pattern matching.
You seem to be assuming the pattern-matching of this sort is a vice. If it’s useful to mark the pattern in question, and we recognize that we’re doing so for utilitarian reasons and not because there’s a transcendent Essence of Scienceyness, then the pattern-matching is benign. It’s how humans think, and we can’t become completely inhuman if our goal is to take the rest of mankind with us into the future. Not yet, anyway.
Religions are also feedback loops. The more I believe, the more my belief gets confirmed. Remarkable! The primary problem with this ultra-attenuated notion of what we want is that all the work is being done by the black-box normative terms like ‘improvement’ and ‘better’ and ‘optimal.’ Everything we’re actually trying to concretely teach is hidden behind those words.
We also need more content than ‘working with a feedback loop from reality’; that kind of metaphorical talk might fly on LessWrong, but it’s really a summary of some implicit intuitions we already share, not instruction we could in those words convey to someone who doesn’t already see what we’re getting at. After all, everything exists in a back-and-forth with reality, and everything is for that matter part of reality. Perhaps my formulations of what we want are too concrete; but yours are certainly too abstract and underdetermined.
You would be surprised to learn how often I talk to Less Wrongers who have been corrupted by a few philosophy classes and therefore engage in the kind of philosophical analysis which assumes that their intuitions are generally shared.
Despite being downvoted in this comment, I think Eliezer is generally right that reading too much mainstream philosophy — even “naturalistic” analytic philosophy — is somewhat likely to “teach very bad habits of thought that will lead people to be unable to do real work.”
Is believing in shared intuitions a result of reading philosophy, or is it just that intuitions feel like truths?
Oh I doubt I’d be surprised, but that’s more a problem of the people coming out of Philosophy 101 than the discipline itself. Frege and Bertrand Russell put most of the metaphysical extravagances to bed (in the Anglo-American tradition at least) with the turn towards formal logic and language, and the modern-day analytic tradition hasn’t ever looked back.
As it stands the field has about as much to do with mind-body dualism or idealism (or their respective toolkits) as theoretical physics. This goes for ethics and meta-ethics, and no serious writer in that topic would entertain Cartesian dualism or Kantian deontology or any other such in a trivial form. The idea of contingent, historical, contextually-sensitive ethics is widely recognized and is indeed a topic of lively discussion.
No, seriously: the assumption that others will share one’s philosophical intuitions is rampant in contemporary philosophy. Go read all the angry papers written in response to the work of experimental philosophers, or the works of the staunch intuitionists like George Bealer and Ernest Sosa.
The field as a whole (or rather, some within it, to be more accurate) takes these issues seriously as a matter of debate, yes, but arguing over controversial claims is the entire point of philosophy so that’s no mark against it. It’s also a radically different position from the strong claim you’ve advanced here that the field itself is broken, which is nonsense to anyone familiar with modern moral philosophy and ethics/meta-ethics and is dangerously close to a strawman argument.
To say the problem is “rampant” is to admit to a limited knowledge of the field and the debates within it.
You have precisely identified the fundamental problem with philosophy.
And your better alternative is...?
DDTT. Don’t study words as if they had meanings that you could discover by examining your intuitions about how to use them. Don’t draw maps without looking out of the window.
Positively, they could always start here.
BS. For example, Eliezer’s take on logical positivism in the most recent Sequence is interesting. But logical positivism has substantial difficulties—identified by competing philosophical schools—that Eliezer has only partially resolved.
Aristotle tried to say insightful things merely by examining etymology, but the best of modern philosophy has learned better.
I only see objections to traditional strains of positivism. It doesn’t seem they even apply to what EY’s been doing. In particular, the problems in objections 1, 3C1, 3C2, and 3F2 have been avoided by being more careful about what is not said. Meanwhile, 2 and 3F1 seem incoherent to me.
I don’t see how Eliezer could dodge this objection, or why he would want to. Very colloquially, Eliezer thinks there is an arrow leading to “Snow is white” from the fact that snow is white. Labeling that arrow “causal” does nothing to explain what that arrow is. If you don’t explain what the arrow is, how do you know that (1) you’ve said something rigorous or (2) that the causal arrows are the same thing as what we want to mean by “true”?
As stated, this objection is too strong (because it assumes moral anti-realism is true). The correspondence theory can be agnostic in the dispute between moral realism and moral anti-realism. But moral realists intend to use the word “true” in exactly the same way that scientists use the word. Thus, a correspondence-theory moral realist needs to be able to identify what corresponds to any particular moral truth—otherwise, moral anti-realism is the correct moral epistemology.
Most people are moral realists, so if your theory of truth is inconsistent with moral realism, they will take that as evidence that your theory of truth is not correct.
Look, no one but a total idiot believes Mark’s epistemic theory. There is an external world, with sufficient regularity that our physical predictions will be accurate within the limits of our knowledge and computational power. The issue is whether that can be stated more rigorously—and the different specifications are where logical positivists, physical pragmatists, Kuhn, and other theorists disagree.
I do agree that objections 2 and 3F2 are not particularly compelling (as I understand them).
This is actually a very easy one to respond to. Truthbearers do resemble non-truthbearers. What must ultimately be truth-bearing, if anything really is, is some component of the world—a brain-state, an utterance, or what-have-you. These truth-bearing parts of the world can resemble their referents, in the sense that a relatively simple and systematic transformation on one would yield some of the properties of the other. For instance, a literal map clearly resembles its territory; eliminating most of the territory’s properties, and transforming the ones that remain in a principled way, could produce the map. But sentences also resemble the territories they describe, e.g., through temporal and spatial correlation. Even Berkeley’s argument clearly fails for this reason; an immaterial idea can systematically share properties with a non-idea, if only temporal ones.
Language use is a natural phenomenon. Hence, reference is also a natural phenomenon, and one we should try to explain as part of our project of accounting for the patterns of human behavior. Here, we’re trying to understand why humans assert “Snow is white” in the particular patterns they do, and why they assign truth-values to that sentence in the patterns they do. The simplest adequate hypothesis will note that usage of “snow” correlates with brain-states that in turn resemble (heavily transformed) snow, and that “white” correlates with brain-states resembling transformed white light, and that “Snow is white” expresses a relationship between these two phenomena such that white light is reflected off of snow. When normal English language users think white light reflects off of snow, they call the sentence “snow is white” true; and when they think the opposite, they call “snow is white” false. So, there is a physical relationship between the linguistic behavior of this community and the apparent properties of snow.
Yes, but is our goal to convince everyone that we’re correct, or to be correct? The unpopularity of moral anti-realism counts against the rhetorical persuasiveness of a correspondence theory combined with a conventional scientific world-view. But it will only count against the plausibility of this conjunction if we have reason to think that moral statements are true in the same basic way that statements about the whiteness of snow are true.
In brief, I disagree that we are trying to explain human behavior. We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
Regarding moral facts, I agree that our goal is true philosophy, not comforting philosophy. I’m a moral anti-realist independent of theory-of-truth considerations. But most people seem to feel that their moral senses are facts (yes, I’m well aware of the irony of appealing to universal intuitions in a post that urges rejection of appeals to universal intuitions).
The widespread nature of belief in values-as-truths cries out for explanation, and the only family of theories I’m aware of that even try to provide such an explanation is wildly controversial and unpopular in the scientific community.
I’m not sure ‘agent’ is a natural kind. ‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties. So I spoke in terms of concrete human behaviors in order to maintain agnosticism about how generalizable these properties are. If they do turn out to be generalizable, then great. I don’t think any part of my account precludes that possibility.
Yes. My explanation is that our mental models do treat values as though they were real properties of things. Similarly, our mental models treat chairs as discrete solid objects, treat mathematical objects as mind-independent reals, treat animals as having desires and purposes, and treat possibility and necessity as worldly facts. In all of these cases, our evidence for the metaphysical category actually occurring is much weaker than our apparent confidence in the category’s reality. So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
If so, rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
ETA: I go for the third option.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result. ‘True’ is just a word. Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble. Sometimes it’s worthwhile to actively criticize a use of ‘truth;’ sometimes it’s worthwhile to participate in the gerrymandering ourselves; and sometimes it’s worthwhile just to avoid getting involved in the kerfuffle. For instance, criticizing people for calling ‘Sherlock Holmes is a detective’ true is both less useful and less philosophically interesting than criticizing people for calling ‘there is exactly one empty set’ true.
Also, it’s important to remember that there are two different respects in which ‘truth’ might be gerrymandered. First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems. One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
People try to do that, but rationalists don’t have to regard it as legitimate, and can object. However, if a notion of truth is adopted that is pluralistic and has no constraint on its pluralism—Anything Goes—rationalists could no longer object to, e.g., Astrological Truth.
So you say. Most rationalists are engaged in some sort of wider debate.
Even if it is intellectually dishonest to do so?
I think you may have confused truth with states-of-mind-having-content-about-truth. Electrons are simple; thoughts about them aren’t.
Something’s not being a natural kind is not justification for arbitrarily changing its definition. I don’t get to redefine the taste of chocolate as a kind of pain.
No one on this thread, up till now, has mentioned an arbitrarily changing or anything goes model of truth. Perhaps you misunderstood what I meant by ‘gerrymandered.’ All I meant was that the referent of ‘truth’ in physical or biological terms may be an extremely complicated and ugly array of truth-bearing states. Conceding that doesn’t mean that we should allow ‘truth’ (or any word) to be used completely anarchically.
It might be. Then philosophers would be correct to look for a sense that all those referents have in common.
I would phrase that as that he has recast it so it is non-objectionable.
A lot of the other objections are of the nature “how do you know?” And generally he lets the answer be, “we don’t know that to a degree of certainty that—it has been correctly pointed out—would be philosophically objectionable.”
Well, that moves much closer to making objection 2 meaningful. If all that the correspondence theory of truth can do is reassure us that our colloquial usage of “truth” gestures at a unified and meaningful philosophical concept, then it isn’t much use. It is not like anyone seriously doubts that “empirically true” is a real thing.
And I say that as a post-modernist.
I still don’t understand this ‘usefulness’ objection. If the correspondence theory of truth is a justification for colloquial notions of truth, its primary utility does lie in our not worrying too much about things we don’t actually need to worry about. There are other uses such as molding the way one approaches knowledge under uncertainty. The lemmas needed to produce the final “everything’s basically OK” result provide significant value.
There are many concepts where the precise contours of the correct position makes no practical difference to most people. Examples include (1) Newtonian vs. Relativity and QM, (2) the meaning of infinity, or (3) persistence of identity. Many of the folk versions of those types of concepts are inadequate in dealing with edge cases (e.g. the folk theory of infinity is hopelessly broken). The concept of “truth” is probably in this no-practical-implications category. As I said, there’s no particular reason to doubt truth exists, whether the correspondence theory is correct or not.
Anyway, edge cases don’t tend to come up in ordinary life, so there’s no good reason for most people to be worried. If one isn’t worried, then the whole correspondence-theory-of-truth project is pointless to you. Without worry, reassurance is irrelevant. By contrast, if you are worried, the correspondence theory is insufficient to reassure you. Your weaker interpretation is vacuous; Eliezer’s stronger version has flaws.
None of this says that one should worry about what “truth” is, but having taken on the question, I think Eliezer has come up short in answering.
I don’t see where it’s coming up short in the first two examples you gave. What else would you want from it?
As far as the third, well, I don’t know that the meaning of truth is directly applicable to this problem.
I haven’t communicated clearly. There are two understandings of useful—practical-useful and philosophy-useful. Arguments aimed at philosophy-use are generally irrelevant to practical-use (aka “Without worry, reassurance is irrelevant”).
In particular, the correspondence theory of truth has essentially no practical-use. The interpretation you advocate here removes philosophical-use.
“Everything’s basically ok.” is a practical-use issue. Therefore, it’s off-topic in a philosophical-use discussion.
I mentioned the examples to try to explain the distinction between practical-use and philosophical-use. Believing the correspondence theory of truth won’t help with any of the examples I gave. Ockham’s Razor is not implied by the correspondence theory. Nor is Bayes’ Theorem. Correspondence theory implies physical realism, but physical realism does not imply correspondence theory.
Out of curiosity, which theory of truth does have a practical use ?
I think it is important to note that what we’ve been calling theories of truth are actually aimed at being theories of meaningfulness. As lukeprog implicitly asserts, there are whole areas of philosophy where we aren’t sure there is anything substantive at all. If we could figure out the correct theory of meaningfulness, we could figure out which areas of philosophy could be discarded entirely without close examination.
For example, Carnap and other logical positivists thought Heidegger’s assertion that “Das Nichts nichtet” was meaningless nonsense. I’m not sure I agree, but figuring out questions like that is the purpose of a theory of meaning / truth.
I see, so you aren’t really concerned with practical-use applications; you’re more interested in figuring out which areas of philosophy are meaningful. That makes sense, but, on the other hand, can an area of philosophy with a well-established practical use still be meaningless ?
It sure would be surprising if that happened. But meaningfulness is not the only criterion one could apply to a theory. No one thinks Newtonian physics is meaningless, even though everyone thinks Newtonian physics is wrong (i.e. less right than relativity and QM).
In other words, one result of a viable theory of truth would be a formal articulation of “wronger than wrong.”
That’s not the same as “wrong”, though. It’s just “less right”, but it’s still good enough to predict the orbit of Venus (though not Mercury), launch a satellite (though not a GPS satellite), or simply lob cannonballs at an enemy fortress, if you are so inclined.
From what I’ve seen, philosophy is more concerned with logical proofs and boolean truth values. If this is true, then perhaps that is the reason why philosophy is so riddled with deep-sounding yet ultimately useless propositions ? We’d be in deep trouble if we couldn’t use Newtonian mechanics just because it’s not as accurate as QM, even though we’re dealing with macro-sized cannonballs moving slower than sound.
… except, as described below, to discard volumes worth of overthinking the matter.
As far as I can tell, we’re in the middle of a definitional dispute—and I can’t figure out how to get out.
My point remains that Eliezer’s reboot of logical positivism does no better (and no worse) than the best of other logical positivist philosophies. A theory of truth needs to be able to explain why certain propositions are meaningful. Using “correspondence” as a semantic stop sign does not achieve this goal.
Abandoning the attempt to divide the meaningful from the non-meaningful avoids many of the objections to Eliezer’s point, at the expense of failing to achieve a major purpose of the sequence.
It’s not so much a definitional dispute as I have no idea what you’re talking about.
Suggesting that there’s something out there which our ideas can accurately model isn’t a semantic stop sign at all. It suggests we use modeling language, which does, contra your statement elsewhere, suggest using Bayesian inference. It gives sufficient criteria for success and failure (test the models’ predictions). It puts sane epistemic limits on the knowable.
That seems neither impractical nor philosophically vacuous.
The philosophical problem has always been the apparent arbitrariness of the rules. You can say that “meaningful” sentences are empirically verifiable ones. But why should anyone believe that? The sentence “the only meaningful sentences are the empirically verifiable ones” isn’t obviously empirically verifiable. You have over-valued clarity and under-valued plausibility.
Definitions don’t need to be empirically verifiable. How could they be?
They need to be meaningful. If your definition of meaningfulness asserts its own meaninglessness, you have a problem. If you are asserting that there is truth-by-stipulation as well as truth-by-correspondence, you have a problem.
Clarity cannot be over-valued; plausibility, however, can be under-valued.
If you believe that, I have two units of clarity to sell you, for ten billion dollars.
Before posting, you should have spent a year thinking up ways to make that comment clearer.
What about mathematics, then ? Does it correspond to something “out there” ? If so, what/where is it ? If not, does this mean that math is not meaningful ?
Math is how you connect inferences. The results of mathematics are of the form ‘if X, Y, and Z, then A’… so, find cases where X, Y, and Z, and then check A.
It doesn’t even need to be a practical problem. Every time you construct an example, that counts.
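The “if X, Y, and Z, then A” pattern can actually be checked mechanically on constructed examples. As a hypothetical illustration (the graph and the `colorable` helper are mine, not from the thread), here is a brute-force spot-check of the four-color claim on one small concrete planar map, the wheel graph W5, which genuinely needs four colors:

```python
from itertools import product

# A small planar graph (the wheel W5: a 5-cycle plus a central hub),
# given as an edge list. W5 needs exactly 4 colors, so it is a
# non-trivial instance of the "4 colors suffice" claim.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer 5-cycle
         (5, 0), (5, 1), (5, 2), (5, 3), (5, 4)]   # hub touches all

def colorable(n_vertices, edges, n_colors):
    """Brute force: does some assignment of n_colors to the vertices
    avoid giving two adjacent vertices the same color?"""
    for coloring in product(range(n_colors), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

print(colorable(6, edges, 3))  # False: 3 colors are not enough for W5
print(colorable(6, edges, 4))  # True: 4 colors suffice
```

Constructing and checking an instance like this is exactly the “find cases where X, Y, and Z, and then check A” move: the example is artificial, but the check connects the mathematical claim back to something real.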
I don’t see how that addresses the problem. You have said that there is one kind of truth/meaningfulness, based on modelling reality, and then you describe mathematical truth in a form that doesn’t match that. If any domain can have its own standards of truth, then astrologers can say their merchandise is “astrologically true”. You have anything goes.
This stuff is a tricky, typically philosophical problem because the obvious answers all have problems. Saying that all truth is correspondence means that either mathematical Platonism holds—mathematical truths correspond to the status quo in Plato’s heaven—or maths isn’t meaningful/true at all. Or truth isn’t correspondence, and it’s anything goes.
I don’t think those problems are irresolvable, and EY has in fact suggested (but not originated) what I think is a promising approach.
How does it not match? Take the 4 color problem. It says you’re not going to be able to construct a minimally-5-color flat map. Go ahead. Try.
That’s the kind of example I’m talking about here. The examples are artificial, but by constructing them you are connecting the math back to reality. Artificial things are real.
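The “construct an example” point can be made concrete with a toy brute-force check (an illustrative sketch, not part of the original discussion): K4, the complete graph on four vertices, is planar, and exhaustive search confirms that it admits a proper 4-coloring but no proper 3-coloring.

```python
from itertools import product

# K4: the complete graph on 4 vertices. It is planar and needs exactly 4 colors.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def colorable(n_vertices, edges, n_colors):
    """Brute-force search for a proper coloring: adjacent vertices must differ."""
    for assignment in product(range(n_colors), repeat=n_vertices):
        if all(assignment[u] != assignment[v] for u, v in edges):
            return True
    return False

print(colorable(4, edges, 3))  # False: three colors are not enough for K4
print(colorable(4, edges, 4))  # True: four colors suffice
```

Constructing the counterexample search is exactly the act of “connecting the math back to reality” described above: the failure to find a 3-coloring is an empirical event, even though the object searched is artificial.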
What? How is holding everything is held to the standard of ‘predict accurately or you’re wrong’ the same as ‘anything goes’?
I mean, if astrology just wants to be a closed system that never ever says anything about the outside world… I’m not interested in it, but it suddenly ceases to be false.
That doesn’t match reality because it would still be true in universes with different laws of physics.
It isn’t. It’s a standard of truth that is too narrow to include much of maths.
That doesn’t follow. Astrologers can say their merchandise is about the world, and true, but not true in a way that has anything to do with correspondence or prediction.
If you’re in a different universe with different laws of physics, your implementation of the 4 color problem will have to be different. Your failure to correctly map between math and reality isn’t math’s problem. Math, as noted above, is of the form ‘if X and Y and Z, then A’ - and you can definitely arrange formal equivalents to X, Y, and Z by virtue of being able to express the math in the first place.
It’s about the world but it doesn’t correspond to anything in the world? Then the correspondence model of truth has just said they’re full of shit. Victoreeee!
(note: above ‘victory’ claim is in reference to astrologers, not you)
I don’t have to implement it at all to see its truth. Maths is not just applied maths.
I don’t see what you mean. (Non-applied) maths is just formal, period, AFAIAC.
And Astrologers can just say that the CToT is shit and they have a better ToT.
People who have different ‘theories’ of truth really have different definitions of the word ‘truth.’ Taboo that word away, and correspondence theorists are really criticizing astrologists for failing to describe the world accurately, not for asserting coherentist ‘falsehoods.’ Every reasonable disputant can agree that it is possible to describe the world accurately or inaccurately; correspondence theorists are just insisting that the activity of world-describing is important, and that it counts against astrologists that they fail to describe the world.
(P.S. Real astrologists are correspondence theorists. They think their doctrines are true because they are correctly describing the influence celestial bodies have on human behavior. Even idealists at least partly believe in correspondence theory; my claims about ideas in my head can still be true or false based on whether they accurately describe what I’m thinking.)
That is not at all obvious. Let “that which should be believed” be the definition of truth. Then a correspondence theorist and a coherence theorist still have plenty to disagree about, even if they both hold to the definition.
Agreed. However, it’s still the right view, as well as being the most useful one, since tabooing lets us figure out why people care about which ‘theory’ of ‘truth’ is… (is what? true?). The real debate is over whether correspondence to the world is important in various discussions, not over whether everyone means the same thing (‘correspondence’) by a certain word (‘truth’).
You can stipulate whatever you want, but “that which should be believed” simply isn’t a credible definition for that word. First, just about everyone thinks it’s possible, in certain circumstances, that one ought to believe a falsehood. Second, propositional ‘belief’ itself is the conviction that something is true; we can’t understand belief until we first understand what truth is, or in what sense ‘truth’ is being used when we talk about believing something. Truth is a more basic concept than belief.
At the very least, you can make something formally equivalent if you’re capable of talking about it.
If your branch of mathematics is so unapplied that you can’t even represent it in our universe, I suspect it’s no longer math.
Any maths can be represented the way it usually is, by writing down some essentially arbitrary symbols. That does not indicate anything about “correspondence” to reality. The problem is the “arbitrary” in arbitrary symbol.
Let’s say space is three dimensional. You can write down a formula for 17-dimensional space, but that doesn’t mean you have a chunk of 17-dimensional space for the maths to correspond to. You just have chalk on a blackboard.
Sure. And yet, you can implement vectors in 17 dimensional spaces by writing down 17-dimensional vectors in row notation. Math predicts the outcome of operations on these entities.
Show me a 17-vector. And what is being predicted? The only way to get at the behaviour is to do the math, and the only way to do the predictions is... to do the math. I think meaningful prediction requires some non-identity between predictee and predictor.
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6) is a perfectly valid 17-vector.
The predictions of the mathematics of 17-dimensional space would, yes, depend on the outcome of other operations such as addition and multiplication—operations we can implement more directly in matter.
I have personally relied on the geometry of 11-dimensional spaces for a particular curve-fitting model to produce reliable results. If, say, the Pythagorean theorem suddenly stopped applying above 3 dimensions, it simply would not have worked.
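The claim that the Pythagorean theorem keeps working above 3 dimensions is directly checkable. A minimal sketch (with arbitrary illustrative vectors, not the commenter’s actual curve-fitting model): two orthogonal 11-dimensional vectors satisfy |a + b|² = |a|² + |b|², exactly as the generalized Pythagorean theorem predicts.

```python
# Two vectors in 11-dimensional space, constructed to be orthogonal:
# `a` lives in the first 5 coordinates, `b` in the last 6.
a = [1.0, 2.0, 3.0, 4.0, 5.0] + [0.0] * 6
b = [0.0] * 5 + [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm_sq(u):
    return dot(u, u)

assert dot(a, b) == 0.0  # orthogonal by construction

# Pythagoras in 11 dimensions: |a + b|^2 == |a|^2 + |b|^2
s = [x + y for x, y in zip(a, b)]
print(norm_sq(s), norm_sq(a) + norm_sq(b))  # → 83.0 83.0
```

If the theorem “suddenly stopped applying above 3 dimensions,” the two printed numbers would diverge; that they agree is the kind of checkable consequence the comment is pointing at.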
I’m seeing pixels on a 2d screen. I’m not seeing an existing 17-dimensional thing.
The mathematics of 17d space predict the mathematics of 17d space. They couldn’t fail to. Which means no real prediction is happening at all. 1.0 is not a probability.
There are things we can model as 17-dimensional spaces, and when we do, the behavior comes out the way we were hoping. This is because of formal equivalence: the behavior of the numbers in a 17-dimensional vector space precisely corresponds to the geometric behavior of a counterfactual 17-dimensional euclidean space. You talk about one, you’re also saying something about the other.
Is this point confusing to you?
But they are not 17-dimensional spaces. They have different physics. Treating them as 17-dimensional isn’t modelling them because it isn’t representing them as they are.
To be concrete, suppose we have a robotic arm with 17 degrees of freedom of movement. Its current state can and should be represented as a 17-dimensional vector, to which you should do 17-dimensional math to figure out things like “Where is the robotic arm’s index finger pointing?” or “Can the robotic arm touch its own elbow?”
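For illustration, a minimal sketch of the idea (simplified to a planar arm with hypothetical unit-length links, rather than a real 3-D mechanism): the joint configuration is a 17-vector of angles, and “where is the fingertip?” falls out of ordinary trigonometric accumulation over those 17 components.

```python
import math

# Hypothetical planar arm with 17 revolute joints and unit-length links.
# Its state is a 17-vector of joint angles; the fingertip position follows
# from summing link vectors at cumulative angles (2-D forward kinematics).
def fingertip(angles, link_length=1.0):
    x = y = theta = 0.0
    for a in angles:
        theta += a          # each joint rotates relative to the previous link
        x += link_length * math.cos(theta)
        y += link_length * math.sin(theta)
    return x, y

state = [0.0] * 17          # all joints straight: the arm points along +x
print(fingertip(state))     # → (17.0, 0.0)
```

The physical arm is a 3-D object, but questions about it are naturally answered by doing math on a 17-component state vector, which is the point at issue.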
Not obvious. It would just be a redundant way of representing a 3d object in 3 space.
The point of contention is the claim that for any maths, there is something in reality for it to represent. Now, we can model a system of 10 particles as 1 particle in 30-dimensional space, but that doesn’t prove that 30d maths has something in reality to represent, since in reality there are 10 particles. It was our decision, not reality’s, to treat it as 1 particle in a higher-d space.
Past a certain degree of complexity, there are lots of decisions about representing objects that are “ours, not reality’s”. For example, even if you represent the 10 particles as 10 vectors in 3D space, you still choose an origin, a scale, and a basis for 3D space, and all of these are arbitrary.
The 30-dimensional particle makes correct predictions of the behavior of the 10 particles. That should be enough.
Treating a mathematical formula as something that cranks out predictions is treating it instrumentally, which is treating it unrealistically. But you cannot have a coherent notion of modeling or representation if there is no real territory being modeled or represented.
To argue that all maths is representational, you either have to claim we are living in Tegmark’s level IV, or you have to stretch the meaning of “representation” to meaninglessness. Kindly and Luke Sommers seem to be heading down the second route.
A correct 30D formula will make correct predictions. Mathematical space also contains an infinity of formulations that are incorrect. Surely it is obvious that you can’t claim everything in maths correctly models or predicts something in reality.
I’d say that math either:
1. Predicts something that could happen in reality (e.g. we’re not rejecting math with 2+2=4 apples just because I only have 3 apples in my kitchen), or
2. Is an abstraction of other mathematical ideas.
Do you claim that (2) is no longer modeling something in reality? It is arguably still predicting things about reality once you unpack all the layers of abstraction—hopefully at least it has consequences relevant to math that does model something.
Or do you think that I’ve missed a category in my description of math?
I don’t see what abstraction has to do with it. The Standard Model has about 18 parameters. Vary those, and it will mispredict. I don’t think all the infinity of incorrect variations of the SM are more abstract.
Who said vector spaces have anything to do with physics? That’s not math anymore, that’s physics.
Using math to model reality is physics. Physics doesn’t use all of math, so some math doesn’t model anything real.
As a physicist, I can say with a moderate degree of authority: no.
I have seen mathematical equations to describe population genetics. That was not physics. I have seen mathematical equations used to describe supply and demand curves. That was not physics. Etc.
If you’re using math to model something, or even could so use it, that is sufficient for it to have a correspondent for purposes of the correspondence theory of truth.
But that is not sufficient to show that all maths models.
Well, you can use math for something other than modeling, sure. Can you give a more concrete example of some math you claim doesn’t model anything?
The Standard Model with its 18 parameters set to random values.
… okay, you were confusing before, but now you’re exceptionally confusing. You’re saying that the standard model of particle physics is an example of math that doesn’t model anything?
No, I am saying a mutated, deviant form doesn’t model anything -- “with its 18 parameters set to random values”.
Well, it doesn’t model our universe. And the Standard Model is awfully complicated for someone to build a condensed matter system implementing a randomized variant of it. But it’s still a quantum mechanical system, so I wouldn’t bet strongly against it.
And of course if someone decided for some reason to run a quantum simulation using this H-sm-random, then anything you mathematically proved about H-sm-random would be proved about the results of that simulation. The correspondence would be there between the symbols you put in and the symbols you got back, by way of the process used to generate those outputs. It just would be modeling something less cosmically grand than the universe itself, just stuff going on inside a computer. It wouldn’t be worth while to do… but it still corresponds to a relationship that would hold if you were dumb enough to go out of your way to bring it about.
The thing about the correspondence theory of truth is that once something has been reached as corresponding to something and thus being eligible to be true, it serves as a stepping-stone to other things. You don’t need to work your way all the way down to ‘ground floor’ in one leap. You’re allowed to take general cases, not all of which need to be instantiated. Correspondence to patterns instead of instances is a thing.
Which, as in your other examples, is a case of a model modeling a model. You can build something physical that simulates a universe where electrons have twice the mass, and you can predict the virtual behaviour of the simulation with an SM where the electron mass parameter is doubled, but the simulation will be made of electrons with standard mass.
It wouldn’t be modelling reality.
…is that it is a poor fit for mathematical truth. You are making mathematical theorems correspondence-true by giving them something artificial to correspond to. Before the creation of a simulation at time T, there is nothing for them to correspond to. This is a mismatch with the intuition that mathematical truths are timelessly true.
You can gerrymander CToT into something that works, however inelegantly, for maths, or you can abandon it in favour of something that doesn’t need gerrymandering.
It’s not gerrymandering. What you are doing is gerrymandering. Picking and choosing which parts of the territory we are and aren’t allowed to model.
The territory includes the map.
But not as a map. Maphood is in the eye of the beholder.
The eye of the beholder is part of the territory too. It is a matter of fact that it takes that part of the territory to be a map.
Maphood is still not a matter of fact about maps.
Right, but as Peterdjones said, in this case you have a meaningful system that does not correspond to anything besides, possibly, itself.
Example, please?
Physics uses a subset of maths, so the rest would be examples of valid (I am substituting that for “meaningful”, which I am not sure how to apply here) maths that doesn’t correspond to anything external, absent Platonism.
But you can BUILD something that corresponds to that thing.
Which thing, and why does that matter?
The word “True” is overloaded in the ordinary vernacular. Eliezer’s answer is to set up a separate standard for empirical and mathematical propositions.
Empirical assertions use the label “true” when they correspond to reality. Mathematical assertions use the label “valid” when the theorem follows from the axioms.
I don’t think it is, and that’s a bad answer anyway. To say that two unrelated approaches are both truth allows anything to join the truth club, since there are no longer criteria for membership.
However, there is an approach that allows pluralism, AKA “overloading”, but avoids Anything Goes.
Well, I don’t think that Eliezer would call mathematically valid propositions “true.” I don’t find that answer any more satisfying than you do. But (as your link suggests), I don’t think he can do better without abandoning the correspondence theory.
Simply put, there’s no one who disagrees with this point. And the correspondence theory cannot demonstrate it, even if there were a dispute.
Let me make an analogy to decision theory: In decision theory, the hard part is not figuring out the right answer in a particular problem. No one disputes that one-boxing in Newcomb’s problem has the best payoff. The difficulty in decision theory is rigorously describing a decision theory that comes up with the right answer on all the problems.
To make the parallel explicit, the existence of the external world is not the hard problem. The hard problem is what “true” means. For example, this comment is a sophisticated argument that “true” (or “meaningful”) are not natural kinds. Even if he’s right, that doesn’t conflict with the idea of an external world.
I’m trying and failing to figure out for what reference class this is supposed to be true.
Who thinks that there isn’t something out there which our ideas can model?
If I understood you correctly, then Berkeley-style Idealists would be an example. However, I have a strong suspicion that I’ve misunderstood you, so there’s that...
Solipsists, by some meanings of “out there”. More generally, skeptics. Various strong forms of relativism, though you might have to give them an inappropriately modernist interpretation to draw that out. My mother-in-law.
I need to know positively how to answer typical philosophical questions such as the meaning of life.
That’s a re-invention of LP, which has problems well known to philosophers.
Eliezer has written quite a bit about how to do philosophy well, and I intend to do so in the future.
If you’ll pardon the pun, I leave you with “Why I Stopped Worrying About the Definition of Life, and Why You Should as Well”.
I have read a lot of philosophy, and I don’t think EY is doing it particularly well. His occasional cross-disciplinary insights keep me going (I’m cross-disciplinary too; I started in science and work in I.T.). But he often fails to communicate clearly (I still don’t know whether he thinks numbers exist) and argues vaguely.
I don’t see your point. For one thing, I’m not on the philosophy “side” in some sense exclusive of being on the science or CS side or whatever. For another, there are always plenty of philosophers who are against GOFCA (Good Old Fashioned Conceptual Analysis). The collective noun for philosophers is “a disagreement”. That’s another of my catchphrases.
Agree! Very frustrating. What I had in mind was, for example, his advice about dissolving the question, which is not the same advice you’d get from logical positivists or (most) contemporary naturalists.
Sorry, I should have been clearer that I wasn’t trying to make much of a point by sending you the Machery article. I just wanted to send you a bit of snark. :)
I don’t see the significance of that. You definitely get it from some notable naturalists.
I skimmed the paper. Dennett’s project is a dissolving one, though he does less to explain why we think we have qualia than Yudkowsky did with regard to why we think we have free will. But perhaps Dennett wrote something later which more explicitly sets out to explain why we think we have qualia?
Only if the question is meaningful. Of course, just saying “Don’t do that then” doesn’t tell you how to resolve whether that’s the case or not, but necessarily expecting an answer rather than a dissolution is not necessarily correct.
Defund philosophy departments to the benefit of computer science departments?
And the CS departments are going to tell us what the meaning of life is?
If you have to give up on even trying to answer the questions, you don’t actually have a better alternative.
I absolutely loathe the way you phrased that question for a variety of reasons (and I suspect analytic philosophers would as well), so I’m going to replace “meaning of life” with something more sensible like “solve metaethics” or “solve the hard problem of consciousness.” In which case, yes. I think computer science is more likely to solve metaethics and other philosophical problems because the field of philosophy isn’t founded on a program and incentive structure of continual improvement through feedback from reality. Oh, and computer science works on those kinds of problems (so do other areas of science, though).
I don’t think you have phrased “the question” differently and better; I think you have substituted two different questions. Well, maybe you think the MoL is a ragbag of different questions, not one big one. Maybe it is. Maybe it isn’t. That would be a philosophical question. I don’t see how empiricism could help. Speaking of which...
What instruments do you use to get feedback from reality vis-à-vis phenomenal consciousness and ethical values? I didn’t notice any qualiometers or agathometers last time I was in a lab.
I’ve substituted problems that philosophy is actually working on (metaethics and consciousness) for one that analytic philosophy isn’t (the meaning of life). Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple. How could computer science or science dissolve this problem? (1) By not working on it, because it’s unanswerable by the only methods by which we can be said to have answered anything, or (2) by making the problem answerable by operationalizing it or by reforming the intent of the question into another, answerable, question.
Through the process of science, we gain enough knowledge to dissolve philosophical questions or make the answer obvious and solved (even though science might not say “the meaning of life is X” but instead show that we evolved, what mind is, and how the universe likely came into being—in which case you can answer the question yourself without any need for a philosophy department).
If I want to know what’s happening in a brain, I have to understand the physical/biological/computational nature of the brain. If I can’t do that, then I can’t really explain qualia or such. You might say we can’t understand qualia through its physical/biological/computational nature. Maybe, but it seems very unlikely, and if we can’t understand the brain through science, then we’ll have discovered something very surprising and can then move in another direction with good reason.
Unless it is. Maybe the MoL breaks down into many of the other topics studied by philosophers. Maybe philosophy is in the process of reducing it.
No, not simple
You say it is “unanswerable” timelessly. How do you know that? It’s unanswered up to the present. As are a number of scientific questions.
Maybe. But checking that you have correctly identified the intent, and not changed the subject, is just the sort of armchair conceptual analysis philosophers do.
You say that timelessly, but at the time of writing we have succeeded where we have, and we haven’t where we haven’t.
But unless science can relate that back to the initial question, there is no need to consider it answered.
That’s necessary, sure. But if it were sufficient, would we have a Hard Problem of Consciousness?
But I am not suggesting that science be shut down, and the funds transferred to philosophy.
It seems actual to me. We don’t have such an understanding at present. I don’t know what that means for the future, and I don’t know how you are computing your confident statement of unlikelihood. One doesn’t even have to believe in some kind of non-physicalism to think that we might never. The philosopher Colin McGinn argues that we have good reason to believe both that consciousness is physical, and that we will never understand it.
We can’t understand qualia through science now. How long does that have to continue before you give up? What’s the harm in allowing philosophy to continue when it is so cheap compared to science?
PS. I would be interested in hearing of a scientific theory of ethics that doesn’t just ignore the is-ought problem.
Even though the Wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can’t answer that. Is the color blue the best color? We can’t answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front me? No. Is the color blue the most preferred color? I don’t know, but it can be well answered through reported preferences. I don’t know if these currently unanswerable questions will always be unanswerable, but given what I know I can only say that they will almost certainly remain unanswerable (because it’s unfeasible or because it’s a nonsensical question).
Wouldn’t science need to do conceptual analysis? Not really, though it could appear that way. Philosophy has “free will”, science has “volition.” Free will is a label for a continually argued concept. Volition is a label for an axiom that’s been set in stone. Science doesn’t really care about concepts; it just wants to ask questions such that it can answer them definitively.
Even though science might provide all the knowledge necessary to easily answer a question, it doesn’t actually answer it, right? My answer: so what? Science doesn’t answer a lot of trivial questions like what I exactly should eat for breakfast, even though the answer is perfectly obvious (healthy food as discovered by science if I want to remain healthy).
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand. Give another century or so. We’ve barely explored the brain.
What if consciousness isn’t explainable by science? When we get to that point, we’ll be much better prepared to understand what direction we need to go to understand the brain. As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy, unless there is a reasonable expectation that it will solve a problem that can be more likely solved by something else.
A scientific theory of ethics? It wouldn’t have any “you ought to do X because X is good,” but would be more of the form of “science says X,Y,Z are healthy for you” and then you would think “hey, I want to be healthy, so I’m going to eat X,Y,Z.” This is actually how philosophy works now. You get a whole bunch of argumentation as evidence, and then you must enact it personally through hypothetical injunctions like “if I want to maximize well being, then I should act as a utilitarian.”
Providing you ignore the enormous amount of substructure hanging off each option.
We generally perform some sort of armchair conceptual analysis.
Why not? Doesn’t it need to decide which questions it can answer?
First I’ve heard of it. Who did that? Where was it published?
Or impossible, or the brain isn’t solely responsible, or something else. It would have helped to have argued for your preferred option.
Give another century or so. We’ve barely explored the brain.
Philosophy generally can’t solve scientific problems, and science generally can’t solve philosophical ones.
And what about my interactions with others? Am I entitled to snatch an orange from a starving man because I need a few extra milligrams of vitamin C?
Well, Lukeprog certainly doesn’t have a limited knowledge of philosophy. Maybe you can somehow show that the problem isn’t rampant.
Sure. Should I go about showing there are no unicorns and leprechauns while I’m at it?
PS: when a restricted set of statements is used as the exemplar of a very wide and very deep field, of which the entire point is to discuss ideas and their implications, the proper response to criticism is not “oh yeah, well, prove it’s not true”.
You’re both arguing over your impressions of philosophy. I’m more inclined to agree with Lukeprog’s impression unless you have some way of showing that your impression is more accurate. Like, for example, show me three papers in meta-ethics from the last year that you think highlight what is representative of that area of philosophy.
From my reading of philosophy, the most well-known philosophers (who I’d assume are representative of the top 10% of the field) do keep intuitions and conceptual analysis in their toolbox. But when they bring them out of the toolbox, they dress them up so that they’re not prima facie stupid (and then you get a fractal mess of philosophers publishing how the intuition is wrong where their intuition isn’t, or how they shouldn’t be using intuitions, or how intuitions are useful, and so on with no resolution). If I were to take a step back and look at what philosophy accomplishes, I think I’d have to say “confusion.”
You can say this is just the way things are in philosophy, but then why should we fund philosophy?
Because some of us realize that there are types of inquiry which are valuable and useful despite the confusion they offer to hyper-systemizing brains who can’t accept any view of reality outside a broken conception of radically reductive materialism.
I’m not even remotely autistic.
How is philosophy going to get us the correct conception of reality? How will we know it when it happens? (I think science will progress us to the point where philosophy can answer the question, but by then anyone could)
Just see if people are arguing over it. Duh.
Amusing in light of Russell’s rather exotic metaphysical views.
You can understand the difference between being a rough progenitor of a historical tradition in thought, on the one hand, and the views held by an individual, correct?
Honestly, I’d expected a little better than the strategy of circling the wagons and defending the group on the site of Pure Rationality where we correct biased thinking. Turns out LW is like every other internet forum, and the focus on “rationality” makes no difference in the degree of bias underpinning the arguments?
Show me three of your favorite papers from the last year in ethics or meta-ethics that highlight the kind of philosophy you think is representative of the field. (And if you’ve been following Lukeprog’s posts for any length of time, you’d see that he’s probably read more philosophy than most philosophers. His gestalt impression of the field is probably accurate.)
Also could you expand on this as I didn’t catch it before the edit?
It’s not obvious what the “bad habits” might be, and what they are bad relative to. This reads as a claim that would be very hard to defend at face value, and without clarification it reads like a throwaway attack not to be taken seriously.
Examples of bad habits often picked up from reading too much philosophy: arguing endlessly about definitions, or using one’s own intuitions as strong evidence about how the external world works. These are bad habits relative to, you know, not arguing endlessly about definitions, and using science to figure out how the world works.
Is the problem the arguing, or the arguing endlessly? In science, there is little need to argue about definitions because Someone Somewhere has settled the issue, often by stipulation. In philosophy, there is no Someone Somewhere who conveniently does this for you. Philosophy deals with non-empirical questions (or it would be science), which means it deals with concepts, and since we access concepts with words, it deals with definitions. So the criticism that philosophers shouldn’t argue definitions is tantamount to criticising philosophy for being philosophy. Unless the problem was the “endlessly”.
Who does that? (ETA: at least for the past one hundred years) None of your examples work that way. Questions like “what is knowledge” and “what is the right thing to do” are not about the EW.
The problem is “arguing” as compared to “investigating”.
If there’s a disagreement about how human minds implement certain ideas, then it’s more productive to do experimental psychology than to discuss it abstractly, for the usual scientific reasons: nailing it down to a prediction makes sure that the idea in question is actually coherent, and also there are a lot of potential pitfalls when humans try to use their own brains to examine their own brains.
Though on the other hand, coming up with good experiments for this stuff is really tricky. As Suryc mentions above, you can’t just ask people what they mean by “intentional” or whatever, you’ll get garbage results. Just like how if you ask somebody with no linguistics knowledge to explain English grammar to you you’ll get nonsense back, even if that person is quite capable at actually writing in English.
Also: Who says that concepts are non-empirical? Doesn’t it come down to something like a scientific investigation into the operations of the human brain?
Not with current technology.
So this comes down to what you said previously about not liking people who came out of Philosophy 101, e.g., it’s an argument against a philosophical tradition that does not actually exist.
You mention naturalism as a “bad habit” for using science to understand the world?
Do you actually understand what naturalism is and what relationship it has with science?
No, he doesn’t (which is why I downvoted this comment, BTW). Luke says that even naturalistic philosophers exhibit these bad habits. He does not say that naturalism is a bad habit, or that it’s a bad habit because it uses science to understand the world.
Not quite:
“Teach” implies that engaging one’s self with “too much” mainstream philosophy will cause bad habits to arise (and make one unable to do ‘real work’, whatever that might be).
Unexamined presuppositions make a wonderful basis for discourse.
I don’t think that’s what lukeprog meant. That said, thinking ‘naturalism’ is a unitary concept that the members of some relevant linguistic community or intellectual elite share is itself a startlingly good example of the sort of practice lukeprog’s ‘intuitions aren’t shared’ meme is warning about.
The Stanford Encyclopedia article on naturalism itself begins, amusingly enough:
But calling it a “bad habit” with no justification or qualification is somehow exempt from being an equally good example of what the “corrective” warns against (a better example, in fact, given that I hadn’t expanded on naturalism at all, and certainly not with a dismissive one-liner)?
PS—the Stanford Encyclopedia is as good a “proof” as posting a link from Wikipedia. There is (of course) debate in philosophy, but to claim that “naturalism” encourages “bad habits” is just plain sloppy thinking and a strawman built against equally sloppy philosophy undergrads.
If intuitions aren’t reliable, then this entire line of thought is unreliable :-)
To be frank, although I speak for myself and not lukeprog, framing the scientific method or world-view in terms of ‘naturalism,’ or in terms of a nature/‘supernature’ dichotomy, is a bad habit. I can’t say much more than that until you explain what you personally mean by ‘naturalism.’
I don’t follow. A Stanford Encyclopedia is much better evidence for the professional consensus of philosophers than is a Wikipedia article.
Are you alluding to the fact that we all rely on intuitions in our everyday reason? If so, this is an important point. The take-away message from philosophy’s excesses is not ‘Avoid all intuitions.’ It’s ‘Scrutinize intuitions to determine which ones we have reason to expect to match the contours of the territory.’ The successes of philosophy—successes like ‘science’ and ‘mathematics’ and ‘logic’—are formalized and heavily scrutinized networks of intuitions, intuitions that we have good empirical reason to think happen to be of a rare sort that correspond to the large-scale structure of reality. Most of our intuitions aren’t like that, though they may still be useful and interesting in other respects.
I’m thinking of naturalism as broadly accepted by modern analytic philosophy, in Quine’s terms and in more modern constructions which emphasize i) that the natural world is the “only” world (this is not to be confused with a dualistic opposition to anything “supernatural”; the supernatural is simply ruled out as an option) and ii) that science is a preferred means of obtaining knowledge about said world.
I realize that’s less clear than you may want, but the vagueness of the term is part of why I found it objectionable to treat it as instilling “bad habits”.
Well, indirectly, but the specific point was that the argument presented here is an intuition about what goes on in philosophy, what constitutes the current trends and debates within the discipline, and so on, and it appears to me that it is more strawman than a rigorous reply to those activities.
Given that it’s an intuition underpinning an article about the unreliability of intuitions, well...you can appreciate the meta-humor I found there.
Of course, and as I’ve relayed in other comments, this is no insight to philosophers—philosophers already do this. We could of course point out instances where the philosopher’s argument is predicated on validating intuitions, but even there you are guaranteed to see a more nuanced position than the uncritical acceptance of common-sense intuitions, and as such even those positions mandate more than a sweeping dismissal.
And ethics/meta-ethics, moral theory, social theory, aesthetics...all of these are, at least in part, beyond the realm of the empirical, and it is a philosophical stance you have taken which puts them in the realm of the physical and empirical or else excludes their reality (if you go the eliminativist route).
These domains are arguably as successful at what they do as math and logic have been in their respective domains, and frankly they don’t operate anything like what you’ve described (re: empirically-discovered relations to the large scale of reality). This is part of why we need naturalistic philosophy, because without it you wind up with unabashed scientism like this, which sits right on the precipice of “ethical” choices which can be monstrous.
Personally I think even other forms of philosophy are not only useful, but what have been called “bad habits” by Eliezer et al. are actually central components of a lived human life. I wouldn’t be so hasty to get rid of them, and certainly not with such a sweeping set of dismissals about the primacy of science.
Define “natural world” so that it’s clearer how the above is non-tautological.
If you aren’t denying or opposing anything, then what work is “only” doing in the sense “the natural world is the only world”?
What does it mean in this context to ‘rule out as an option’ something? How does this differ from ‘opposing’ an option?
Define ‘science,’ while you’re at it. Is looking out the window science? Is logical deduction science? Is logical deduction science when your premises are ‘about the world’? Same question for mathematical reasoning. I’d think most scientists in their daily lives would actually consider logical or mathematical reasoning stronger than, ‘preferred’ over, any scientific observation or theory.
The vagueness of the term ‘naturalism’ is the primary reason it’s a bad habit to define your methods or world-view in terms of it.
I don’t know what you mean by ‘beyond the realm of the empirical.’ Plenty of logic and mathematics also transcends the observable. I think we’d get a lot further in this discussion if we started defining or tabooing ‘science,’ ‘philosophy,’ ‘empirical,’ ‘natural,’ etc.
To be honest, this sentence here pretty much sums up what I think is wrong with modern philosophy. There is virtually no content to ‘naturalism’ or ‘scientism,’ beyond the fact that both are associated with science and the former has a positive connotation, while the latter has a negative connotation. Thus we see much of the modern philosophical (and pop-philosophical) discourse consumed in hand-wringing over whether something is ‘naturalistic’ (goodscience! happy face!) or whether something is ‘scientistic’ (badscience! frowny face!), and the whole framing does nothing but obscure what’s actually under debate. Any non-trivial definition of ‘naturalism’ and ‘scientism’ will allow that a reasonable scientist might be forced to forsake naturalism, or adopt scientism, in at least some circumstances; and any circular or otherwise trivial one is not worth discussing.
In the sense that there is “no more than” that: in ontological terms, there are no other fundamental categories of being. I don’t have to explicitly deny that unicorns exist in order to rule them out of any taxonomy of equine animals.
If you’ve presupposed a worldview that allows for “supernatural” or “mystical” or Cartesian mind-substance or what have you, then of course the opposition seems obvious, but modern analytical naturalism as it stands makes no such allowance. This is why we cannot take our presuppositions for granted.
You don’t have the space on this forum for that debate. However, for pragmatic purposes, let’s (roughly) call it the social activity of institutionalized formal empirical inquiry, inclusive of the error-correcting norms and structures meant to filter out systematic errors.
Maybe if you didn’t take flippant comments and run with them you wouldn’t encounter this problem. I brought up naturalism because I found it hilarious that “even modern analytic philosophy” teaches these laughably vague “bad habits”—which you still seem surprisingly unconcerned with, given the far more serious issues there—and contemporary naturalism as practiced by many philosophers in the English-speaking world is as pro-science a set of ideas as you’ll find.
Spiraling it out into this protracted debate about whether we can accurately define naturalism—on your terms, no less—is not the point of the exercise (and I suspect it has only happened in order to take the focus off the matter at hand: that there is no adequate account of these “bad habits”, and we’re seeing an interference play to keep eyes off it).
Yes I’m well aware of the dislike of anything intrinsically opposed to the formal and computable around these parts, and I also find that position to be laughable (and a shining example of why you folks need to engage with philosophy rather than jumping head-first into troubling [and equally laughable] moral-ethical positions).
But, as per the thread, there is a more interesting and proximate criticism: your intuitions on such are unreliable, by your own lights, so you’ll pardon me if I am hardly persuaded by your fiat declaration that i) there is “no content” to a whole wide-ranging debate (with which you seem barely familiar, at that, given your introduction of yet another nonsensical opposition that might as well be fiction for all it reflects the actual process) and ii) that we should—again by decree—paint as “useless” the tools and methods used to engage in the debate.
We are only fortunate that the actual intellectual world doesn’t conduct itself like a message board.
PS There is no serious debate “between” naturalism and scientism. The latter isn’t even a “position” as such, even less so than naturalism could be.
No. It’s an argument against a philosophical tradition that does exist.
In this “Philosophy by Humans” sub-sequence, it seems like the most common response I get is, “No, philosophers can’t actually be that stupid,” even though my post went to the trouble of quoting philosophers saying “Yes, this thing here is our standard practice.”
I’ll say it again: by “intuition” they might mean “shared intuition”, in which case they are doing nothing wrong so long as there are some, and so long as they rejected purported intuitions which aren’t shared.
So? I can quote scientists saying all manner of stupid, bizarre, unintuitive things...but my selection of course sets up the terms of the discussion. If I choose a sampling that only confirms my existing bias against scientists, then my “quotes” are going to lead to the foregone conclusion. I don’t see why “quoting” a few names is considered evidence of anything besides a pre-existing bias against philosophy.
On a second and more important point, you’ve yet to elaborate on why having a debate about ethics is problematic in the first place. Your appeal to Eliezer and his vague handwaving about “bad habits” and “real work” (which range from “too vague” to “nonsensical” depending on how charitable you want to be) is not persuasive, so I’d ask again: what is wrong with philosophy doing what it is supposed to do, i.e., examine ideas?
I realize that declaring it “wrong” by fiat seems to be the rule around here, if the comments are any indication, but from the philosophical standpoint that’s a laughable argument to make, and it’s not persuasive to anyone who doesn’t already share your presuppositions.
So you’re worried about the problem of filtered evidence. Throughout this sequence, I’ve given lots of citations and direct quotes of philosophers doing things — and saying that they’re doing things — which don’t make sense given certain pieces of scientific evidence. Can you, then, provide citations or quotes of philosophers saying “No, we aren’t really appealing to intuitions in this way?” I’ll bet you can find a few, but I don’t think they’ll say that their own approach is the standard one.
You’re asking me to do all the work, here. I’ve provided examples and evidence, and you’ve just flatly denied my examples and evidence without providing any counterexamples or counterevidence. That’s logically rude.
Here, you managed to straw man me twice in a single paragraph. I never said that debates about ethics are problematic, and I never said there’s something wrong with philosophy examining ideas. I’ve only ever said that specific, particular ways of examining ideas or having philosophical debates are problematic, and I’ve explained in detail why those specific, particular methods are problematic. You’re just ignoring what I’ve actually said, and what I have not said.
Again, I’m the one who bothered to provide examples and evidence for my position. You’re the one who keeps declaring things wrong without providing any examples and evidence to support your own view. Declaring something wrong without providing reason or evidence is against the cultural norm around here, and you are the one who is violating it.
All I’ve asked you to do is at least pretend you have some familiarity with the field’s content, and how that content relates to its raison d’etre. As before, I don’t have to provide “counterevidence” that science doesn’t take luminiferous ether seriously as a hypothesis; anyone familiar with the field would already know this.
Of course you didn’t say it, because that would be stupid, but it’s implicit in the points you’ve repeatedly made, viz. “philosophers are stupid, if they only paid attention to science....” Well, they do pay attention to science; in fact there is a whole realm of philosophers who pay attention to science and make that a centerpiece of their discussion, and given philosophy’s purpose as “engagement with ideas” it is implicit that, wonder of wonders, some philosophers will take positions that disagree with the claim you’ve put forth.
That latter statement is the issue, as you said in your article that, since some philosophers accept intuitions as valid (a claim you never bothered to unpack or examine in any detail), we should consider philosophy a primitive and useless artifact of Cartesian thinking.
You’ve taken it for granted without outright saying it. Maybe if you read more philosophy you wouldn’t make these kinds of errors.
I see, so the cultural norm is to take unfavorable samples of a field you don’t like, present them as exemplars, use them as grounds to justify a giant-sized strawman against said field, complain when people don’t accept that position without criticism, and then hide behind conveniently linked rules meant to fortify your pre-existing groupthink.
Sounds far more rational than every other web forum ever.
To expand on your point, philosophers like Thomas Kuhn and Paul Feyerabend provide a vision of what sophisticated modern philosophy can do to improve the scientist’s perspective.
It’s so much fun to write that. Still, please don’t. Your point is well made in the previous paragraph—this sentence only detracts from your persuasiveness.
I don’t understand. Certainly, I’m at least “pretending” to have “some familiarity” with the field’s content, and how that content relates to its raison d’etre, by way of citing hundreds of works in the field, quoting philosophers, hosting a podcast for which I interviewed dozens of philosophers for hours on end, etc.
Of course many philosophers pay attention to science. When Eliezer wrote, “If there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it,” I replied (earlier in this sequence):
Again: you’re straw-manning me. I’ve said specific things about the ways in which many philosophers are ignoring scientific results, but I’m quite aware that they pay attention to other parts of science, and of course that many of them (e.g. the experimental philosophers) pay attention to the kinds of evidence that I’m accusing others of ignoring.
Straw man number… 5? 6? I’ve lost count. Where did I say that?
Wait, first you claim that “you said in your article that...” and in the very next paragraph you claim that I’ve “taken it for granted without outright saying it”? I’m very confused.
No. I complain when I do all the work of presenting arguments, examples, and evidence, and you simply deny it all without presenting any arguments, examples, and evidence of your own.
You’d think if this were the case you’d be able to make a more honest assessment of the field.
Alright, I’ll grant you this. You’ve still made the point that the field of philosophy has not acknowledged the unreliability of intuitions, as if this were a novel insight and not something that is taken very seriously in modern-day debates (at least), and that this is a fundamental flaw in the discipline itself.
Right here:
The implication being that Cartesian views of mind and reason are in any way relevant to modern philosophy. This isn’t even true for Continental philosophy and hasn’t been for a long time.
I agree, you are, so let’s slow down and look at my actual criticism again.
What you wrote was that philosophers accept intuitions at face value, uncritically...which isn’t true, and I responded accordingly.
What you implied, in that it follows necessarily from your explicitly-made argument, is that since some philosophers accept intuitions as valid, therefore the discipline-as-a-whole is broken. But that isn’t true; the entire point is to discuss disparate, conflicting, and even dubious ideas; this is no black mark as you’ve construed it.
A convenient way to hide behind your biases, I suppose, but I’m not sure what it accomplishes otherwise. Even the Stanford Encyclopedia’s entries on moral theory and ethics don’t back up your “unique” assessment of the field.
I don’t think this is going anywhere useful. You’re still straw-manning me and failing to provide exact counterexamples and counter-evidence. I’m moving on to more productive activities.
Improving upon this: why care about what the worst of a field has to say? It’s the 10% (Sturgeon’s law) that isn’t crap that we should care about. The best material scientists give us incremental improvements in our materials technology, and the worst write papers that are never read or do research that is never used. But what do the best philosophers of meta-ethics give us? More well-examined ideas? How would you measure such a thing? How can those best philosophers know they’re making progress? How can they improve the tools they use? Why should we fund philosophy departments?
The best ethical philosophers give us the foundations of utility calculation, clarify when we can (and can’t) derive facts and values from each other, generate heuristics and frameworks within which to do politics and resolve disputes over goals and priorities. The best metaphysicians give us scientific reasoning, novel interpretations of quantum mechanics, warnings of scientists becoming overreliant on some component of common sense, and new empirical research programs (Einstein’s most important work consisted of metaphysical thought experiments). The best logicians and linguistic philosophers give us the propositional calculus, knowledge of valid and invalid forms, etc., etc. Even if you think the modalists and dialetheists are crazy, you can be very thankful to them for developing modal and paraconsistent logics that have valuable applications outside of traditional philosophical disputes.
And, of course, philosophy in general is useful for testing the tools of our trade. We can be more confident of and skilled in our reasoning in specific domains, like physics and electrical engineering and differential calculus, when those tools have been put to the test in foundational disputes. A bad Philosophy 101 class can lead to hyperskepticism or metaphysical dogmatism, but a good Philosophy 101 class can lead to a healthy skepticism mixed with intellectual curiosity and dynamism. Ultimately, the reason to fund ‘philosophy’ departments is that there is no such thing as ‘philosophy;’ what the departments in question are really teaching is how to think carefully about the most difficult questions. The actual questions have nothing especially in common, beyond their difficulty, their intractability before our ordinary methods.
I’m a bit worried that your conception of philosophy is riding on the coat tails of long-past philosophy, where the distinctions between philosophy, math, and science were much more blurred than they are now. Being generous, do you have any examples from the last few decades (that I can read about)?
I’ll agree with you that having some philosophical training is better than none, in that it can be useful for getting a solid footing in basic critical thinking skills; but if that’s a philosophy department’s purpose, then it doesn’t need to be funded beyond that.
Could you taboo/define ‘philosophy,’ ‘math,’ and ‘science’ for me in a way that clarifies exactly how they don’t overlap? It’d be very helpful. Is there any principled reason, for example, that theoretical physics cannot be philosophy? Or is some theoretical physics philosophy, and some not? Is there a sharp line, or a continuum between the two kinds of theoretical physics?
If that’s a philosophy department’s purpose, and nothing else can fulfill the same purpose, then philosophy departments are vastly underfunded as it stands. (Though I agree the current funding could be better managed.)
But the real flaw is that we think of philosophy as a college thing. Philosophical training should be fully integrated into quite early-age education in logical, scientific, mathematical, moral, and other forms of reasoning.
I didn’t say they don’t overlap. I said the distinctions have become less blurred (I think because of the need for increased specialization in all intellectual endeavours as we accumulate more knowledge). I define philosophy, math, and science by their professions. That is, their university departments, their journals, their majors, their textbooks, and so on.
Hence, I think the best way to ask whether “philosophy” is a worthwhile endeavour is to ask “why should we fund philosophy departments?” A better way to ask that question is “why should we fund philosophy research and professional philosophers (as opposed to teachers of basic philosophy)?”
And while I think basic philosophy can be helpful in getting a footing in critical thinking, I also think CFAR is considerably better at teaching critical thinking.
I don’t see any principled reason for why we can’t all be generalists without labels. Practical reasons, yes.
I thought you were saying that the distinctions have become less blurred? Now I’m confused.
That’s fine for some everyday purposes. But if we want to distinguish the useful behaviors in each profession from the useless ones, and promote the best behaviors both among laypeople and among professionals, we need more fine-grained categories than just ‘everything that people who publish in journals seen as philosophy journals do.’ I think it would be useful to distinguish Professional Philosophy, Professional Science, and Professional Mathematics from the basic human practices of philosophizing, doing science, or reasoning mathematically. Something in the neighborhood of these ideas would be quite useful:
mathematics: carefully and systematically reasoning about quantity, or (more loosely) about the quantitative properties and relationships of things.
philosophy: carefully reasoning about generalizations, via ‘internal’ reflection (phenomenology, thought experiments, conceptual analysis, etc.), in a moderately (more than shamanic storytelling, less than math or logic) systematic way.
science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance.
Do you think these would be useful fast-and-ready definitions for everyday promotion of scientific, philosophical, and mathematical literacy? Would you modify any of them?
Yup, my bad. You caught me before my edit.
I think you’re reifying abstraction and doing so will introduce pitfalls when discussing them. Math, science, and philosophy are the abstracted output of their respective professions. If you take away science’s competitive incentive structure or change its mechanism of output (journal articles) then you’re modifying science. If you install a self-improving recursive feedback cycle with reality in philosophy, then I think you’ve recreated math and science within philosophy (because science is fundamentally concrete reasoning while math is abstract reasoning and philosophy carries both).
If I’m going to promote something to laypeople, it’s that a mechanism of recursive self-improvement is desirable. There’s plenty to unpack there, though. Like you need a measure of improvement that contacts reality.
I think your definitions are more abstract than mine. For me, mathematics, philosophy, and science are embodied brain behaviors — modes of reasoning. For you, if I’m understanding you right, they’re professions, institutions, social groups, population-wide behaviors. Sociology is generally considered more abstract or high-level than psychology.
(Of course, I don’t reject your definitions on that account; denying the existence of philosophizing or of professional philosophy because one or the other is ‘abstract’ would be as silly as denying the existence of abstractions like debt, difficulty, truth, or natural selection. I just think your abstraction is of somewhat more limited utility than mine, when our goal is to spread good philosophizing, science, and mathematics rather than to treat the good qualities of those disciplines as the special property of a prestigious intellectual elite belonging to a specific network of organizations.)
Feedback cycles are great, but we don’t need to build them into our definition of ‘science’ in order to praise science for happening to possess them; if we put each scientist on a separate island, their work might suffer as a result, but it’s not clear to me that they would lose all ability to do anything scientific, or that we should fail to clearly distinguish the scientifically-minded desert-islander for his unusual behaviors.
Also, it’s not clear in what sense mathematics has a self-improving recursive feedback cycle with reality. Actually, mathematics and philosophy seem to function very analogously in terms of their relationship to reality and to science.
I’m not sure that’s the best approach. Telling people to find a recursively self-improving method is not likely to be as effective as giving them concrete reasoning skills (like how to perform thought experiments, or how to devise empirical hypotheses, or how to multiply quantities) and then letting intelligent society-wide behaviors emerge via the marketplace of ideas (or via top-down societal structuring, if necessary). Don’t fixate first and foremost on telling people about what our abstract models suggest makes science on a societal scale so effective; fixate first and foremost on making them good scientists in their daily lives, in every concrete action.
You’re kind of understanding me. Abstractly, bee hives produce honey. Concretely, this bee hive in front of me is producing honey. Abstractly, science is the product of professions, institutions, etc. Concretely, science is the product of people on our planet doing stuff.
I’m literally trying to not talk about abstractions or concepts but science as it actually is. And of course, science as it actually is does things that we can then categorize into abstractions like feedback cycles. But when you say science is a bunch of abstractions (like I think your definitions are), then you’re missing out on what it actually is.
This is exactly why I want to avoid defining science with abstractions. It literally does not make sense if you think of science as it is. “Scientific” imports essentialism.
Mathematics is self-improving while at the same time hinging on reality. This is tricky to explain so I might come back to it tomorrow when I’m more well rested (i.e., not drunk).
No, I think that kernel (and we are speaking in the context of “fast-and-ready”) of thought is really the most important thing to convey. Speaking abstractly, even science doesn’t take that kernel seriously enough. It doesn’t question how it should allocate its limited resources or improve its function. This is costing millions of lives, untold suffering, and perhaps our species’ continued existence. But it does employ a self-improving feedback cycle on reality, which is just enough for it to uncover reality. It needs to install a self-improving feedback cycle on itself. And then we need a self-improving feedback cycle on feedback cycles. I can’t think of any abstraction more important in making progress with something.
It sounds like you’re conflating abstract/concrete with general/particular. But a universal generalization might just be the conjunction of a lot of particulars. I prefer to think of ‘abstract’ as ‘not spatially extended or localized.’ Societies are generally considered more abstract than mental states because mental states are intuitively treated as more localized. But ‘lots of mental states’ is not more abstract than ‘just one mental state,’ in the same way that thousands of bees (or ‘all the bees,’ in your example) can be just as concrete as a single bee.
We’re back at square one. I still don’t see why reasoning is more abstract than professions, institutions, etc. We agree that it all reduces to human behaviors on some level. But the ‘abstract vs. concrete’ discussion is a complete tangent. What’s relevant is whether it’s useful to have separate concepts of ‘the practice of science’ vs. ‘professional science,’ the former being something even laypeople can participate in by adopting certain methodological standards. I think both concepts are useful. You seem to think that only ‘professional science’ is a useful concept, at least in most cases. Is that a fair summary?
Counterfactuals don’t make sense if you think of things as they are? I don’t think that’s true in any nontrivial sense....
‘Scientific’ is not any more guilty of essentializing than are any of our other fuzzy, ordinary-language terms. There are salient properties associated with being a scientist; I’m suggesting that many of those clustered properties, in particular many of the ones we most care about when we promote and praise things like ‘science’ and ‘naturalism,’ can occur in isolated individuals. If you don’t like calling what I’m talking about ‘scientific,’ then coin a different word for it; but we need some word. We need to be able to denote our exemplary decision procedures, just to win the war of ideas.
‘Professional science’ is not an exemplary decision procedure, any more than ‘the buildings and faculty at MIT’ is an exemplary decision procedure. It’s just an especially effective instantiation thereof.
Maybe we’re just not approaching the problem at the same levels. When I ask about what the optimal way is to define our concepts, I’m trying to define them in a way that allows us to consistently and usefully explain them (in any number of paraphrased forms) to 8th-graders, to congressmen, to literary theorists, such that we can promote the best techniques we associate with scientists, philosophers, and mathematicians. I’m imagining how we would design a scientific+philosophical+mathematical+etc. literacy pamphlet that would teach people how to win at life. It sounds like you’re instead trying to think of a single sentence that summarizes what winning at life is, at its most abstract. ‘Adopt a self-improving feedback cycle linking you to reality’ is just a fancy way of saying ‘Behave in a way that predictably makes you better and better at doing good stuff.’ Which is great, but not especially contentful as yet. I only care about people understanding how winning works insofar as this understanding helps them actually win.
I prefer to think of it as anything existing at least partly in mind, and then we can say we have an abstraction of an abstraction, or that something is more abstract (something from category theory being a pure abstraction, while something like the category “dog” is less abstract because it connects with a pattern of atoms in reality). By their nature, abstractions are also universals, but things that actually exist, like the bee hive in front of me, aren’t particulars at the concrete level. The specific bee hive in my mind that I’m imagining is a particular, or the “bee hive” that I’m seeing and interpreting into a bee hive in front of me is also a particular, but the bee hive itself is just a “pattern” of atoms.
I think that you’re stuck in noun-land while I’m in verb-land, but I don’t think noun-land is concrete (it’s an abstraction).
Framing those concepts in terms of usefulness isn’t helpful, I think. I’d simply say that laypeople are doing something different unless they’re contributing to our body of knowledge. In which case, science as it is requires that those laypeople interact with science as it is (journals and such).
No, I mean thinking of someone as being scientific doesn’t make sense if you think of science as it is, because e.g. the sixth grader at the science fair that we all call “scientific” isn’t interacting with science as it is. We’re taking some essential properties we pattern-match in science as it is, abstracting them, and then applying them by pattern-matching.
We can imagine an immortal human being on another planet replicating everything science has done on Earth thus far. So, yes, I think it can occur in isolated individuals, but that’s only because the individual has taken on everything that science is, and not something like “carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance.”
If I’m going to apply an abstraction to what I praise in science to individuals, it’s not “being scientific” or “doing science”; it’s “working with feedback.” It’s what programmers do, it’s what engineers do, it’s what mathematicians do, it’s what scientists do, it’s what people that effectively lose weight do, and so on. It’s the kernel of thought most conducive to progress in any area.
I think we are approaching the problem at the same level. I think I have optimally defined the concepts, and I think “behave in a way that predictably makes you better and better at doing good stuff” is what needs to be communicated and not “science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance.” If we’re going to add more content, then we should talk about how to effectively measure self-improvement, how to get solid feedback and so on. With that knowledge, I think a bunch of kids working together could rebuild science from the ground up.
I’d pass on how important “behave in a way that predictably makes you better and better at doing good stuff” is.
That’s problematic, first, because it leaves mind itself in a strange position. And second because, if mathematical platonism (for example) were true, then there would exist abstract objects that are mind-independent.
You seem to be assuming that pattern-matching of this sort is a vice. If it’s useful to mark the pattern in question, and we recognize that we’re doing so for utilitarian reasons and not because there’s a transcendent Essence of Scienceyness, then the pattern-matching is benign. It’s how humans think, and we can’t become completely inhuman if our goal is to take the rest of mankind with us into the future. Not yet, anyway.
Religions are also feedback loops. The more I believe, the more my belief gets confirmed. Remarkable! The primary problem with this ultra-attenuated notion of what we want is that all the work is being done by the black-box normative terms like ‘improvement’ and ‘better’ and ‘optimal.’ Everything we’re actually trying to concretely teach is hidden behind those words.
We also need more content than ‘working with a feedback loop from reality’; that kind of metaphorical talk might fly on LessWrong, but it’s really a summary of some implicit intuitions we already share, not instruction we could convey in those words to someone who doesn’t already see what we’re getting at. After all, everything exists in a back-and-forth with reality, and everything is, for that matter, part of reality. Perhaps my formulations of what we want are too concrete; but yours are certainly too abstract and underdetermined.
This seems reasonable.
Agreed. What is critical here is whether there are better habits.