You do not want to hear about knowledge that is not legitimized by your body of essays, idols, and peers, but that does not make it trivially true.
Have you read Structure of Scientific Revolutions? Many of us have and find it very interesting. But even if you apply post-modern methods to the scientific process, you still need to explain why science can predict which planes will fly and which will not.
Yes! Thomas Kuhn is a brilliant writer and his theory is powerful. But let me ask: what do you think he is saying in that book? I am asking because I feel that we draw different conclusions from it.
Have you read Structure of Scientific Revolutions? Many of us have and find it very interesting. But even if you apply post-modern methods to the scientific process, you still need to explain why science can predict which planes will fly and which will not.
The post-modern question to science is not about whether or not science can predict reality. The question is whether or not science is produced scientifically. Or to put it another way, can science be separated from power and discourse?
can science be separated from power and discourse?
No. Obviously not. (This is not the majority position in this community).
The post-modern question to science is not about whether or not science can predict reality. The question is whether or not science is produced scientifically.
I would hope that a scientist familiar with post-modern thought would agree that producing knowledge scientifically means nothing more and nothing less than getting better at predicting reality.
My take on Kuhn?
The incommensurability of scientific theories (e.g. Aristotelian physics vs. Newtonian physics) is a real thing, but it does not imply scientific nihilism because there are phenomena. Thus, science is possible because there is “regularity” (not sure what the technical word is) when observing reality.
No. Obviously not. (This is not the majority position in this community).
Interesting, can you explain your reasoning?
incommensurability of scientific theories
Is that the thing where, from one theory, the other one looks bogus and you can’t get from one to the other? Seems to me that it doesn’t imply nihilism, because using the full power of your current mind, one model looks better than the other. It might be the same as EY’s take on the problem of induction here.
Yes, incommensurability is the problem of translating from one theory into a later theory.
Aristotelian physics, from the point of view of Newtonian physics, is absolutely stupid. It’s like Aristotle wasn’t looking at the same reality. Overstating slightly to make a point, Newtonian physics, from the point of view of relativistic physics, is manifestly false. It’s like Newton wasn’t looking at the same reality. How many times must the circle repeat before the Bayesian conclusion is that the different scientists were not looking at the same reality? By the principle of incommensurability, you can’t say that the earlier theory can be massaged into a more simplistic version of the later theory.
If different scientists are looking at a different reality, how on earth did we keep making better predictions? Thus the appeal to the regularity of phenomena, which rescues the concept of scientific progress even if we think that our model is likely to be considered utter nonsense a generation or so into the future.
ETA: The social position of science is an expansion of the halo effect point I made.
Look, I am not trying to disagree with the scientific method. It is an incredibly powerful and beneficial methodology for producing knowledge. What I am saying is:
1 - that as an institution and a belief-system, “science” does not live up to the scientific method.
2 - that it is impossible for it to do so, given what we have learned about the human condition.
1 - that as an institution and a belief-system, “science” does not live up to the scientific method.
I’m not sure what it would mean for science to “live up” to the scientific method. The scientific method is, well, a method; it’s not an ideology.
Sure, scientists are humans with power and discourse and all kinds of cognitive biases, and thus they don’t practice the scientific method with absolute perfection. And yes, I bet that there are quite a few traditions and institutions within the scientific community that could be improved. But, even with all its imperfections, science has been devastatingly effective as far as “belief systems” are concerned. As it was said upthread, science actually predicts which planes will fly and which will fall; so far, no other methodology has been able to even come close.
You are measuring success by material transformation of the world. By that standard, sure, science is more successful, but how do you justify such a standard?
I have heard this example of planes flying several times. In response I want to ask: Has flight made humans happier and safer? (Note: this question is a case example of the larger question, “Does material transformation and dominance to the extent offered by modern science improve the quality of human life?”)
Sure, there are examples of how flight technologies have made humans wealthier and more powerful. But they have also (along with shipping technologies) been the primary cause of ecological devastation. They have also birthed new forms of warfare that make killing an even more remote and apathetic process. I am by no means trying to say that flight was a bad thing. I am not a Luddite. I am not against technological innovation. My point is to question why the goods of technological advancement are used as justification for further expansion of human capacity to transform the material world, while the damages are ignored. To me that is like promoting all the benefits of cigarettes, while leaving out the damages they do. What I am trying to question in the majority of my posts is the assumption that a greater capacity to dominate material reality equals a greater benefit to humanity when every major innovation produces equal if not greater damages.
You are measuring success by material transformation of the world.
I would argue that our ability to “materially transform the world” (which is material) is a direct consequence of our ability to acquire progressively more accurate models of the world.
Has flight made humans happier and safer?
Yes. Do you disagree ? I am somewhat surprised by your question, because the answer seems obvious, but I could be wrong. Still, you say,
I am by no means trying to say that flight was a bad thing.
So… it sounds like you agree, maybe ?
What I am trying to question in the majority of my posts is the assumption that a greater capacity to dominate material reality equals a greater benefit to humanity...
This is, at best, an argument against technology, but not against science.
… you conveniently do not address some of the examples I provide of the negatives of flight. I am not against either technology or science in moderation; it is just that I do not think such moderation exists in the current state of things.
This is, at best, an argument against technology, but not against science.
No, it is an argument against the ideology that endless manipulation/dominance of the material world is purely beneficial. Science is as much an attempt to dominate/manipulate reality as technological development.
you conveniently do not address some of the examples I provide of the negatives of flight.
Oh, I agree that there are negatives, I just think that the positives outweigh them. I can defend my position, but first, let’s clear up this next point:
Science is as much an attempt to dominate/manipulate reality as technological development.
I’m not sure I understand what you mean by “dominate/manipulate”. As I see it, science is an attempt to understand reality, and technology is an attempt to manipulate it. Do you have different definitions of “science” and “technology” in mind ? Obviously, a certain amount of technology is required in order for science to progress—microscopes and telescopes don’t pop out of thin air ex nihilo—but I think the distinction I’m making is still valid.
Science doesn’t motivate itself. The social purpose of learning to make better predictions (science) is to be better at controlling the environment.
The fact that we can control an environment doesn’t imply that we should control it that way, and Boyi seems to be conflating those points. But that doesn’t change the social purpose of science.
ETA: Understanding reality is what science says it does. But from a functional point of view, it is irrelevant whether the model is “true” because all that matters is whether the model makes accurate predictions.
I agree with what dlthomas said. In fact, most scientists I know are pursuing science out of intrinsic interest, like he said—though that’s just my personal experience, which may not be representative.
But from a functional point of view, it is irrelevant whether the model is “true” because all that matters is whether the model makes accurate predictions.
What’s your definition of “true”, then, besides “makes accurate predictions” ?
What’s your definition of “true”, then, besides “makes accurate predictions” ?
I did say that I was doing a functional analysis. The social purpose of labeling a scientific statement as true is to differentiate statements that are useful in making accurate predictions from those that are not useful for making predictions. Also, see my response to dlthomas.
If we stop using functional analysis, the question of truth remains. Personally, I have a lot of trouble coming to a satisfying conclusion about the concept, because I think the hypothesis of the incommensurability of scientific theories is strongly supported by the evidence. Notwithstanding that incommensurability, I think that the ability of science to make accurate predictions is based on the regularity of phenomena. I wrote this earlier, which is a slightly more detailed version of the same point.
The social purpose of labeling a scientific statement as true is to differentiate statements that are useful in making accurate predictions from those that are not useful for making predictions
I may be exposing my ignorance here, but I don’t understand what you mean by a “social purpose”. The purpose you describe sounds like an entirely pragmatic purpose to me; i.e., it’s the one that makes sense if you want to discover more about the world—but perhaps this is also what you meant ?
I wrote this earlier, which is a slightly more detailed version of the same point
I read that comment, and I disagree with its premise: “It’s like Aristotle [and Newton] wasn’t looking at the same reality”. Both Newton and Aristotle (ok, Aristotle not as much) explain not only their conclusions, but the evidence and reasoning they used to arrive at these conclusions, and it’s rather obvious why they made the mistakes they made… it’s because they were, in fact, looking at the same reality we now inhabit. You’d make the same mistakes too, today, if you knew nothing of modern science but tried to figure out how the world worked.
Furthermore, Newton wasn’t even all that terribly wrong (again, Aristotle was a ways off). If I want to predict the orbit of our Moon with a reasonable degree of certainty, or if I simply want to lob a rock across the top of an enemy fortress’s walls with my trebuchet, I don’t need relativity.
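As a rough back-of-the-envelope check, taking the Moon’s orbital speed to be about 1 km/s, the leading relativistic correction is of order

$$\gamma - 1 \approx \tfrac{1}{2}\left(\tfrac{v}{c}\right)^2 \approx \tfrac{1}{2}\left(\tfrac{1.0\times10^{3}}{3.0\times10^{8}}\right)^2 \approx 6\times10^{-12},$$

i.e. parts in a trillion; a trebuchet projectile at tens of metres per second is smaller still. Newtonian mechanics is more than adequate at that level of precision.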
If different scientists are looking at a different reality, how on earth did we keep making better predictions? Thus the appeal to the regularity of phenomena, which rescues the concept of scientific progress...
You make it sound as though the “regularity of phenomena” is some kind of a trick that people invented so they could keep getting tenure, or something. I, on the other hand, would claim that it’s simply the most parsimonious assumption, given our observations.
I don’t understand what you mean by a “social purpose”
It’s not a big deal. I was trying to be precise to avoid the appearance of a naive claim like “purpose is an objective property of things,” which is clearly false. Purpose is only meaningful as a reference to something, and I’m referencing society.
I read that comment, and I disagree with its premise.
The Aristotle / Newton comparison is meant to be evidence for the hypothesis of incommensurability of scientific theories. If it doesn’t convince you, then I regret that I’m not a good enough historian of science to present additional evidence. (For example, the issues about phlogiston do not seem like compelling evidence for the theory to me, although experts in Philosophy of Science apparently disagree). The only other point in favor of incommensurability of scientific theories is something like “It’s awfully lucky that scientific theories are commensurable, because theories of everything that are not scientific (i.e. moral theories) are definitely incommensurable.”
Anyway, disbelieving the scientific incommensurability hypothesis (SIH) means that the point about phenomena is not all that interesting or insightful. But if you believe SIH, then scientific nihilism (i.e. the view that there is no objective reality at all) is very tempting. But scientific nihilism must be rejected because science keeps making accurate predictions. Not only that, the predictions keep getting better (e.g. once we didn’t know how to build computers; now we do). So even if we reject the idea of accurate scientific models based on the SIH, we are still committed to some sort of regularity, because otherwise accurate prediction is extremely unlikely. That’s phenomena: sort of the middle ground between scientific nihilism and a belief in the accuracy of scientific models.
I was trying to be precise to avoid the appearance of a naive claim like “purpose is an objective property of things,” which is clearly false.
Ah, yes, agreed.
The Aristotle / Newton comparison is meant to be evidence for the hypothesis of incommensurability of scientific theories.
I think I might be misunderstanding what the word “incommensurability” means. I thought that it meant, “the performance of theory A cannot be compared with the performance of theory B”, but in the case of Aristotle/Newton/Einstein, we can definitely rank the performance (in the order I listed, in fact). Aristotle’s Laws of Motion are more or less (ok, closer to the “less” side perhaps, but still) useful, as long as you’re dealing with solid objects on Earth. Their predictive power isn’t great, but it’s not zero. Newton’s Laws are much more powerful, and relativity is so powerful that it’s overkill in many cases (f.ex. if you’re trying to accurately lob a rock with a trebuchet). Each set of laws was devised to explain the best evidence that was available at the time; I see nothing incommensurate about that. But, again, it’s possible that I’m using the word incorrectly.
because theories of everything that are not scientific (i.e. moral theories) are definitely incommensurable.
I am not convinced that they are. In fact—again, assuming I’m using the word correctly—how can theories be incommensurable and yet falsifiable ? And if a theory is not falsifiable, it’s not very useful, IMO (nor is it a theory, technically).
As I use incommensurability, I mean that the basic concepts in one theory cannot be made to correspond with the basic concepts of another theory.
At bottom, Aristotelian physics says that what needs to be explained is motion. In contrast, Newtonian physics says that what needs to be explained is acceleration. I assert that there is no way to import principles for explaining motion into a theory that exists to explain acceleration. In other words, Aristotelian physics is not a simpler and more naive form of Newtonian physics. You can produce a post-hoc explanation of the differences like your invocation of the limits of observable evidence (but see this discussion). I find post-hoc explanation unsatisfying because scientists talk as if they can ex ante predict (1) what sorts of new evidence science needs to improve and (2) what the “revolutionary” new theories will look like. And yet that doesn’t seem to be true historically.
And if a theory is not falsifiable, it’s not very useful, IMO (nor is it a theory, technically).
There is some unfortunate equivocation in the word theory (“Theory of Gravity” vs. “Utilitarianism: A Moral theory”). But something like Freudian thought is unified(-ish) and coherent(-ish). What is wrong with referencing “Freudian theory”? That doesn’t reject Popper’s assertion that Freudian thought isn’t a scientific theory (because Freudian thought isn’t falsifiable). On falsifiability more generally, I’m not sure what it means to ask whether utilitarianism (or any moral theory) is falsifiable.
At bottom, Aristotelian physics says that what needs to be explained is motion. In contrast, Newtonian physics says that what needs to be explained is acceleration. I assert that there is no way to import principles for explaining motion into a theory that exists to explain acceleration.
What about “V = a * t” ? That said, AFAIK “at bottom” Newton didn’t really want to explain acceleration, or motion, or any abstract concept like that; he wanted to know why the planets appear at certain places in the sky at certain times, and not others—but he could pinpoint the position of a planet much better than Aristotle could.
And I think we can, in fact, correspond Newtonian concepts to Aristotelian ones, if only by pointing out which parts Aristotle missed—which would allow us to map one theory to the other. For example, we (or Archimedes, even) could talk about density and displacement, and use it to explain the parts that Aristotle got right (most rocks sink in water) as well as the parts he got wrong (actually some porous rocks can float).
What is wrong with referencing “Freudian theory”?
Nothing really, it’s just that most people around here, AFAIK, mean something like “a scientific, falsifiable, well-tested theory” when they use the word.
That doesn’t reject Popper’s assertion that Freudian thought isn’t a scientific theory (because it isn’t falsifiable).
If it’s unfalsifiable, what good is it ? Isn’t that the same as saying, “it has no explanatory power” and “it lacks any application to anything” ?
On falsifiability more generally, I’m not sure what it means to ask whether utilitarianism (or any moral theory) is falsifiable.
I see utilitarianism as more of a recipe (or an algorithm) than a theory, so it doesn’t need to be falsifiable per se.
For theories to be commensurate, you need to be able to move all the interesting insights of each theory into the other and still have the same insight. Sure, Aristotle and Newton seemed to agree on the definition of velocity and acceleration. But there’s no way to state “An object in motion will tend to stay in motion” as a conclusion of Aristotelian physics because the caveats Aristotle would want to insert would totally change the meaning. (As an aside, I’m making a point about the theories, not the scientists. Boyi might find Newton’s motivation interesting, but I’m trying to limit the focus to the theories themselves).
The point about moral “theory” is sufficiently distinct that I hope you’ll forgive my desire to move it elsewhere just to make this conversation easier to follow.
For theories to be commensurate, you need to be able to move all the interesting insights of each theory into the other and still have the same insight.
In this case, I don’t think I fully understand what you mean by “insights” being “the same”. Any two scientific theories will make different models of reality, by definition; if they didn’t, they’d be the same theory. So, if you go the extreme route, you could say that all theories are incommensurate by definition, but this interpretation would be trivial, since it’d be the same as saying, “different theories are different”.
I agree that there’s “no way to state ‘An object in motion will tend to stay in motion’ as a conclusion of Aristotelian physics”, but that’s because Aristotelian physics is less correct than Newtonian mechanics. But there is a way to partially map Newtonian mechanics to Aristotelian physics, by restricting our observations to a very specific set of circumstances (relatively heavy objects, an atmosphere, the surface of Greece, etc.). Similarly, we can map relativity to Newtonian mechanics (relatively heavy objects, slow speeds, etc.). It seems odd to say that these theories are totally incommensurate, while still being able to perform this kind of mapping.
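The slow-speed mapping can even be written down explicitly; for example, expanding the relativistic kinetic energy in powers of $v/c$ gives

$$E_k = (\gamma - 1)mc^2 = \tfrac{1}{2}mv^2 + \tfrac{3}{8}\,\frac{mv^4}{c^2} + \dots,$$

so for $v \ll c$ the correction terms are negligible and the Newtonian $\tfrac{1}{2}mv^2$ drops out as the limiting case.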
In fact, we perform this kind of reduction every day, even in practical settings. When I want to drive from point A to point B, Google Maps tells me that the Earth is flat, and I implicitly believe that the Earth is flat. But if I want to fly to China, I have to discard this assumption and go with the round-Earth model. I see nothing philosophically troubling about that—why use an expensive scalpel when a cheap mallet works just as well ?
Boyi might find Newton’s motivation interesting, but I’m trying to limit the focus to the theories themselves
I was trying to make a point that scientific theories are not just about moving abstract concepts around; their whole purpose is to make predictions about our observations. This is what differentiates them from pure philosophy, and this is also what makes it possible to compare one theory to another and rank them according to correctness and predictive power—because we have an external standard by which to judge them.
The point about moral “theory” is sufficiently distinct that I hope you’ll forgive my desire to move it elsewhere just to make this conversation easier to follow.
I can’t write it better than Feyerabend. My argument about Aristotelian and Newtonian physics is a paraphrase of section 5 of his argument, starting at pg. 94, and ending at about 101.
ETA: And I looked at it again and it’s missing 95-96, where some of the definitions are. If there’s interest, I’ll type it up, because I think it addresses the criticisms fairly well.
Ok, I have to admit that I haven’t read the entire book, but only skimmed the section you mentioned—because my time is limited, but also because, in its infinite wisdom, Google decided to exclude some of the pages.
Still, I can see that Feyerabend is talking about the same things you’re talking about; but I can’t see why those things matter. Yes, Aristotle had a very different model of the physical world than Newton; and yes, you can’t somehow “plug in” Aristotelian physics into Newtonian mechanics and expect it to work. I agree with Feyerabend there. But you could still go the other way: you can use Newtonian mechanics, as well as what we know of Aristotle’s environment, to explain why Aristotle got the results he did, and thus derive a very limited subset of the world in which Aristotle’s physics sort of works. This does not entail rewriting the entirety of Newtonian mechanics in terms of Aristotelian physics or vice versa, because Aristotle was flat out wrong about some things (a lot of things, actually). Feyerabend seems to believe that this makes the two theories incommensurate, but, as I said above, by that standard the word “incommensurate” becomes synonymous with “different”, which is not informative. I think that Feyerabend’s standards are simply too high.
I was also rather puzzled by something that Feyerabend says on page 98, toward the bottom. He says that “impetus” and “momentum” would give you the same value mathematically, and yet we can’t treat them as equivalent, because they rest on different assumptions. They give you the same answer, though ! Isn’t this what science is all about, answers ?
Let me illustrate my point in a more flowery way. Let’s say that Aristotle, Newton, and Einstein all went to a country fair together, and entered the same block-pushing contest. The contestant randomly picks a stone block out of a huge pile of blocks of different sizes, and then a tireless slave will push the block down a lane (the slave is well-trained and always pushes the block with the same force). The contestant’s job is to predict how far the block will slide before coming to rest. The contestant will win some amount of money based on how close his prediction was to the actual distance that the block traveled.
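(For concreteness, here is one toy way the contest could score the contestants: treat each theory as a black box that maps a block to a predicted distance, and rank the boxes by prediction error. The stand-in rules and all the numbers below are invented for illustration; they are not anyone’s actual physics.)

```python
# Toy scoring for the block-pushing contest: each "theory" is an opaque
# function from block mass to predicted sliding distance, and contestants
# are ranked purely by how far their predictions land from the measured
# distances. Both rules and all numbers are invented for illustration.

measured = {50: 4.1, 100: 2.0, 200: 1.0}   # block mass (kg) -> distance (m)

def theory_a(mass_kg):
    # Crude stand-in rule: heavier blocks stop sooner.
    return 180.0 / mass_kg

def theory_b(mass_kg):
    # Better-calibrated rule of the same general shape.
    return 203.0 / mass_kg

def mean_abs_error(theory):
    """The external standard: average gap between prediction and measurement."""
    return sum(abs(theory(m) - d) for m, d in measured.items()) / len(measured)

for name, th in [("theory A", theory_a), ("theory B", theory_b)]:
    print(name, "error:", round(mean_abs_error(th), 3))
# Ranking the contestants requires no translation between the theories'
# internal concepts, only a comparison of their predictions.
```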
As far as I understand, Feyerabend is saying either that a) Aristotle would win less money than Newton who would win less than Einstein, but we have no idea why, or that b) we can’t know ahead of time who will win more money. Both options look disingenuous to me, but it’s quite likely that I am misinterpreting Feyerabend’s position. What do you think ?
I was also rather puzzled by something that Feyerabend says on page 98, toward the bottom. He says that “impetus” and “momentum” would give you the same value mathematically, and yet we can’t treat them as equivalent, because they rest on different assumptions. They give you the same answer, though ! Isn’t this what science is all about, answers ?
If we imagine a test given by an Aristotelian physicist, defining impetus with the Newtonian definition of momentum would get no points (and vice versa). Feyerabend says
. . . the impetus is supposed to be something that pushes the body along, the momentum is the result rather than the cause of [the body’s] motion
In other words, impetus is meant to explain, while momentum is something to be explained. The point is that it’s very odd that two theories on the same subject disagree about what explains and what needs to be explained. (Imagine if one scientist proposed that cold caused ice, and the next generation of scientists proposed that ice caused cold, while making more accurate predictions). In the same way that impetus is a primary explanation for Aristotle, force is a primary explanation for Newton. And impetus and force are nothing alike. The assertion is that this type of difference is more than saying that Newton had better data than Aristotle.
In your hypothetical, I think that Feyerabend says something like (a). Perhaps “Aristotle would win less money than Newton who would win less than Einstein, but the naive scientific method cannot explain why.” For some perspective, Feyerabend is opposing Ernest Nagel and logical positivism, which asserts that empirical statements are true by virtue of their correspondence with reality. If you believe Newtonian physics, the causal explanation “Impetus” doesn’t correspond with any real thing (because momentum does not explain, but is to be explained). You could bite the bullet and accept that impetus is a false concept. But if you do that, then a theory based on lots of false concepts makes predictions in the block-push contest that do substantially better than chance. How can a false theory do that?
In other words, impetus is meant to explain, while momentum is something to be explained.
If that’s what Feyerabend is saying, then he’s confusing the map for the territory:
The point is that it’s very odd that two theories on the same subject disagree about what explains and what needs to be explained.
That would indeed be odd, but as I understand it, both theories are trying to explain why objects (such as stone blocks or planets) behave the way they do. Both “impetus” and “momentum” are features of the explanatory model that the scientist is putting together. Aristotle believed (according to my understanding of Feyerabend) that “impetus” was a real entity that we could reach out and touch, somehow; Newton simply used “momentum” as a shorthand for a bunch of math, and made no claim about its physical or spiritual existence. As it turns out, “impetus” (probably) does not have an independent existence, so Aristotle was wrong, but he could still make decent predictions, because the impetus’s existence or lack thereof actually had no bearing on his calculations—as long as he stuck to calculating the motion of planets or rocks. In the end, it’s all about the rocks.
Perhaps “Aristotle would win less money than Newton who would win less than Einstein, but the naive scientific method cannot explain why.”
What is the “naive scientific method”, in this case ? How is it different from the regular kind ?
If you believe Newtonian physics, the causal explanation “Impetus” doesn’t correspond with any real thing (because momentum does not explain, but is to be explained). You could bite the bullet and accept that impetus is a false concept.
No, you can’t, since the existence of impetus as an independent entity is unfalsifiable (if I understand it correctly). The best you can do is say, “this impetus thing might exist or it might not, but we have no evidence that it does, so I’m going to pretend that it doesn’t until some evidence shows up, which it never will, since the concept is unfalsifiable”. Aristotle probably would not have said that, so that’s another thing he got wrong.
Imagine if one scientist proposed that cold caused ice, and the next generation of scientists proposed that ice caused cold, while making more accurate predictions
The statements “ice causes cold” and “cold causes ice” are both falsifiable, I think, in which case the “ice causes cold” theory would make less accurate predictions. It might fail to account for different freezing temperatures of different materials, or for the fact that the temperature of a liquid will not decrease beyond a certain point until the entire volume of the liquid has frozen, etc.
I think that Feyerabend is mostly talking about maps, not territory. I shouldn’t have said naive scientific method, because naive is unnecessarily snarky and I’m talking about a different basic philosophy of scientists than the scientific method. The basic “truth theory” of science is that we make models and by adding additional data, we can make more accurate models. But in some sense, the basic theory says that all models are “true.”
That leaves the obvious question of how to define truth. “Makes accurate predictions” is one definition, but I think most scientists think that their models “describe” reality. The logical positivists tried to formalize this by saying that models (and statements in general) were true if they “corresponded” with reality. Note that this is different from falsifiability, which is basically a formal way of saying “stick your neck out.” (i.e. the insight that if your theory can explain any occurrence, then it really can’t explain anything) The Earth suddenly reversing the direction of its orbit would falsify impetus, momentum, relativity, and just about everything else human science knows or has ever thought it knew, but that doesn’t tell us what is true.
For the logical positivist, when one says that “impetus does not have an independent existence” that means “impetus is false.” There is some weirdness in a “false” theory making accurate predictions. To push on the map/territory metaphor slightly, if Columbus, Magellan, and Drake all came back with different maps of the world but all clearly got to the same places, we would be justified in thinking that there was something weird going on. Yet if you adopt the logical positivist definition of truth, that seems to be exactly what is happening. At the very least, the lesson is that we should be skeptical of the basic theory’s explanation of what models are.
But in some sense, the basic theory says that all models are “true.”
I really don’t think so. Let’s pretend that my theory says that lighter objects always fall slower than heavier ones, whereas your theory says that all objects always fall at the same rate. Logically speaking, only one of those theories could be true, seeing as they state exactly opposite things.
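(As a quick check within the Newtonian model, and neglecting air resistance, the acceleration of a dropped object near the Earth’s surface is

$$a = \frac{F}{m} = \frac{1}{m}\cdot\frac{GMm}{r^2} = \frac{GM}{r^2} \approx 9.8\ \mathrm{m/s^2},$$

independent of the object’s mass, so the two toy theories really do make contradictory claims and at most one of them can be right.)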
In addition, if I believe that the Moon is made out of green cheese, and so does everyone else; and then we get to the Moon and find a bunch of rocks but no cheese—then my theory was false. I could make my green cheese theory as internally consistent as I wanted, but it’d still be false, because the actual external Moon is made of rocks, whereas the theory says it’s made of cheese.
That leaves the obvious question of how to define truth.
“Makes accurate predictions” is one definition, but I think most scientists think that their models “describe” reality.
What’s the difference ?
The Earth suddenly reversing the direction of its orbit would falsify impetus, momentum, relativity, and just about everything else human science knows or has ever thought it knew, but that doesn’t tell us what is true.
Well, no, but it would tell us that lots of things we thought were true are probably false. In order to figure out what’s likely to be true, we’d have to construct a bunch of new models, and test them. I don’t see this as a problem; and in fact, this happens all the time—see the orbit of Mercury, for example.
For the logical positivist, when one says that “impetus does not have an independent existence” that means “impetus is false.” There is some weirdness in a “false” theory making accurate predictions.
I wouldn’t say that “impetus is false” (at least, not in the way that you mean), because it’s actually worse than false—it’s irrelevant. There’s no experiment you can run, in principle, that will tell you whether “m*v” is caused by impetus or invisible gnomes. And if you can’t ever tell the difference, then why bother believing in impetus (as an actual, non-metaphorical entity) or gnomes (ditto) ? Aristotle may not have been aware of anything like Occam’s Razor (I don’t know whether he was or not), but that’s ok. Aristotle was wrong. Scientists are allowed to be wrong, that’s what science is all about (though Aristotle wasn’t technically a scientist, and that’s ok too).
if Columbus, Magellan, and Drake all came back with different maps of the world but all clearly got to the same places, we would be justified in thinking that there was something weird going on… At the very least, the lesson is that we should be skeptical of the basic theory’s explanation of what models are.
I don’t see why you’d make the logical leap from “These three explorers had different maps but got to the same place”, directly to, “we must abandon the very idea of representing territory schematically on a piece of vellum”, especially when you know that explorers who rely on maps tend to get lost a lot less often than explorers who just wing it. Instead of abandoning all maps altogether, maybe you should figure out what piece of information the explorers were missing, so that you could make better maps in the future.
There’s no experiment you can run, in principle, that will tell you whether “m*v” is caused by impetus.
Is it really your position that no experiment can tell whether something is a cause or an effect? That sounds like an assertion that the statement “gravity is a cause of motion, not an effect” is not meaningful.
I’d like truth to be simple. For practical purposes, it is simple. But “simple” truth doesn’t stand up to rigorous examination, in much the same way that a “simple” definition of infinity doesn’t work.
Is it really your position that no experiment can tell whether something is a cause or an effect?
Sorry, no, that wasn’t what I meant. As far as I understand—and my understanding might be incorrect—Aristotle believed that moving objects are imbued with this substance called “impetus”, which, according to Aristotle, is what imparts motion to these objects. He could calculate the magnitude of impetus as “m*v”, but he also proposed that impetus (which, according to Aristotle, does exist) is undetectable by any material means, other than the motion of the objects.
In a way, we can imagine two possible universes:
Universe 1: Impetus imparts motion to objects but is otherwise undetectable; we can estimate its effects as “m*v”.
Universe 2: There’s no such thing as “impetus”, though m*v is a useful feature of our model.
Is there any way to tell, in principle, whether you are currently living in Universe 1 or Universe 2 ? If the answer is “no”, then it doesn’t matter whether impetus is a cause or an effect, because it is utterly irrelevant.
Contrast this with your “ice causes cold vs cold causes ice” scenario. In this case, ice and cold are both physically measurable, and we can devise a series of experiments to discover which causes which (or whether some other model is closer to the truth).
But “simple” truth doesn’t stand up to rigorous examination,
I would argue that if your rigorous examination cannot explain your simple, useful, and demonstrably effective notion of truth, then the problem is with your examination, not your notion of truth.
in much the same way that a “simple” definition of infinity doesn’t work.
What is a “simple” definition of infinity, and how does it differ from the regular kind ? As far as I understand, infinity is a useful mathematical concept that does not directly translate into any scientific model, but, as usual, I could be wrong.
Impetus imparts motion to objects but is otherwise undetectable
I don’t think an Aristotelian physicist would say that impetus is “otherwise undetectable” any more than a modern physicist would say “gravity causes objects to move, but is otherwise undetectable.”
I would argue that if your rigorous examination cannot explain your simple, useful, and demonstrably effective notion of truth, then the problem is with your examination, not your notion of truth.
There are lots of statements that we desire to assign a truth value to that are much more complicated than the number of sheep in the meadow. Kant described a metaphysical model that was not susceptible to empirical verification (that’s a feature of metaphysical models generally). When we say the model is true (or false), what do we mean? If you want to abandon metaphysics, then what does it mean to say something like “qualia have property X” is true?
As far as I understand, infinity is a useful mathematical concept that does not directly translate into any scientific model, but, as usual, I could be wrong.
Is it your position that all truths are “scientific” truths? Does that mean that non-empirical assertions can’t be labelled true (or false)?
I mentioned infinities as an example of an unintuitive truth, in order to argue by analogy that the intuitiveness of EY’s “definition” of truth does not show that the definition is complete. Folk mathematics would assert something like “All infinities are the same size” and that’s just not true.
I don’t think an Aristotelian physicist would say that impetus is “otherwise undetectable” any more than a modern physicist would say “gravity causes objects to move, but is otherwise undetectable.”
Fair enough, but then, how would an Aristotelian physicist propose to detect impetus, if not by observing the motion of objects ? I’m pretty sure I’m missing the answer to this part, so I genuinely want to know.
The modern physicist doesn’t have to answer this question, because he treats gravity as a useful abstraction in his model. The Aristotelian physicist, on the other hand, believes that impetus is a real thing that actually exists and is causing objects to move. And if the answer is, “you can only detect impetus by observing the motion of objects and using the formula m*v”, then it becomes trivially easy to answer your original question, “how can you explain the fact that Aristotelian physics and Newtonian mechanics make the same predictions despite being so different”. The answer then becomes, “because both of them describe the motion of objects in the same way, one of them just has this extra bit that doesn’t really change much”. As I said though, I may be missing a piece of the puzzle.
If you want to abandon metaphysics, then what does it mean to say something like “qualia have property X” is true?
I personally think that qualia, along with free will, are philosophical red herrings, so I’m not terribly interested in their properties. That sounds like a topic for a separate argument, though...
Kant described a metaphysical model that was not susceptible to empirical verification (that’s a feature of metaphysical models generally). When we say the model is true (or false), what do we mean? … Is it your position that all truths are “scientific” truths?
I would say that statements such as “2+2=4” and “if all men are mortal, and Socrates is a man, then Socrates is mortal” are either true by definition, or derive logically from statements that are true by definition. There’s nothing wrong with that, obviously, but scientific truth is a bit different, since in science, you are not free to pick any axioms you want—instead, the physical universe does that for you.
That said, I’m not sure how your question relates to our main topic: the incommensurability of scientific truths, specifically.
The modern physicist doesn’t have to answer this question, because he treats gravity as a useful abstraction in his model.
A while ago, I said to Boyi that the best of post-modern thought gets co-opted into more mainstream thought. If you think gravity is only a useful abstraction, not “a real thing that actually exists and is causing objects to move,” then you are already much, much closer to Feyerabend than to the logical positivists. As a sociological fact, I assert that most scientists (especially in the “hard” sciences) take a position closer to “gravity is a real thing” than “gravity is a useful abstraction” (if not for gravity in particular, then for whatever fundamental explanatory objects they assert).
The incommensurability of scientific models (I shouldn’t have said truths) is the assertion that an earlier scientific model is not necessarily a simpler version of a later scientific model. I’ve made the best case that I can about Aristotle vs. Newton. The lesson is to be suspicious of the “truth” of scientific models. Because I think most scientists want to say something stronger about the model than “makes more accurate predictions.”
If you think gravity is only a useful abstraction, not “a real thing that actually exists and is causing objects to move,” then you are already much, much closer to Feyerabend than to the logical positivists. As a sociological fact, I assert that most scientists (especially in the “hard” sciences) take a position closer to “gravity is a real thing” than “gravity is a useful abstraction
Isn’t that the whole point of (for example) the search for the Higgs Boson ? Gravity is an abstraction, and we’re trying to refine the abstraction by discovering what is causing the real phenomenon that we observe. Of course, that discovery will not represent the world as it really, truly is, either; but at least it’ll be a bit closer than just “GMm / r^2”. I think there’s a big difference between the scientific concept of an abstraction, which refers to a simplified and incomplete model of reality; and the post-modern concept, which treats every abstraction as just another narrative that is socially constructed and does not relate to any external phenomena.
The incommensurability of scientific models (I shouldn’t have said truths) is the assertion that an earlier scientific model is not necessarily a simpler version of a later scientific model
If this is all you’re saying, then I can fully endorse this statement—but then, as I said before, it basically boils down to saying, “some earlier scientific models were pretty much wrong”. This statement is true, but not very interesting.
Because I think most scientists want to say something stronger about the model than “makes more accurate predictions.”
Like what ? Isn’t that the entire point of the model, to make predictions ?
Let me use another analogy. At one point, people believed that all swans were white; in fact, the very term “black swan” is an idiom meaning “something that is completely unexpected, contradicts most of what we know, and is likely disastrous”. Of course, today we know that black swans do exist.
So, let’s say that I, having never seen a black swan, believe that all swans are white. You believe that some swans are black. Our two models of the world are incommensurate; logically, only one of them can be true. And yet, if I have seen plenty of white swans, but never a black one, I’d be perfectly justified in believing that my model is (probably) true (until you show me some evidence to the contrary). Do you think this means that we should be “suspicious” of the entire notion of predicting the color of the next swan one might come across ?
Let me use another analogy. At one point, people believed that all swans were white; in fact, the very term “black swan” is an idiom meaning “something that is completely unexpected, contradicts most of what we know, and is likely disastrous”. Of course, today we know that black swans do exist.
So, let’s say that I, having never seen a black swan, believe that all swans are white. You believe that some swans are black. Our two models of the world are incommensurate; logically, only one of them can be true. And yet, if I have seen plenty of white swans, but never a black one, I’d be perfectly justified in believing that my model is (probably) true (until you show me some evidence to the contrary). Do you think this means that we should be “suspicious” of the entire notion of predicting the color of the next swan one might come across ?
You are conflating theory conflict with theory commensurability. The fact that theories make different predictions does not prove that the theories are incommensurable. For example, the white-swan theory predicted that there were no black swans, while the black-swan theory predicted that some black swans existed. But both theories mean the SAME thing by swan, so they are commensurable theories.
In addition, making similar predictions does not mean that theories are commensurable. I think there was a time when epicycle theory and heliocentric theory made similar predictions of planetary motion. Notwithstanding this agreement, there is no way to translate the concepts of Ptolemaic astronomy into heliocentric astronomy, which is what I mean when I say incommensurable.
In reading this discussion of Feyerabend, it seems like I’m defending a position that Feyerabend did not actually endorse. As you say:
I think there’s a big difference between the scientific concept of an abstraction, which refers to a simplified and incomplete model of reality; and the post-modern concept, which treats every abstraction as just another narrative that is socially constructed and does not relate to any external phenomena.
According to that discussion, Feyerabend is fully post-modern as you describe it. (This was the position Boyi was articulating, and I think we agree that it has trouble explaining the success of science). I’m trying to defend a philosophy of science that “treats every abstraction as a narrative that is socially constructed, but does (somehow) relate to external phenomena.”
Isn’t that the entire point of the model, to make predictions ?
Eliezer’s essay that you linked implies that one purpose of science is to know true facts about the world. Gravity isn’t an abstraction of the (hypothetical) Higgs boson. It’s a property of the particle (or whatever it is; I’m not up on the physics). I’m articulating a position in which we don’t know certain kinds of facts (i.e. models do not “correspond” to reality), but are nonetheless able to make accurate predictions.
You are conflating theory conflict with theory commensurability. The fact that theories make different predictions does not prove that the theories are incommensurable.
Technically, you’re right; my swan example wasn’t fully analogous. I could still argue that one theory meant “swan” to be “a bird that is exclusively white”, whereas the other theory allows swans to be white or black, and thus the two theories do mean different things by the word “swan”… but I don’t know if you or Feyerabend would agree; nor do I think that it’s terribly important.
What’s more important is that I disagree with you (and possibly Feyerabend) regarding what theories are. As far as I understand, you believe that scientific models utilize “concepts” in order to make predictions; these concepts are the primary feature of the model, with predictions being a side-effect (though a very important and useful one, speaking both practically and philosophically). I, on the other hand, would say that it is the concepts that are secondary, and that a scientific model’s main feature is the predictions it makes.
If this is true, then as long as your model makes accurate predictions, you are justified in believing that its concepts are also true. Thus, if your epicycle model allows you to predict the motion of planets with reasonable accuracy, you’re justified in believing that planets move in epicycles. But as soon as some better measurements come along, your predictions will start failing, and you’d be forced to get yourself some new concepts.
In other words, the concepts are not a statement about how the world really, truly works; but only about how it works to the best of your knowledge. Once you get better knowledge, you are forced to get better concepts; and once you do that, you can go back, look at your old concepts, and say, “ok, I can see why I came up with those, because I’d need to know X in order to see that they’re wrong, and we’d just built the X-supercollider last year”.
Thus, I see no philosophical problem with having two scientific models that use different concepts, yet arrive at similar predictions. They are simply two local maxima in our utility function that describes our understanding of the world; and, since we’re not omniscient, neither of them are 100% true. When the maxima are sufficiently close, you can even use a simpler model (f.ex., “the world is flat”) in place of the other (f.ex., “the world is round”), if you’re willing to deal with the marginally increased errors in your predictions (f.ex., lobbing that giant boulder 1cm to the side of its intended target).
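To put a rough number on how small such errors can be: treating a short stretch of the Earth’s surface as a straight line (the chord) instead of an arc of a sphere of radius $R \approx 6371$ km shortens it by roughly $s^3/(24R^2)$, so for a 10 km drive that is about

$$\frac{(10\ \mathrm{km})^3}{24\,(6371\ \mathrm{km})^2} \approx 1\ \mathrm{mm},$$

which is roughly the scale of error involved, and why the flat-Earth “local maximum” is perfectly serviceable for a drive across town and useless for a flight to China.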
Gravity isn’t an abstraction of the (hypothetical) Higgs boson. It’s a property of the particle (or whatever it is; I’m not up on the physics).
Right, but what’s a “particle” ? In reality, there (probably) aren’t any “particles” at all, there are just waves—except that the waves aren’t exactly real, either, and instead there are “amplitude flows”, except those are a model too… and so on. It is still possible that all of these things are just local maxima, and that in reality the world is a giant computer simulation, or something. For now, our models work quite well, but that doesn’t mean that they are somehow directly tied to actual particles (or waves, etc.) that actually exist. Photons don’t care about what’s in our heads.
If you ask Alice the Engineer what scientific theories do, I think she would say that scientific theories “describe the world” and “make predictions.” Without getting into relative importance, I think she’d say that a theory that couldn’t both describe and predict would be a failure of science. If that’s not what she would say today, I’m fairly confident that her counterpart from 1901 would say that.
I think Feyerabend has a devastating critique of the ability of scientific theories to describe. And the difference is huge. If you follow Feyerabend, you can’t say “Light is both a particle and a wave.” The best you can do is say “Our most accurate theory treats light as both a particle and a wave” and forbid the inference that “the world resembles the theory in any rigorous way.”
It seems like your response is to remove “descriptiveness” from the definition of science, then say that Feyerabend doesn’t have any interesting critique of science as properly defined. But your new definition of science is the one that post-modernism says is best. More importantly, you can’t go back to Alice and say “Look, I’ve driven off the post-modernists with no losses” because she’ll respond by asking about science’s ability to describe the world and cite The Simple Truth at you.
If you ask actual practicing scientists (researchers, doctors, engineers, etc.), I assert that they would agree with Alice, if forced to take a position (ignoring for the moment why we’d ever want to force them to think about this theoretical issue). And regardless of the penetration of post-modern theory of science into modern folk philosophy, the overwhelming majority of scientists throughout history have asserted the position I’ve ascribed to Alice.
It seems like your response is to remove “descriptiveness” from the definition of science, then say that Feyerabend doesn’t have any interesting critique of science as properly defined.
My intent wasn’t to remove “descriptiveness”, but to remove both certainty and absolute precision. Thus, instead of saying, “planets move in epicycles”, we can only say, “to the best of our knowledge, planets move in something closely resembling epicycles (but we’re not sure of that, and in reality planets don’t move in neat little epicycles because they’re not perfectly round, etc.)”. This may seem like a minor difference, but IMO the difference is huge: instead of treating the features of your model—the “concepts”—as primary, they are now entirely dependent on your observations.
This is what I was trying to show with my (admittedly flawed) swan analogy. I see no problem with two theories making similar predictions yet explaining them using different models, because in the end it’s the predictions that are important. If you are unable to make measurements that are precise enough to tell one model from another, you might as well go with a simpler model just by using Occam’s Razor. This doesn’t mean that your simpler model must be 100% accurate; it just means that it’s likely to be much closer to the way the world really works than other models.
Thus, there’s no real need to explain why two different theories make similar predictions; the explanation is, “this isn’t a question about theories or true reality, it’s a question about us and our models”, and the answer is, “our model was wrong because we couldn’t make precise enough measurements, but it was still closer to reality than all other models at the time; and BTW, our current model isn’t perfect either, but we think it’s close”.
This approach is different, I think, from your approach of treating the features of the model (the “concepts”) as primary. If you do that, and if you assume that the world must look exactly like your model in order for its predictions to work, then you do have a problem with explaining how more or less correct predictions can arise from incorrect models. But this is a way to look at science that goes too far into the Platonic realm, IMO.
Since I accept theory incommensurability, I don’t believe that closer to reality is a useful thing to say about scientific theories. I’m not even sure what it could mean. Specifically, the statement “precise enough measurements” doesn’t explain or cause me to expect the thing you seem to mean by closer to reality, which sounds to me a lot like what Alice means by “descriptiveness.”
Since I accept theory incommensurability, I don’t believe that closer to reality is a useful thing to say about scientific theories.
I’m confused. Can’t one construct a counterexample?
For consideration, F=ma performs much better under scrutiny than F=ma*e, where e is the number of elephants in the room plus one, even though the latter is usually accurate.
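A toy version of that counterexample, with invented numbers: in elephant-free rooms the two formulas agree exactly, but once the elephant count varies, only one of them keeps tracking the measurements.

```python
# Toy illustration of the elephant counterexample. The "measurements"
# are generated from plain F = m*a; the rival theory multiplies by
# (elephants + 1). All numbers are invented for illustration.

trials = [
    # (mass kg, acceleration m/s^2, elephants in the room)
    (2.0, 3.0, 0),
    (5.0, 1.5, 0),
    (2.0, 3.0, 1),   # the scrutiny case: an elephant is present
]

def f_plain(m, a, elephants):
    return m * a

def f_elephant(m, a, elephants):
    return m * a * (elephants + 1)

for m, a, e in trials:
    measured = m * a   # what a force gauge would read
    print(f"elephants={e}: measured={measured:4.1f}  "
          f"F=ma -> {f_plain(m, a, e):4.1f}  "
          f"F=ma*e -> {f_elephant(m, a, e):4.1f}")
# Both theories are "usually accurate" because rooms usually contain zero
# elephants; only F = ma keeps matching when that stops being true.
```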
What exactly is an epicycle supposed to translate into in a heliocentric theory?
You evaluate both theories in terms of predictive power, and then compare the two.
Ah, I see what you and Feyerabend are doing there: commensurability is supposed to allow some translation between the internal parts of the theories. I don’t see why that should be necessary, or why that would be called ‘commensurability’. Ordinarily, to say 2 things are commensurable merely requires that they are comparable by some common standard.
Since I accept theory incommensurability, I don’t believe that closer to reality is a useful thing to say about scientific theories. I’m not even sure what it could mean. … I’m asserting that “makes better predictions != closer to reality”.
But in your example, Alice the Engineer and her hypothetical scientist friends say that
a theory that couldn’t both describe and predict would be a failure of science.
So, it sounds like you disagree with Alice and the scientists, then ? But if so, are you not removing “descriptiveness” from scientific theories, just as you accused me of doing ?
But perhaps, by “makes better predictions != closer to reality”, you only meant “makes better predictions probably == closer to reality, but not certainly” ? I could agree with that.
I think I could also agree with you that, if one accepts theory incommensurability, then it probably wouldn’t make sense to talk about theories being (probably) closer to reality (assuming it exists). But I don’t accept theory incommensurability, so at best we’re at an impasse.
If, on the other hand, one assumes that there probably exists an external reality that influences our senses in some way, however indirect (and which we can influence in return with our bodies), then IMO commensurabilty follows more or less naturally.
Since our understanding of this reality is not (and can probably never be) perfect, we can treat the sum total of all of our scientific models as a sort of cost function, which measures the projected difference between our models and things as they truly are (thus, our models still describe things, but imperfectly). By carrying out experiments and updating our theories we are trying to minimize this cost function. It’s entirely likely that we’d get stuck in some local minima for a while; hence the theories that make similar predictions but describe reality differently.
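To make the cost-function picture concrete, here is a toy Python sketch (an invented one-parameter “model”, not a claim about any real science): repeatedly nudging the model to reduce its mismatch with the data can settle into a local minimum, a model that is locally unimprovable yet not the closest available fit.
# Toy sketch: cost(w) stands in for the total mismatch between a
# one-parameter model and our experimental results. The function is
# invented so that it has both a global and a local minimum.
def cost(w):
    return (w ** 2 - 1) ** 2 + 0.3 * w
def grad(w, h=1e-6):
    # numerical derivative of the cost
    return (cost(w + h) - cost(w - h)) / (2 * h)
def minimize(w, lr=0.01, steps=5000):
    # crude stand-in for "do experiments, nudge the theory, repeat"
    for _ in range(steps):
        w -= lr * grad(w)
    return w
for start in (-0.5, 0.8):
    w = minimize(start)
    print(f"start={start:+.1f}  ->  w={w:+.3f}, cost={cost(w):+.3f}")
# One starting point reaches the deeper minimum (near w = -1); the other
# settles into the shallower local minimum (near w = +1).
Two settled models can thus both look locally “as good as it gets” while differing in how far they are from the best available fit, which is all I mean by getting stuck for a while.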
I take it you disagree with some of this, so which, if any, of my assumptions do you find objectionable?
Reality probably exists (this seems to be non-controversial)
Reality affects our senses (which are part of it, after all) and we can affect it in turn by moving things around (ditto).
We can create what we think of as models of reality in our heads, however imperfect or wildly incorrect they might be.
Since our models imply predictions, it is possible for us to estimate the difference between our models of reality and the actual thing, by carrying out experiments and comparing the results we get to the expected results.
In trying to minimize this difference, we can get stuck in local minima.
Some other hidden assumption that I have forgotten to list here.
I’m sorry if I wasn’t clear about Alice, who is intended to represent a school of thought in philosophy of science called logical positivism.
I think you were advocating a position similar to her position, especially when you were saying that A Simple Truth was a sufficient theory of what truth is. Further, I agree that the adjustment that Alice should make to her theory is to abandon what I’ve called descriptiveness. Thus, I still think you are closer to Alice than to Feyerabend as long as you think scientific theories get “closer to reality” in some meaningful way.
As I understand it, theory incommensurability should be understood as an empirical theory, much the same way that academic historical theories are empirical theories.
Theories change.
I’m pretty sure Alice agrees.
Some theory changes are radical (i.e. involve incommensurability)
I think this is true, as a historical matter. A geocentric theory (epicycles) was replaced by a heliocentric theory. There’s no reasonable way to translate rotating circles on top of other rotating circles embedded in the sky (epicycles) into anything in the Copernican/Keplerian planets-elliptically-orbit-the-Sun theory. I don’t think Alice rejects this either. I expect she explains that Science became non-empirical for an extended period of time, probably based on influence/co-option by non-empirical entities like the Catholic Church. But when Science was restored to its proper function by the return of empiricism, the geocentric nonsense was flushed away. There was no reason to expect that geocentrism would translate into heliocentrism because geocentrism was not sufficiently based on observation. (I’m not sure if this story is historically correct, but that’s Alice’s problem, not mine).
All significant theory changes were radical theory changes. Alice obviously doesn’t agree. If impetus != momentum, this is evidence in support of this proposition. Likewise, if impetus = momentum, this is evidence against the proposition. If the proposition is true, I think you are right when you say:
if one accepts theory incommensurability, then it probably wouldn’t make sense to talk about theories being (probably) closer to reality.
But I don’t think that requires one to reject the concept of reality.
I think you were advocating a position similar to her position, especially when you were saying that A Simple Truth was a sufficient theory of what truth is.
I don’t think that A Simple Truth advocates a theory per se; I see it as more of a call to reject complex and convoluted philosophical truth theories, in favor of actually doing science (and engineering).
theory incommensurability should be understood as an empirical theory
As far as I understand from your arguments so far, the notions of empiricism and incommensurability are incommensurable.
There’s no reasonable way to translate rotating circles on top of other rotating circles embedded in the sky (epicycles) into anything in the Copernican/Keplerian planets-elliptically-orbit-the-Sun theory.
No, but you could probably go the other way. Given both theories, you could calculate the minimum magnitude of the experimental error required in order for them to become indistinguishable. If your instruments are less precise than that, then you may as well use epicycles (Occam’s Razor aside).
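For what it’s worth, here is a rough sketch of the sort of calculation I mean, as a deliberately over-simplified Python toy: a uniform-circular-motion predictor stands in for the epicycle construction, a Kepler ellipse stands in for the heliocentric one, and the instrument precision is an invented number.
# Toy sketch: how large would the measurement error have to be before two
# theories' predictions of a planet's position angle become indistinguishable?
import math
ECC = 0.093  # roughly Mars-like eccentricity; an illustrative value only
def circular_angle(mean_anomaly):
    # Theory A: the planet sweeps out angle at a uniform rate.
    return mean_anomaly
def kepler_angle(mean_anomaly, e=ECC):
    # Theory B: solve Kepler's equation M = E - e*sin(E) by fixed-point
    # iteration, then convert to the true anomaly (the actual swept angle).
    E = mean_anomaly
    for _ in range(50):
        E = mean_anomaly + e * math.sin(E)
    return 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))
# Largest disagreement between the two predictions over one full orbit.
worst = max(abs(kepler_angle(m) - circular_angle(m))
            for m in (2 * math.pi * k / 1000 for k in range(1000)))
print(f"maximum disagreement: {math.degrees(worst):.1f} degrees")
instrument_error_deg = 1.0  # hypothetical precision of your instruments
print("distinguishable with this instrument:",
      math.degrees(worst) > instrument_error_deg)
If your instruments cannot resolve anything smaller than the maximum disagreement, the two models are observationally interchangeable, which is all the comparison needs; no translation of epicycles into ellipses is involved.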
I expect she explains that Science became non-empirical for an extended period of time, probably based on influence/co-option by non-empirical entities like the Catholic Church.
I don’t think this is accurate, historically speaking. Yes, the influence of the Catholic Church was quite harmful to science, but they didn’t invent geocentrism. In fact, geocentrism is quite empirical. If you’re a sage living in ancient Babylon, you can very easily look up and see the Sun moving around the (flat) Earth. Given the available evidence, you’d be fully justified in concluding that geocentrism is true. You’d be wrong, as we now know, but it’s ok to be wrong sometimes (see what I said earlier about local minima).
All significant theory changes were radical theory changes.
This sounds like a tautology to me.
If impetus != momentum, this is evidence in support of this proposition. Likewise, if impetus = momentum, …
Sorry, I must have missed a sentence: what is the “this” you’re referring to, when you say “this is evidence”? As for impetus and momentum, they’re quite different concepts, so you can’t equate them. Impetus is a sort of élan vital of motion, whereas Newton’s momentum (if I understand it correctly) is just an explanation of how objects move. Either impetus exists (in the same way that élan vital was thought to exist), or it doesn’t; there aren’t any other options. Today, we believe that impetus does not exist, but there’s still a small chance that it does; if we ever discover any evidence of it, we’ll update our beliefs.
But I don’t think that requires one to reject the concept of reality.
As far as I understand from your arguments, you are rejecting the notion that scientific theories describe reality in any way; and, due to your belief in incommensurability, you consider the fact that some theories allow us to develop what seems to be an understanding of the world (*) to be somewhat of a mystery. Does this accurately describe your position? If so, I don’t see what accepting “a concept of reality” would buy you, since reality is (due to incommensurability) unknowable.
I agree with you that, given that incommensurability is true, your position makes sense. But I still don’t see why I should accept that incommensurability is true. From your arguments, it almost sounds like you require scientists to be omniscient: you see any significant mistake in our scientific understanding of the world as an insurmountable barrier to understanding. But I still don’t understand why. All people make mistakes all the time, not just scientists.
At one point, I personally thought that driving from my house to work takes about 30 minutes. But then I found a shortcut through a corporate parking lot, which shaved the time down to about 25 minutes. My two maps of the world were certainly incompatible: one contained the shortcut, the other did not; and the routes were very different. Does this mean that the two maps are incommensurate, and that we must therefore reject the very notion of them describing the actual terrain in any way? Why can’t we just say, “Bugmaster was wrong because he didn’t have enough data”?
(*) Seeing as I’m typing these words using a device powered by our understanding of quantum mechanics, etc.
IIRC, even Feynman refused to say whether electrons, or even the interior of a brick, are real, saying that they are useful concepts in our description of the world and that’s all that matters.
That’s not quite what he was saying. Full quote (emphasis mine):
When I sat with the philosophers I listened to them discuss very seriously a book called Process and Reality by Whitehead. They were using words in a funny way, and I couldn’t quite understand what they were saying. Now I didn’t want to interrupt them in their own conversation and keep asking them to explain something, and on the few occasions that I did, they’d try to explain it to me, but I still didn’t get it. Finally they invited me to come to their seminar.
They had a seminar that was like a class. It had been meeting once a week to discuss a new chapter out of Process and Reality — some guy would give a report on it and then there would be a discussion. I went to this seminar promising myself to keep my mouth shut, reminding myself that I didn’t know anything about the subject, and I was going there just to watch.
What happened there was typical — so typical that it was unbelievable, but true. First of all, I sat there without saying anything, which is almost unbelievable, but also true. A student gave a report on the chapter to be studied that week. In it Whitehead kept using the words ‘essential object’ in a particular technical way that presumably he had defined, but that I didn’t understand.
After some discussion as to what ‘essential object’ meant, the professor leading the seminar said something meant to clarify things and drew something that looked like lightning bolts on the blackboard. ‘Mr. Feynman,’ he said, ‘would you say an electron is an “essential object?”’
Well, now I was in trouble. I admitted that I hadn’t read the book, so I had no idea of what Whitehead meant by the phrase; I had only come to watch. ‘But,’ I said, ‘I’ll try to answer the professor’s question if you will first answer a question from me, so I can have a better idea of what “essential object” means. Is a brick an essential object?’
What I had intended to do was to find out whether they thought theoretical constructs were essential objects. The electron is a theory that we use; it is so useful in understanding the way nature works that we can almost call it real. I wanted to make the idea of a theory clear by analogy. In the case of the brick, my next question was going to be, ‘What about the inside of the brick?’ — and I would then point out that no one has ever seen the inside of a brick. Every time you break the brick, you only see the surface. That the brick has an inside is a simple theory which helps us to understand things better. The theory of electrons is analogous. So I began by asking, ‘Is a brick an essential object?’
Then the answers came out. One man stood up and said, ‘A brick is an individual, specific brick. That is what Whitehead means by an essential object.’ Another man said, ‘No, it isn’t the individual brick that is an essential object; it’s the general character that all bricks have in common — their ‘brickiness’ — that is the essential object.’
Another guy got up and said, ’No, it’s not in the bricks themselves. ‘Essential object’ means the idea in the mind that you get when you think of bricks.’
Another guy got up, and another, and I tell you I have never heard such ingenious different ways of looking at a brick before. And, just like it should in all stories about philosophers, it ended up in complete chaos. In all their previous discussions they hadn’t even asked themselves whether such a simple object as a brick, much less an electron, is an ‘essential object’.
I’m sure many people do intrinsically enjoy science. Nonetheless, the reason society pays for science research is because it leads to being able to make more accurate predictions.
Edited to add: On reflection, I think this is not at all clear. Surely some science funding is so directly motivated, but a lot seems to be more related to signaling.
You are going to have to taboo “dominance”. Understanding something is a lot different from the other members of the “dominance” category. Please explain what you mean to say about understanding without using “dominance”, “oppression”, “force”, “might”, or “western hegemony”.
I’ll concede that the Enlightenment did more to relieve human suffering (or whatever measure you prefer) than the advance of science. (Again, I don’t think this is a majority position in this community.) Will you concede that the Enlightenment’s continued viability is reliant on the increase in wealth it caused, including the increase in wealth from scientific progress?
No, it is an argument against the ideology that endless manipulation/dominance of the material world is purely beneficial.
You don’t need to believe post-modern thought to be an environmentalist, nor does being post-modern guarantee that you are an environmentalist (or hold any other critique of the human application of “scientific” domination of nature). In short, you are overstating the usefulness of post-modern analysis. Economists (whether or not they think Kuhn was saying something useful) already have language for the types of problems you identify with the social application of scientific prediction.
But [flight technologies] have also (along with shipping technologies) been the primary cause of ecological devastation.
This might be a bit of a digression, but I’m going to have to ask for a cite on that. My understanding is that power generation and industry are responsible for the majority of carbon emissions; Wikipedia describes transport fuels (road, rail, air and sea inclusive) as representing about 20% of carbon output and 15% of total greenhouse emissions.
Now, you said “ecological devastation”, not “carbon”, and air and sea transport’s more general ecological footprint is of course harder to measure; but given their fuel-intensive nature I’d expect carbon emissions to represent most of it. There’s also noise pollution, non-greenhouse emissions, bird and propeller strikes, pollution associated with manufacture and dismantling, and the odd oil spill, but although those photos of Chittagong shipbreakers are certainly striking I’d be surprised if all of that put together approached the ecological impact of transport’s CO2 output, never mind representing an additional overhead large enough to dominate humanity’s ecological effects.
Sorry, I was again assuming a common basis of knowledge. Carbon emissions would be environmental damage (damaging to the biosphere as a whole). Ecological damage more commonly refers to damage to ecosystems (smaller communities within the biosphere). When people talk about ecological damage, they are primarily talking about invasive species. Invasive species are plants, animals, bacteria, and fungi that have been artificially transported from one ecosystem to another and have no natural predator within it. Huge portions of American forest are being eradicated as we speak by Asian beetles, plants, etc.
The primary cause of invasive species is trans-Atlantic/trans-Pacific shipping and flights. We try to regulate what gets on and off ships and planes, but it is really, really hard. If you ever take a class in ecology, this fact will probably be beaten into you. I work with an ecologist, so I hear all the time about the devastation of invasive species and the growing frailty of the world’s ecosystems.
Somewhat agree. Science is broken in systematic ways. See the quantum physics sequence.
That statement is a rather bold one to post on a site dedicated to improving human epistemological methods. It doesn’t seem to me that a bit of irrationality should prevent us from doing better; we didn’t even know what we were doing up until now.
EDIT: On 1, did you mean that science as we do it does not match the ideal, or that the ideal does not work as well as is possible? Both are true.
I wouldn’t say 2 is bold at all, really, provided it is taken in a weak form—particularly if we factor out the transhumanist element. Yes, we will never be perfect Bayesian reasoning machines. This doesn’t mean we can’t or shouldn’t do ever better. I’m not sure what reasonably charitable interpretation would be a really bold claim, here… “We’re so far gone we shouldn’t bother trying,” perhaps, but that doesn’t seem to square with this poster’s other posts.
Yeah. I interpreted it closer to “impossible to do better” than “impossible to be perfect”. Looking back, the former is the more charitable interpretation.
I get this distinct feeling of having fallen for the fallacy of gray (can’t be perfect == can’t do better).
Idiomatically speaking, I think you can usually parse “can’t be perfect” as a proxy for “should not aspire to the ideal, even if you accept that it can only be approached asymptotically”.
On 2. I realize that it is a bold statement given the context of this blog. My reason for making it is that I believe taking the paradox of rationality into account would better serve your purposes.
If what you mean by 2 is that we can never be perfect, then yeah, that is a legitimate concern, and one that has been discussed.
I think the big distinction to make is that just because we aren’t and can’t be perfect, doesn’t mean we should not try to do better. See the stuff on humility and the fallacy of gray.
That’s why we call ourselves “aspiring rationalists”, not just “rationalists”. “Rational” is an ideal we measure ourselves against, the way thermodynamic engines are measured against the ideal Carnot cycle.
I also said I think it is the wrong ideal. Not completely. I think the idea of rationality is a good one, but ironically it is not a rational one. Rationality is paradoxical.
Why do you say rationality is not the ideal? Around here we use the term rational as a proxy for “learning the truth and winning at your goals”. I can’t think of much that is more ideal. There are places where you will go off the track if you think that the ideal is to be rational. Maybe that’s what you are referring to?
Now is a good time to taboo “rationality”; explain yourself using whatever “rationality” reduces to so that we don’t get confused. (Like I did above with explaining about winning).
I agree that “learning the truth and winning at your goals” should be the ideal. But I also believe the following
-Humans are symbolic creatures: Meaning that to some extent we exist in self-created realities that do not follow a predictable or always logical order.
-Humans are social creatures, meaning that not only is human survival completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity are dependent on social networks.
Before I continue I would like to know what you and anyone else thinks about these two statements.
I suspect many Less Wrong readers will Agree Denotatively But Object Connotatively to your statements. As Nornagest points out, what you wrote is mostly true with one important caveat (the fact that we are irrational in regular and predictable ways). However, your statements are connotatively troubling because phrases like these are sometimes used to defend and/or signal affiliation with the kind of subjectivism that we strongly dislike.
I’d agree that a lot of our perceptual reality is self-generated—as a glance through this site or the cog-sci or psychology literature will tell you, our thinking is riddled with biases, shaky interpolations, false memories, and various other deviations from an ideal model of the world. But by the same token there are substantial regularities in those deviations; as a matter of fact, working back from those tendencies to find the underlying cognitive principles behind them is a decent summary of what heuristics-and-biases research is all about. So I’d disagree that our perceptual worlds are unpredictable: people’s minds differ, but it’s possible to model both individual minds and minds-in-general pretty well.
As to your second clause, most humans do have substantial social needs, but their extent and nature differs quite a bit between individuals, as a function of culture, context, and personality. This too exhibits regularities.
Humans are symbolic creatures: Meaning that to some extent we exist in self-created realities that do not follow a predictable or always logical order.
I don’t understand. Much of our self-identity is symbolic and imaginary. By self-created reality do you mean that our local reality is heavily influenced by us? That our beliefs filter our experiences somewhat? Or that we literally create our own reality? If it’s the last one, the standard response is this: There is a process that generates predictions and a process that generates experiences, they don’t always match up, so we call the former “beliefs” and the latter “reality”.
See the map and territory sequence. If that’s not what you mean (I hope it is not), make your point.
Humans are social creatures, meaning that not only is human survival completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity are dependent on social networks.
I don’t understand. Much of our self-identity is symbolic and imaginary. By self-created reality do you mean that our local reality is heavily influenced by us? That our beliefs filter our experiences somewhat? Or that we literally create our own reality? If it’s the last one, the standard response is this: There is a process that generates predictions and a process that generates experiences, they don’t always match up, so we call the former “beliefs” and the latter “reality”. See the map and territory sequence. If that’s not what you mean (I hope it is not), make your point.
You have heard of Niche Construction, right? If not, it is the ability of an animal to manipulate its reality to meet its own adaptations. Most animals display some sort of niche construction. Humans are highly advanced architects of niches. In the same way ants build colonies and bees build hives, humans create a type of social hive that is infinitely more complex. The human hive is not built through wax or honey but through symbols and rituals held together by rules and norms. A person living within a human hive cannot escape the necessity of understanding the dynamics of the symbols that hold it together, so that they can most efficiently navigate its chambers.
Keeping that in mind, it stands to reason that all animals must respect the nature of their environment in order to survive. What is unique to humans is that the environments we primarily interact with are socially constructed niches. That is what I mean when I say human reality is self-created.
Earlier I talked about the paradox of rationality. What I meant by that is simply:
-For humans, what is socially beneficial is rationally beneficial, because human survival is dependent on social solidarity.
-What is socially beneficial is not always actually beneficial to the individual or the group.
Thus the paradox of rationality: What is naturally beneficial/harmful is not always aligned with what is socially beneficial/harmful.
Do you think that this is an actual paradox or a problem for rationality? If so, then you’re probably not using the r-word the same way we are. As far as I can tell, your argument is: To obtain social goods (e.g. status) you sometimes have to sacrifice non-social goods (e.g. spending time playing videogames). Nonetheless, you can still perform expected value calculations by deciding how much you value various “social” versus “non-social” goods, so I don’t see how this impinges upon rationality.
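To make that concrete, here is a minimal sketch of such a calculation; every option, probability, and weight below is invented purely for illustration.
# Toy sketch of the expected-value point: "social" and "non-social" goods can
# go into one calculation once you decide how much you value each.
VALUE_OF_STATUS = 3.0      # how much I value one unit of social standing
VALUE_OF_LEISURE = 1.0     # how much I value one unit of leisure
options = {
    # option: (prob. of gaining status, status gained, leisure gained)
    "host the dinner party": (0.8, 2.0, -1.0),
    "stay home and play videogames": (0.0, 0.0, 2.0),
}
def expected_value(p_status, status, leisure):
    return p_status * status * VALUE_OF_STATUS + leisure * VALUE_OF_LEISURE
for name, outcome in options.items():
    print(name, "->", expected_value(*outcome))
# Choosing whichever option scores higher is still ordinary expected-value
# reasoning; nothing about social goods breaks the calculation.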
My argument is that existing socially is not always aligned with what is necessary for natural health/survival/happiness, and yet at the same time is necessary.
We exist in a society where the majority of jobs require us to remain seated and immobile for the better part of the day. That is incredibly unhealthy. It is also bad for intellectual productivity. It is illogical, and yet for a lot of people it is required.
Again, that’s not how we use the word. Being rational does not mean forgoing social goods—quite the opposite, in fact. No one here believes that human beings are inherently good at truth seeking or achieving our goals, but we want to aspire to become better at those things.
Ok, but then I do not understand how eliminating God or theism serves this purpose. I completely agree that there are destructive aspects of both these concepts, but you all seem unwilling to accept that they also play a pivotal social role. That was my original point in relation to the author of this essay. Rather than convincing people that it is ok that there is no God, accept the fact that “God” is an important social institution and begin to work to rewrite “God” rationally.
Can you say more about how you determined that “rewriting God” is a more cost-effective strategy for achieving our goals than convincing people that it is OK that there is no God?
You seem very confident of that, but thus far I’ve only seen you using debate tactics in an attempt to convince others of it, with no discussion of how you came to believe it yourself, or how you’ve tested it in the world. The net effect is that you sound more like you’re engaging in apologetics than in a communal attempt to discern truth.
For my own part, I have no horse in that particular race; I’ve seen both strategies work well, and I’ve seen them both fail. I use them both, depending on who I’m talking to, and both are pretty effective at achieving my goals with the right audience, and they are fairly complementary.
But this discussion thus far has been fairly tediously adversarial, and has tended to get bogged down in semantics and side-issues (a frequent failure mode of apologetics), and I’d like to see less of that. So I encourage shifting the style of discourse.
Any time you feel the urge to say, “Why can’t you see that X?”, it’s usually not that the other person is being deliberately obtuse—most likely it’s that you haven’t explained it as clearly as you thought you had. This is especially true when dealing with others in a community you are new to, or someone new to your community: their expectations and foundations probably differ from what you expect.
I felt the major point of this article, “How to lose an argument,” was that accepting that your beliefs, identity, and personal choices are wrong is psychologically damaging, and that most people will opt to deny wrongness to the bitter end rather than accept it. The author suggests that if you truly want to change people’s opinions and not just boost your own ego, then it is more cost-effective to provide the opposition with an exit that does not result in the individual having to bear the psychological trauma of being wrong.
If you accept the author’s statement that, without the tact to provide the opposition a line of flight, they will emotionally reject your position regardless of its rational basis, then rewriting God is more effective than trying to destroy God for the very same reason.
God is “God” to some people, but to others God is like the American flag: a symbol of family, of home, of identity. The rational all-stars of humanity are competent enough to break down these connotations, thus destroying the symbol of God. But I think by definition all-stars are a minority, and that the majority of people are unable to break symbols without suffering the psychological trauma of wrongness.
rewriting God is more effective than trying to destroy God for the very same reason.
the majority of people are unable to break symbols without suffering the psychological trauma of wrongness.
Yes, this is a statement of your position. Now the question from grandparent was, how did you arrive at it? Why should anyone believe that it is true, rather than the opposite? Show your work.
God is not just a transcendental belief (meaning a belief about the state of the universe or other abstract concepts). God represents a loyalty to a group identity for lots of people, as well as their own identity. To attack God is the same as attacking them. So like I stated before, if you agree with Yvain’s argument (that attacking the identity of the opposition is not as effective in argument as providing them with a social line of flight), then you agree with mine (it would be more effective to find a way to dispel the damage done by the symbol of God rather than destroy it, since many people will be adamantly opposed to its destruction for the sake of self-image). I do not see why I have to go further to prove a point that you all readily accepted when it was Yvain who stated it.
That seems to assume that direct argument is the only way to persuade someone of something. It’s in fact a conspicuously poor way of doing so in cases of strong conviction, as Yvain’s post goes to some trouble to explain, but that doesn’t imply we’re obliged to permanently accept any premises that people have integrated into their identities.
You can’t directly force a crisis of faith; trying tends to further root people in their convictions. But you can build a lot of the groundwork for one by reducing inferential distance, and you can help normalize dissenting opinions to reduce the social cost of abandoning false beliefs. It’s not at all clear to me that this would be a less effective approach than trying to bowdlerize major religions into something less epistemically destructive, and it’s certainly a more honest one—instrumentally important in itself given how well-honed our instincts for political dissembling are—for people that already lack religious conviction.
Your mileage may vary if you’re a strong Deist or something, but most of the people here aren’t.
The methodology is the same. If you accept Yvain’s methodology, then you accept mine. You are right that our purposes and methods are different.
Yvain Wants:
-To destroy the concept of God
-To give people a social retreat for a more efficient transition
-To suggest that the universe can be moral without God, to accomplish this
I Want:
-To rewrite the concept of God
-To give people a social retreat for a more efficient transition—SAME
-To suggest that God can be moral without being a literal conception
The methodology isn’t the same—Yvain’s methodology is to give people a Brand New Thingy that they can latch onto, yours seems to be reinventing the Old Thingy, preserving some of the terminology and narrative that it had. As discussed in his Parable, these are in fact very different. Leaving a line of retreat doesn’t always mean that you have to keep the same concepts from the Old Thingy—in fact, doing so can be very harmful. See also the comments here, especially ata’s comment.
And that is why I disagree with this part of your argument:
if you agree with Yvain’s argument...then you agree with mine
I don’t think anyone here has objected to that part of your methodology, merely to your goal of “rewriting God” and to its effectiveness in relation to the implied supergoal of creating a saner world.
You are assuming that “the majority of people are unable to break symbols without suffering the psychological trauma of wrongness” and thus “rewriting God is more effective than trying to destroy God”.
Eliezer’s argument assumed the uncontroversial premise “Many people think God is the only basis for morality” and encouraged finding a way around that first. Your argument seems to be assuming the premises (1) “The majority of people are unable to part with beliefs that they consider part of their identity” as well as (2) “It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them”. Yvain may have supported (1), but I didn’t see him arguing in favor of (2).
I do not see why I have to go further to prove a point that you all readily accepted when it was Yvain who stated it.
I don’t think anyone is seriously questioning the “leave a line of retreat” part of your argument.
You don’t have to do anything. But if you want people to believe you, you’re going to have to show your work. Ask yourself the fundamental question of rationality.
Eliezer’s argument assumed the uncontroversial premise “Many people think God is the only basis for morality” and encouraged finding a way around that first.
How is this an uncontroversial claim?! What proof have you offered for this claim? It is uncontroversial to you because everyone involved in this conversation (excluding me) has accepted this premise. Ask yourself the fundamental question of rationality.
Your argument seems to be assuming the premises (1) “The majority of people are unable to part with beliefs that they consider part of their identity” as well as (2) “It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them.”
My argument is not that people are unable to part with beliefs, but that (1) it is harder and (2) they don’t want to. People learn their faith from their parents, from their communities. Some people have bad experiences with this, but some do not. To them religion is a part of their childhood and their personal history, both of which are sacred to the self. Why would they want to give that up? They do not have the foresight or education to see the damage done by their beliefs. All they see is you/people like you calling a part of them “vulgar.”
Is that really the rational way to convince someone of something?
How is this an uncontroversial claim?! What proof have you offered for this claim?
Well, it took me about five minutes on Wikipedia to find its pages on theonomy and divine command theory, and most of that was because I got sidetracked into moral theology. I don’t know what your threshold for “many people” is, but that ought to establish that it’s not an obscure opinion within theology or philosophy-of-ethics circles, nor a low-status one within at least the former.
I consider “[m]any people think God is the only basis for morality” to be uncontroversial because I have heard several people express this view, see no reason to believe that they are misrepresenting their thoughts, and see no reason to expect that they are ridiculous outliers. If we substituted “most” for “many” it would be more controversial (and I’m not sure whether or not it would be accurate). If we substituted “all” for many, it would be false.
Don’t use words if you do not know what they mean.
Indeed.
Better yet, don’t criticize someone’s usage of a word unless you know what it means.
At this point, I no longer give significant credence to the proposition that you are making a good-faith effort at truth-seeking, and you are being very rude. I have no further interest in responding to you.
Show me a definition of the word bowdlerize that does not use the word vulgar or a synonym.
If I am being rude, it is because I am frustrated by the double standards of the people I am talking with. I use the word force and I get scolded for trying to taint the conversation with connotations. I will agree that “force” has some negative connotations, but it has positive ones too. In any case it is far more neutral than bowdlerize. And quite frankly I am shocked that I get criticized for pointing out that you clearly do not know what that word means, while you get praised for criticizing me for pointing out what the word actually means.
It is hypocritical to jump down my throat about smuggling connotations into a conversation when your language is even more aggressive.
It is also hypocritical that if I propose that there are people who have faith in religion not because they fear a world without it, the burden of proof is on me; while if the opposition proposes that many people have faith in religion because they fear a world without it, no proof is required.
I once thought the manifest rightness of post-modern thought would convince those naive realists of the truth, if only they were presented with it clearly. It doesn’t work that way, for several reasons:
Many “post-modern” ideas get co-opted into mainstream thought. Once, Legal Realism was a revolutionary critique of legal formalism. Now it’s what every cynical lawyer thinks while driving to work. In this community, it is possible to talk about “norms of the community” both in reference to this community and other communities. At least in part, that’s an effect of the co-option of post-modern ideas like “imagined communities.”
Post-modernism is often intentionally provocative (e.g. broadening the concept of force). Therefore, you shouldn’t be surprised when your provocation actually provokes. Further, you are challenging core beliefs of a community, and should expect push-back. Cf. the controversy in Texas about including discussion of the Spot Resolution in textbooks.
As Kuhn and Feyerabend said, you can’t be a good philosopher of science if you aren’t a good historian of science. You haven’t demonstrated that you have a good grasp of what science believes about itself, as shown in part by your loose language when asserting claims.
Additionally, you are the one challenging the status quo beliefs, so the burden of proof is placed on you. In some abstract sense, that might not be “fair.” Given your use of post-modern analysis, why are you surprised that people respond badly to challenges to the imagined community? This community is engaging with you fairly well, all things considered.
ETA: In case it isn’t clear, I consider myself a post-modernist, at least compared to what seems to be the standard position here at LW.
Really great post! You are completely right on all counts. Except I really am not a post-modernist; I just agree with some of their ideas, especially conceptions of power, as you have pointed out.
I am particularly impressed with bullet point #2, because not only does it show an understanding of the basis of my ideas, but it also accurately points out irrationality in my actions given the theories I assert.
I would then ask you: if you understand this aspect of communities, including your own, would you call this rational? It is no excuse, but I think coming here I was under the impression that equality in burden of proof, and accommodation of norms and standards, would be the norm, because I view these things as rational.
Does it seem rational that one side does not hold the burden of proof? To me it is normal for debate because each side is focused solely on winning. But I would call pure debate a part of rhetoric (“the dark arts”). I thought here people would be more concerned with Truth than winning.
As to your question: I do not think I have made any more extraordinary claims than my opposition. To me, saying that “several people have told someone that they need there to be God because without God the universe would be immoral” is not sufficient evidence to make that claim. I would also suggest that my claims are not extraordinary; they are contradictory to several core beliefs of this community, which makes them unpleasant, not unthinkable.
If someone says X, before asking him to provide some solid evidence that X, you should stick your neck out and say that you yourself believe that non-X.
Otherwise, people might expect that after they do all the legwork of coming up with evidence for X, you’ll just say “well actually I believe X too I was just checking lol”.
You can’t expect people to make efforts for you if you show no signs of reciprocity—by either saying things they find insightful, or proving you did your research, or acknowledging their points, or making good faith attempts to identify and resolve disagreements, etc. If all you do is post rambling walls of texts with typos and dismissive comments and bone-headed defensiveness on every single point, then people just won’t pay attention to you.
Respectfully, if you don’t think post-modernism is an extraordinary claim, you need to spend more time studying the history of ideas. The length of time it took for post-modern thought to develop (even counting from the Renaissance or the Enlightenment) is strong evidence of how unintuitive it is. Even under a very generous definition of post-modernism and a very restrictive start of the intellectual clock, Nietzsche is almost a century after the French Revolution.
my claims are not extraordinary, they are contradictory to several core beliefs of this community.
If your goal is to help us have a more correct philosophy, then the burden is on you to avoid doing things that make it seem like you have other goals (like yanking our chain). I.e. turn the other cheek, don’t nitpick, calm down, take on the “unfair” burden of proof. Consider the relevance of the tone argument.
“several people have told someone that they need there to be God because without God the universe would be immoral” is not sufficient evidence to make that claim.
There are many causes of belief in belief. In particular, religious belief has social causes and moral causes. In the pure case, I suspect that David Koresh believed things because he had moral reasons to want to believe them, and the social ostracism might have been seen as a feature, not a bug.
If one decides to deconvert someone else (perhaps to help the other achieve his goals), it seems like it would matter why there was belief in belief. And that’s just an empirical question. I’ve personally met both kinds of people.
I concede that post-modernism is unintuitive when compared to the history of academic thought, but I would argue that modernism is equally unintuitive to unacademic thought. Do you not agree?
What do we mean by modernism? I think the logical positivists are quite intuitive. What’s a more natural concept from “unacademic” thought than the idea that metaphysics is incoherent? The intuitiveness of the project doesn’t make it right, in my view.
Bowdlerization is normally understood to be the idea of removing offensive content, but this offensiveness doesn’t need to have anything to do with “vulgarity”.
There exist things that are offensive against standards of propriety and taste (the things you call “vulgar”). Then again there exist things which offend against standards of e.g. morality.
You don’t seem to understand that there can exist offensiveness which isn’t about good manners, but about moral content.
Please respond to the following two questions, if you want me to understand the point of disagreement:
Do you understand/agree that I’m saying “offensive content” is a superset of “vulgar content”?
Therefore do you understand/agree that when I say something contains offensive content, I may be saying that it contains vulgar content, but I may also be saying it contains non-vulgar content that’s offensive to particular moral standards?
First, bowdlerizing has always implied removing content, not adding offensive content. Second, the word has evolved over time to mean any removal of content that changes the “moral/emotional” impact of the work, not simply removal of vulgarity.
All they see is you/people like you calling a part of them “vulgar.”
I don’t believe I’ve done this.
“It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them.”
Don’t use words if you do not know what they mean.
The two statements you quoted are not inconsistent, because a bowdlerized theory is not calling the original theory vulgar, in current usage, based on the change in meaning that I identified.
Alice says that she believes in God, and a neutral observer can see that behaving in accordance with this belief is preventing Alice from achieving her goals. Let’s posit that believing in God is not a goal for Alice; it’s just something she happens to believe. For example, Alice thinks God exists but is not religiously observant and does not desire to be observant.
What should Bob do to help Alice achieve her goals? Doesn’t it depend on whether Alice believes in God or believes that “I believe in God” is/should be one of her beliefs?
More generally, what is wrong (from a post-modern point of view) with saying that all moral beliefs are instances of “belief in belief”?
Well, it certainly clarifies the kind of discourse you’re looking for, which I suppose is all I can ask for. Thanks.
There are pieces of this I agree with, pieces I disagree with, and pieces where a considerable amount of work is necessary just to clarify the claim as something I can agree or disagree with.
Personally, I see truth as a virtue and I am against self-deception. If God does not exist, then I desire to believe that God does not exist, social consequences be damned. For this reason, I am very much against “rewriting” false ideas—I’d much prefer to say oops and move on.
Even if you don’t value truth, though, religious beliefs are still far from optimal in terms of being beneficial social institutions. While it’s true that such belief systems have been socially instrumental in the past, that’s not a reason to continue supporting a suboptimal solution. The full argument for this can be found in Yvain’s Parable on Obsolete Ideologies and Spencer Greenberg’s Your Beliefs as a Temple.
Personally, I see truth as a virtue and I am against self-deception. If God does not exist, then I desire to believe that God does not exist, social consequences be damned. For this reason, I am very much against “rewriting” false ideas—I’d much prefer to say oops and move on.
When you call truth a virtue, do you mean in terms of Aristotle’s virtue ethics? If so I definitely agree, but I do not agree with neglecting the social consequences. Take a drug addict for example. If you cut them off cold turkey, the shock to their system could kill them. In some sense the current state of religion is an addiction for many people, perhaps even the majority of people, that weakens them and ultimately damages their future. It is not only beneficial to want to change this; it is rational, seeing as how we are dependent on the social hive that is infected by this sickness. The questions I feel your response fails to address are: Is the disease external to the system? Can it truly be removed (my point about irrationality potentially being a part of the human condition)? What is the proper process of intervention for an ideological addict? Will they really just be able to stop using, or will they need a more incremental withdrawal process?
Along the lines of my assertions against the pure benefit of material transformation, I would argue that force is not always the correct paradigm for solving a problem. Trying to break the symbol of God regardless of the social consequences is, to me, using intellectual/rational force (dominance) to fix something.
The purely rationalist position is a newer adaptation of the might makes right ideology.
You are right that people sometimes need time to adapt their beliefs. That is why the original article kept mentioning that the point was to construct a line of retreat for them; to make it easier on them to realize the truth.
Along the lines of my assertions against the pure benefit of material transformation, I would argue that force is not always the correct paradigm for solving a problem. Trying to break the symbol of God regardless of the social consequences is, to me, using intellectual/rational force (dominance) to fix something.
This is strictly true, but your implication that it is somehow related here is wrong. Intellectual force is what is used in rhetoric. Around here, rhetoric is considered one of the Dark Arts. Rationalists are not the people who are recklessly forcing atheism without regard for consequences. See raising the sanity waterline. Religion is a dead canary and we are trying to pump out the gas, not just hide the canary.
The purely rationalist position is a newer adaptation of the might makes right ideology.
This is just a bullshit flame. If you are going to accuse people of violence, show your work.
You are right that people sometimes need time to adapt their beliefs. That is why the original article kept mentioning that the point was to construct a line of retreat for them; to make it easier on them to realize the truth.
I know! That is what I have been saying from the start. I agree with the idea. My dissent is that I do not think the author’s method truly follows this methodology. I do not think that telling people “it is ok that there is no God, the universe can still be moral” constructs a line of retreat. I think it oversimplifies why people have faith in God.
And just to make sure, are you clear on the difference between a method and a methodology?
Around here, rhetoric is considered one of the Dark Arts. Rationalists are not the people who are recklessly forcing atheism without regard for consequences. See raising the sanity waterline. Religion is a dead canary and we are trying to pump out the gas, not just hide the canary.
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist. Likewise, not seeing the force being used by rationalists is also reductionist. Anyone who wants to destroy/remove something is by definition using force. Religion is not a dead canary; it is a misused tool.
The purely rationalist position is a newer adaptation of the might makes right ideology.
This is just a bullshit flame. If you are going to accuse people of violence, show your work.
No, I am not flaming, at least not by the definition of rationalists on this blog. Fact is intellectual force. Rationalists want to use facts to force people to conform to what they believe. Might is right does not necessarily mean using violence; it just means you believe the stronger force is correct. You believe yourself intellectually stronger than people who believe in a deity, and thus right while they are wrong.
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist. Likewise, not seeing the force being used by rationalists is also reductionist.
Can you elaborate on what you mean by “reductionist”? You seem to be using it as an epithet, and I honestly don’t understand the connection between the way you’re using the word in those two sentences.
On LessWrong we generally draw a distinction between honest, white-hat writing/speaking techniques that make one’s arguments clearer and dishonest techniques that manipulate the reader/listener (“Dark Arts”). Most rhetoric, especially political or religious rhetoric, contains some of the latter.
Rationalists want to use facts to force people to conform to what they believe
Again, this is just not what we’re about. There’s a huge difference between giving people rationality skills so that they are better at drawing conclusions based on their observations and telling them to believe what we believe.
Can you taboo “force”? That might help this discussion move to more fertile ground.
Can you elaborate on what you mean by “reductionist”? You seem to be using it as an epithet, and I honestly don’t understand the connection between the way you’re using the word in those two sentences.
Reductionist generally means you are over-extending an idea beyond its context or that you are omitting too many variables in the discussion of a topic. In this case I mean the latter. To say that rhetoric is simply wrong and that “white-hat writing/speaking” is right is too black and white. It is reductionist. You assume that it is possible to communicate without using what you call “the dark arts.” If you want me to believe that, show your work.
Again, this is just not what we’re about. There’s a huge difference between giving people rationality skills so that they are better at drawing conclusions based on their observations and telling them to believe what we believe.
“Giving people skills” they do not ask for is forcing those skills on them. It is an act of force.
Reductionist generally means you are over-extending an idea beyond its context or that you are omitting too many variables in the discussion of a topic.
I wonder if there is actually a contingent of people who have Boyi’s “overextending/omitting variables” definition as a connotation for “reductionist,” and to what extent this affects how they view reductionist philosophy. It would certainly explain why “reductionist” is sometimes used as a snarl word.
Ok, “generally” was a bad word. I checked out the wiki and the primary definition there is not one I am familiar with. The definition of theoretical reductionism found on the wiki is more related to my use of the term (methodological too). What I call reductionism is trying to create a grand theory (an all-encompassing theory). In sociological literature there is a pretty strong critique of grand theories. If you would like to check me on this, you could look at The Sociological Imagination by C. Wright Mills. The critiques are basically what I listed above. Trying to create a grand theory usually comes at the cost of oversimplifying the system under consideration. That is what I call reductionist.
To say that rhetoric is simply wrong and that “white-hat writing/speaking” is right is too black and white.
I don’t think it’s black and white; there is a continuum between clear communication and manipulation. But beware of the fallacy of gray: just because everything has a tinge of darkness, that doesn’t make it black—some things are very Dark Artsy, others are not. I do think it is possible to communicate without manipulative writing/speaking. Just to pick a random example, Khan Academy videos. In them, the speaker uses a combination of clear language and visuals to communicate facts. He does not use dishonesty, emotional manipulation, or other techniques associated with dark artsy rhetoric to do this.
“Giving people skills” they do not ask for is forcing those skills on them. It is an act of force.
He asked you to taboo “force” to avoid bringing in its connotations. Please resend that thought without using any of “force”, “might”, “violence”, etc. What are you trying to say?
If that is what you mean by force, you coming here and telling us your ideas is “an act of force” too. In fact, by that definition, nearly all communication is “an act of force”. So what? Is there something actually wrong with “giving people ideas or tools they didn’t ask for”?
I’m going to assume that you mean it’s bad to give people ideas they will dislike after the fact, like sending people pictures of gore or child porn. I don’t see how teaching people useful skills to improve their lives is at all on the same level as giving them pictures of gore.
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist. Likewise, not seeing the force being used by rationalists is also reductionist.
You seem to be using reductionism in a different way than I am used to. Please reduce “reductionism” and say what you mean.
Anyone who wants to destroy/remove something is by definition using force. ... Rationalists want to use facts to force people to conform to what they believe. Might is right does not necessarily mean using violence; it just means you believe the stronger force is correct. You believe yourself intellectually stronger than people who believe in a deity, and thus right while they are wrong.
First of all, what I have been trying to say is that, no, rationalists are not interested in “forc[ing] people to conform”. We are interested in improving general epistemology.
I also think you are wrong that using “intellectual force” to force your beliefs on someone is not violence. Using rhetoric is very much violence, not physical, but definitely violence.
Yes we believe ourselves to be more correct and more right than theists, but you seem to be trying to argue “by definition” to sneak in connotations. If there is something wrong with being right, please explain directly without trying to use definitions to relate it to violence. Where does the specific example of believing ourselves more right than theists go wrong?
An honestly rational position might be more appropriately labeled a “right makes might” ideology—though this is somewhat abusing the polysemy of “right” (here meaning “correct”, whereas in the original it means “moral”).
What is the proper process of intervention for an ideological addict? Will they really just be able to stop using, or will they need a more incremental withdrawal process?
Now I haven’t followed the discussion closely, but it seems like you haven’t explained what you actually advocate. Something like the following seems like the obvious way to offer “incremental withdrawal”:
‘Think of the way your parents and your preacher told you to treat other people. If that still seems right to you when you imagine a world without God, or if you feel sad or frightened at the thought of acting differently, then you don’t have to act differently. Your parents don’t automatically become wrong about everything just because they made one mistake. We all do that from time to time.’
As near as I can tell from the comments I’ve seen, you’d prefer that we promote what I call atheistic Christianity. We could try to redefine the word “God” to mean something that really exists (or nothing at all). This approach may have worked in a lot of countries where non-theism enjoys social respect, and where the dangers of religion seem slightly more obvious. It has failed miserably in the US, to judge by our politics. Indeed, I would expect one large group of US Christians to see atheist theology as a foreign criticism/attack on their community.
Humans are symbolic creatures, meaning that to some extent we exist in self-created realities that do not follow a predictable or always logical order.
While our internal models of reality are not always “logical”, I would argue that they are quite predictable (though not perfectly so). Just to make up a random example, I can confidently predict that the number of humans on Earth who believe that the sky is purple with green polka dots is vanishingly small (if not zero).
Not only is human survival completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity are dependent on social networks.
Agreed, but I would argue that there are other factors on which human survival and happiness depend, and that these factors are at least as important as “the ability to maintain coexistence with other people”.
While our internal models of reality are not always “logical”, I would argue that they are quite predictable (though not perfectly so). Just to make up a random example, I can confidently predict that the number of humans on Earth who believe that the sky is purple with green polka dots is vanishingly small (if not zero).
I am not trying to be rude or aggressive here, but I just wanted to point out that your argument is based upon a fairly deceptive rhetorical tactic. The tactic is to casually introduce an example as though it were a run-of-the-mill example, while in fact picking an extreme. You are correct that a person with a normally functioning visual cortex and no significant retina damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created. Why do we stop at stop lights or stop signs? There is nothing inherent in the color red that means stop; in other cultures different colors or symbols signify the same thing. We have arbitrarily chosen red to mean stop.
Some things can be logically predicted given the biological capacity of humans, but it is within the biological capacity of humans to create symbolic meaning. We know this to be a fact, and yet we are unable to as easily predict what it is that people believe, because, unlike the color of the sky, major issues of the social hive are not as empirically clear. Issues about what constitutes life, what is love, what is happiness, what is family are in some cases just as arbitrarily defined as what means stop and what means go, but these questions are of much graver concern.
Just to clarify, it is not that I think there is no way to rationally choose a symbolic narrative, but that initiating a rational narrative involves understanding the processes by which narratives are constructed. That does not mean abandoning rationality, but abandoning the idea of universal rationality. Instead, I believe rationalists should focus more on understanding the irrationality of human interaction, in order to use irrational means to foster better rationality.
You are correct that a person with a normally functioning visual cortex and no significant retina damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created.
Some portion of human experience includes facts like “I don’t fall through the floor when I stand on it” or “I will die if I go outside in a blizzard without any clothes for any length of time.” Some portion of human experience includes facts like “I will be arrested for indecent exposure if I go outside without wearing any clothes for any length of time.”
Facts of the first kind are overwhelmingly more numerous than facts of the second kind. Facts of the second kind are more important to human life. I agree with you that this community underestimates the proportion of facts of the second kind, which are not universalizable the way facts of the first kind are. But you weaken the case for post-modern analysis by asserting that anything close to a majority of facts are socially determined.
Facts of the first kind are overwhelmingly more numerous than facts of the second kind. Facts of the second kind are more important to human life. I agree with you that this community underestimates the proportion of facts of the second kind, which are not universalizable the way facts of the first kind are. But you weaken the case for post-modern analysis by asserting that anything close to a majority of facts are socially determined.
I was never trying to argue that the majority of facts are socially determined. I was arguing that the majority of facts important to human happiness and survival are socially determined. I agree that facts of the first kind are more numerous, but as you say facts of the second kind are more important. Is it logical to measure value by size?
Fair enough. I respectfully suggest that your language was loose.
For example:
a large portion of human existence is socially created.
Consider the difference between saying that and saying “a large portion of human decisions are socially created, even if they appear to be universalizable. A much larger proportion than people realize.”
You are correct that a person with a normally functioning visual cortex and no significant retina damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created. Why do we stop at stop lights or stop signs?
My example wasn’t meant to be a strawman, but simply an illustration of my point that human thoughts and behaviors are predictable. You may argue that our decision to pick red for stop signs is arbitrary (I disagree even with this, but that’s beside the point), but we can still predict with a high degree of certainty that an overwhelming majority of drivers will stop at a stop sign—despite the fact that stop signs are a social construct. And if there existed a society somewhere on Earth where the stop signs were yellow and rectangular, we could confidently predict that drivers from that nation would have a higher chance of getting into an accident while visiting the U.S. Thus, I would argue that even seemingly arbitrary social constructs still result in predictable behaviors.
but it is within the biological capacity of humans to create symbolic meaning
I’m not sure what this means.
and yet we are unable to as easily predict what it is that people believe … Issues about what constitutes life, what is love, what is happiness, what is family are in some cases just as arbitrarily defined as what means stop and what means go
I am fairly certain I personally can predict what an average American believes regarding these topics (and I can do so more accurately by demographic). I’m just a lowly software engineer, though; I’m sure that sociologists and anthropologists could perform much better than me. Again, “arbitrary” is not the same as “unpredictable”.
...but these questions are of much graver concern.
I don’t know, are they ? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
Instead, I believe rationalists should focus more on understanding the irrationality of human interaction, in order to use irrational means to foster better rationality.
I don’t think that you could brainwash or trick someone into being rational (since your means undermine your goal); and besides, such heavy-handed “Dark Arts” are, IMO, borderline unethical. In any case, I don’t see how you can get from “you should persuade people to be rational by any means necessary” to your original thesis, which I understood to be “rationality is unattainable”.
My example wasn’t meant to be a strawman, but simply an illustration of my point that human thoughts and behaviors are predictable.
I did not say your example was a strawman; my point was that it was reductionist. Determining the general color of the sky, or whether or not things will fall, involves predicting human thoughts and behaviors many degrees simpler than the ones I am talking about. That is like saying that because multiplication is easy, math must be easy.
I am fairly certain I personally can predict what an average American believes regarding these topics
Well you are wrong about that. No competent sociologist or anthropologist would make a claim to be able to do what you are suggesting.
I don’t know, are they? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
You can make fun of my diction all you want, but I think it is pretty obvious that love, morality, life, and happiness are of the utmost concern (grave concern) to people.
I don’t know, are they? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
I would subsume the concern of food stock under the larger concern of life, but I think it is interesting that you bring up crop yield. This is a perfect example of the ideology of progress I have been discussing in other responses. There is no questioning of whether it is dangerous or rational to try to continuously improve crop yield; it is just blindly seen as right (i.e. as progress).
However, if we look at both the good and the bad of the green revolution of the 70s-80s, the practices currently being implemented to increase crop yield are borderline ecocide. They are incredibly dangerous, yet we continue to attempt to refine them further and further, ignoring the risks in light of the further potential to transform material reality to our will.
IMO, borderline unethical. In any case, I don’t see how you can get from “you should persuade people to be rational by any means necessary” to your original thesis, which I understood to be “rationality is unattainable”.
The ethical issues at question are interesting because they are centered around the old debate over collectivist vs. individualist morality. Since the Cold War, America has been heavily indoctrinated in an ideology in which free will (individual autonomy) is a key aspect of morality. I question this idea. As many authors on this site point out, a large portion of human action, thought, and emotion is subconsciously created. Schools, corporations, governments, even parents consciously or unconsciously take advantage of this fact to condition people into ideal types. Is this ethical? If you believe that individual autonomy is essential to morality, then no, it is not. However, while I am not a total advocate of Foucault and his ideas, I do agree that autonomous causation is a lot less significant than the individualist wants to believe. Rather than judging the morality of an action by the autonomy it provides for the agents involved, I tend to be more of a pragmatist. If we socially engineer people to develop the habits and cognitions they would have if they were more individually rational, then I see this as justified. The problem with this idea is who watches the watchmen. By what standard do you judge the elite that would have to produce mass habit and cognition? Is it even possible to control that and maintain a rational course through it?
This I do not know, which is why I am hesitant to act on this idea. But I do think that there is a mass of indoctrinated people who do not think about the fact that what they believe is a social reality.
I did not say your example was a strawman; my point was that it was reductionist. Determining the general color of the sky, or whether or not things will fall, involves predicting human thoughts and behaviors many degrees simpler than the ones I am talking about.
Agreed, but you appeared to be saying that human thoughts and actions are entirely unpredictable, not merely poorly predictable. I disagree. For example, you brought up the topic of “what is love, what is happiness, what is family”:
Well you are wrong about that. No competent sociologist or anthropologist would make a claim to be able to do what you are suggesting.
Why not ? Here are my predictions:
The average American thinks that love is a mysterious yet important feeling—perhaps the most important feeling in the world, and that this feeling is non-physical in the dualistic sense. Many, though not all, think that it is a gift from a supernatural deity, as long as it’s shared between a man and a woman (though a growing minority challenge this claim).
Most Americans believe that happiness is an entity similar to love, and that there’s a distinction between short-term happiness that comes from fulfilling your immediate desires, and long-term happiness that comes from fulfilling a plan for your life; most, again, believe that the plan was laid out by a deity.
Most Americans would define “my family” as “everyone related to me by blood or marriage”, though most would add a caveat something like, “up to N steps of separation”, with N being somewhere between 2 and 6.
Ok, so those are pretty vague, and may not be entirely accurate (I’m not an anthropologist, after all), but I think they are generally not too bad. You could argue with some of the details, but note that virtually zero people believe that “family” means “a kind of pickled fruit”, or anything of that sort. So, while human thoughts on these topics are not perfectly predictable, they’re still predictable.
You can make fun of my diction all you want,
I was not making fun of your diction at all, I apologize if I gave that impression.
but I think it is pretty obvious that love, morality, life, and happiness are of the utmost concern (grave concern) to people.
First of all, you just made an attempt at predicting human thoughts—i.e., what’s important to people. When I claimed to be able to do the same, you said I was wrong, so what’s up with that ? Secondly, I agree with you that most people would say that these topics are of great concern to them; however, I would argue that, despite what people think, there are other topics which are at least as important (as per my earlier post).
...the practices currently being implemented to increase crop yield are borderline ecocide. They are incredibly dangerous, yet we continue to attempt to refine them further and further, ignoring the risks...
Again, that’s an argument against a particular application of a specific technology, not an argument against science as a discipline, or even against technology as a whole. I agree with you that monocultures and wholesale ecological destruction are terrible things, and that we should be more careful with the environment, but I still believe that feeding people is a good thing. Our top choices are not between technology and nothing, but between poorly-applied technology and well-applied technology.
Since the Cold War, America has been heavily indoctrinated in an ideology in which free will (individual autonomy) is a key aspect of morality. I question this idea.
Ok, first of all, “individual autonomy” is a concept that predates the Cold War by a huge margin. Secondly, I have some disagreements with the rest of your points regarding “collectivist vs. individualist morality”; we can discuss them if you want, but I think they are tangential to our main discussion of science and technology, so let’s stick to the topic for now. However, if you do advocate “collectivist morality” and “socially engineer[ing] people”, would this not constitute an application of technology (in this case, social technology) on a grand scale ? I thought you were against that sort of thing ? You say you’re “hesitant”, but why don’t you reject this approach outright ?
BTW:
But I do think that there is a mass of indoctrinated people who do not think about the fact that what they believe is a social reality.
This is yet another prediction about people’s thoughts that you are making. This would again imply that people’s thoughts are somewhat predictable, just like I said.
Have you read Structure of Scientific Revolutions? Many of us have and find it very interesting. But even if you apply post-modern methods to the scientific process, you still need to explain why science can predict which planes will fly and which will not.
Yes! Thomas Kuhn is a brilliant writer and his theory is powerful. But let me ask you what you think he is saying in that book? I am asking because I feel that we draw different conclusions from it.
The post-modern question to science is not about whether or not science can predict reality. The question is whether or not science is produced scientifically. Or to put it another way, can science be separated from power and discourse?
No. Obviously not. (This is not the majority position in this community).
I would hope that a scientist familiar with post-modern thought would agree that producing knowledge scientifically means nothing more and nothing less than getting better at predicting reality.
My take on Kuhn? The incommensurability of scientific theories (e.g. Aristotelian physics vs. Newtonian physics) is a real thing, but it does not imply scientific nihilism because there are phenomena. Thus, science is possible because there is “regularity” (not sure what the technical word is) when observing reality.
interesting, can you explain your reasoning?
is that the thing where from one theory the other one looks bogus and you can’t get from one to the other? Seems to me that it doesn’t imply nihilism because using the full power of your current mind, one model looks better than the other. it might be the same as EYs take on the problem of induction here.
Yes, incommensurability is the problem of translating from one theory into a later theory.
If different scientists are looking at a different reality, how on earth did we keep making better predictions? Thus the appeal to the regularity of phenomena, which rescues the concept of scientific progress even if we think that our model is likely to be considered utter nonsense a generation or so into the future.
ETA: The social position of science is an expansion of the halo effect point I made.
You mean, can Bayes structure work without mapping it onto the social domain? Yes.
RETRACTED: If science works, as in predicts reality, why are any other questions even relevant?
Because scientists do not naturally limit themselves to that domain. Scientists routinely do things like abusing the halo effect
Good point. I knew that point would bite me. I’m going to edit.
That’s exactly the problem that was noted by grandparent.
If science were just determined by “power and discourse”, it would be surprising if you could use it to make planes fly.
Look I am not trying to disagree with the scientific method. It is incredibly powerful and beneficial methodology for producing knowledge. What I am saying is
1 - that as an institution and a belief-system, “science” does not live up to the scientific method. 2 - that it is impossible to do so, given what we have learned about the human condition.
I’m not sure what it would mean for science to “live up” to the scientific method. The scientific method is, well, a method; it’s not an ideology.
Sure, scientists are humans with power and discourse and all kinds of cognitive biases, and thus they don’t practice the scientific method with absolute perfection. And yes, I bet that there are quite a few traditions and institutions within the scientific community that could be improved. But, even with all its imperfections, science has been devastatingly effective as far as “belief systems” are concerned. As was said upthread, science actually predicts which planes will fly and which will fall; so far, no other methodology has been able to even come close.
You are measuring success by material transformation of the world. By that standard, sure science is more successful, but how do you justify such a standard?
I have heard this example of planes flying several times. In response I want to ask: has flight made humans happier and safer? (Note: this question is a case example of the larger question, “does material transformation and dominance to the extent offered by modern science improve the quality of human life?”)
Sure there are examples of how flight technologies have made humans wealthier and more powerful. But they have also (along with shipping technologies) been the primary cause of ecological devastation. They have also birthed new forms of warfare that make killing an even more remote and apathetic process. I am by no means trying to say that flight was a bad thing. I am not a Luddite. I am not against technological innovation. My point is to question why the goods of technological advancement are used as justification of further expansion of human capacity to transform the material world, while the damages are ignored. To me that is like promoting all the benefits of cigarettes, while leaving out the damages they do. What I am trying to question in the majority of my posts is the assumption that a greater capacity to dominate material reality equals a greater benefit to humanity when every major innovation produces equal if not greater damages.
I would argue that our ability to “materially transform the world” (which is material) is a direct consequence of our ability to acquire progressively more accurate models of the world.
Yes. Do you disagree ? I am somewhat surprised by your question, because the answer seems obvious, but I could be wrong. Still, you say,
So… it sounds like you agree, maybe ?
This is, at best, an argument against technology, but not against science.
… you conveniently do not address some of the examples I provide of the negatives of flight. I am not against either technology or science in moderation, a moderation which I do not think exists in the current state of things.
No, it is an argument against the ideology that endless manipulation/dominance of the material world is purely beneficial. Science is as much an attempt to dominate/manipulate reality as technological development.
Oh, I agree that there are negatives, I just think that the positives outweigh them. I can defend my position, but first, let’s clear up this next point:
I’m not sure I understand what you mean by “dominate/manipulate”. As I see it, science is an attempt to understand reality, and technology is an attempt to manipulate it. Do you have different definitions of “science” and “technology” in mind ? Obviously, a certain amount of technology is required in order for science to progress—microscopes and telescopes don’t pop out of thin air ex nihilo—but I think the distinction I’m making is still valid.
Science doesn’t motivate itself. The social purpose of learning to make better predictions (science) is to be better at controlling the environment.
The fact that we can control an environment doesn’t imply that we should control it that way, and Boyi seems to be conflating those points. But that doesn’t change the social purpose of science.
ETA: Understanding reality is what science says it does. But from a functional point of view, it is irrelevant whether the model is “true” because all that matters is whether the model makes accurate predictions.
I agree with what dlthomas said. In fact, most scientists I know are pursuing science out of intrinsic interest, like he said—though that’s just my personal experience, which may not be representative.
What’s your definition of “true”, then, besides “makes accurate predictions” ?
I did say that I was doing a functional analysis. The social purpose of labeling a scientific statement as true is to differentiate statements that are useful in making accurate predictions from those that are not useful for making predictions. Also, see my response to dlthomas.
If we stop using functional analysis, the question of truth remains. Personally, I have a lot of trouble coming to a satisfying conclusion about the concept, because I think the hypothesis of the incommensurability of scientific theories is strongly supported by the evidence. Notwithstanding that incommensurability, I think that the ability of science to make accurate predictions is based on the regularity of phenomena. I wrote this earlier, which is a slightly more detailed version of the same point.
I may be exposing my ignorance here, but I don’t understand what you mean by a “social purpose”. The purpose you describe sounds like an entirely pragmatic purpose to me; i.e., it’s the one that makes sense if you want to discover more about the world—but perhaps this is also what you meant ?
I read that comment, and I disagree with its premise: “It’s like Aristotle [and Newton] wasn’t looking at the same reality”. Both Newton and Aristotle (ok, Aristotle not as much) explain not only their conclusions, but the evidence and reasoning they used to arrive at these conclusions, and it’s rather obvious why they made the mistakes they made… it’s because they were, in fact, looking at the same reality we now inhabit. You’d make the same mistakes too, today, if you knew nothing of modern science but tried to figure out how the world worked.
Furthermore, Newton wasn’t even all that terribly wrong (again, Aristotle was a ways off). If I want to predict the orbit of our Moon with a reasonable degree of certainty, or if I simply want to lob a rock across the top of an enemy fortress’s walls with my trebuchet, I don’t need relativity.
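To put a rough number on “I don’t need relativity” (the 30 m/s projectile speed here is just a figure I’m picking for illustration), the relativistic correction to a trebuchet shot is

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{v^2}{2c^2} \approx 1 + \frac{(30\ \mathrm{m/s})^2}{2\,(3\times 10^{8}\ \mathrm{m/s})^2} \approx 1 + 5\times 10^{-15},$$

a discrepancy that the wind, the rope, and my aim would swamp by a dozen orders of magnitude.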
You make it sound as though the “regularity of phenomena” is some kind of a trick that people invented so they could keep getting tenure, or something. I, on the other hand, would claim that it’s simply the most parsimonious assumption, given our observations.
It’s not a big deal. I was trying to be precise to avoid the appearance of a naive claim like “purpose is an objective property of things,” which is clearly false. Purpose is only meaningful as a reference to something, and I’m referencing society.
The Aristotle / Newton comparison is meant to be evidence for the hypothesis of incommensurability of scientific theories. If it doesn’t convince you, then I regret that I’m not a good enough historian of science to present additional evidence. (For example, the issues about phlogiston do not seem like compelling evidence for the theory to me, although experts in Philosophy of Science apparently disagree). The only other point in favor of incommensurability of scientific theories is something like “It’s awfully lucky that scientific theories are commensurable, because theories of everything that are not scientific (i.e. moral theories) are definitely incommensurable.”
Anyway, disbelieving the scientific incommensurability hypothesis (SIH) means that the point about phenomena is not all that interesting or insightful. But if you believe the SIH, then scientific nihilism (i.e. there is no objective reality at all) is very tempting. Yet scientific nihilism must be rejected because science keeps making accurate predictions. Not only that, the predictions keep getting better (e.g., once we didn’t know how to build computers; now we do).
So even if we reject the idea of accurate scientific models based on the SIH, we still are committed to some sort of regularity, because otherwise accurate prediction is extremely unlikely. That’s phenomena. Sort of the middle ground between scientific nihilism and a belief in the accuracy of scientific models.
Ah, yes, agreed.
I think I might be misunderstanding what the word “incommensurability” means. I thought that it meant, “the performance of theory A cannot be compared with the performance of theory B”, but in the case of Aristotle/Newton/Einstein, we can definitely rank the performance (in the order I listed, in fact). Aristotle’s Laws of Motion are more or less (ok, closer to the “less” side perhaps, but still) useful, as long as you’re dealing with solid objects on Earth. Their predictive power isn’t great, but it’s not zero. Newton’s Laws are much more powerful, and relativity is so powerful that it’s overkill in many cases (f.ex. if you’re trying to accurately lob a rock with a trebuchet). Each set of laws was devised to explain the best evidence that was available at the time; I see nothing incommensurate about that. But, again, it’s possible that I’m using the word incorrectly.
I am not convinced that they are. In fact—again, assuming I’m using the word correctly—how can theories be incommensurable and yet falsifiable ? And if a theory is not falsifiable, it’s not very useful, IMO (nor is it a theory, technically).
As I use incommensurability, I mean that the basic concepts in one theory cannot be made to correspond with the basic concepts of another theory.
At bottom, Aristotelian physics says that what needs to be explained is motion. In contrast, Newtonian physics says that what needs to be explained is acceleration. I assert that there is no way to import principles for explaining motion into a theory that exists to explain acceleration. In other words, Aristotelian physics is not a simpler and more naive form of Newtonian physics. You can produce a post-hoc explanation of the differences like your invocation of the limits of observable evidence (but see this discussion). I find post-hoc explanation unsatisfying because scientists talk as if they can ex ante predict (1) what sorts of new evidence science needs to improve and (2) what the “revolutionary” new theories will look like. And yet that doesn’t seem to be true historically.
There is some unfortunate equivocation in the word theory (“Theory of Gravity” vs. “Utilitarianism: A Moral theory”). But something like Freudian thought is unified(-ish) and coherent(-ish). What is wrong with referencing “Freudian theory”? That doesn’t reject Popper’s assertion that Freudian thought isn’t a scientific theory (because Freudian thought isn’t falsifiable). On falsifiability more generally, I’m not sure what it means to ask whether utilitarianism (or any moral theory) is falsifiable.
What about “V = a * t” ? That said, AFAIK “at bottom” Newton didn’t really want to explain acceleration, or motion, or any abstract concept like that; he wanted to know why the planets appear at certain places in the sky at certain times, and not others—but he could pinpoint the position of a planet much better than Aristotle could.
And I think we can, in fact, correspond Newtonian concepts to Aristotelian ones, if only by pointing out which parts Aristotle missed—which would allow us to map one theory to the other. For example, we (or Archimedes, even) could talk about density and displacement, and use it to explain the parts that Aristotle got right (most rocks sink in water) as well as the parts he got wrong (actually some porous rocks can float).
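To spell out the kind of mapping I have in mind (this is my own sketch, using the standard buoyancy condition that Aristotle lacked):

$$\text{an object floats} \iff \rho_{\text{object}} < \rho_{\text{fluid}}, \qquad \rho_{\text{object}} = \frac{m}{V_{\text{total, pores included}}}.$$

A porous rock like pumice traps enough air that its overall density falls below that of water, so it floats even though its solid material alone would sink; “some rocks float” stops being an anomaly and becomes a footnote to the better theory.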
Nothing really, it’s just that most people around here, AFAIK, mean something like “a scientific, falsifiable, well-tested theory” when they use the word.
If it’s unfalsifiable, what good is it ? Isn’t that the same as saying, “it has no explanatory power” and “it lacks any application to anything” ?
I see utilitarianism as more of a recipe (or an algorithm) than a theory, so it doesn’t need to be falsifiable per se.
For theories to be commensurate, you need to be able to move all the interesting insights of each theory into the other and still have the same insight. Sure, Aristotle and Newton seemed to agree on the definition of velocity and acceleration. But there’s no way to state “An object in motion will tend to stay in motion” as a conclusion of Aristotelian physics because the caveats Aristotle would want to insert would totally change the meaning.
(As an aside, I’m making a point about the theories, not the scientists. Boyi might find Newton’s motivation interesting, but I’m trying to limit the focus to the theories themselves).
The point about moral “theory” is sufficiently distinct that I hope you’ll forgive my desire to move it elsewhere just to make this conversation easier to follow.
In this case, I don’t think I fully understand what you mean by “insights” being “the same”. Any two scientific theories will make different models of reality, by definition; if they didn’t, they’d be the same theory. So, if you go the extreme route, you could say that all theories are incommensurate by definition, but this interpretation would be trivial, since it’d be the same as saying, “different theories are different”.
I agree that there’s “no way to state ‘An object in motion will tend to stay in motion’ as a conclusion of Aristotelian physics”, but that’s because Aristotelian physics is less correct than Newtonian mechanics. But there is a way to partially map Newtonian mechanics to Aristotelian physics, by restricting our observations to a very specific set of circumstances (relatively heavy objects, an atmosphere, the surface of Greece, etc.). Similarly, we can map relativity to Newtonian mechanics (relatively heavy objects, slow speeds, etc.). It seems odd to say that these theories are totally incommensurate, while still being able to perform this kind of mapping.
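As a concrete sketch of that mapping (nothing original here, just the standard low-speed expansion): for $v \ll c$, the relativistic kinetic energy reduces to the Newtonian expression plus corrections that are negligible at everyday speeds,

$$E_k = (\gamma - 1)\,mc^2 = mc^2\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right) = \frac{1}{2}mv^2 + \frac{3}{8}\frac{mv^4}{c^2} + \cdots$$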
In fact, we perform this kind of reduction every day, even in practical settings. When I want to drive from point A to point B, Google Maps tells me that the Earth is flat, and I implicitly believe that the Earth is flat. But if I want to fly to China, I have to discard this assumption and go with the round-Earth model. I see nothing philosophically troubling about that—why use an expensive scalpel when a cheap mallet works just as well ?
I was trying to make a point that scientific theories are not just about moving abstract concepts around; their whole purpose is to make predictions about our observations. This is what differentiates them from pure philosophy, and this is also what makes it possible to compare one theory to another and rank them according to correctness and predictive power—because we have an external standard by which to judge them.
Yeah, that’s a good move, no objections here.
But not too heavy...
Haha, yes, very important detail, that :-)
I can’t write it better than Feyerabend. My argument about Aristotelian and Newtonian physics is a paraphrase of section 5 of his argument, starting at pg. 94, and ending at about 101.
ETA: And I looked at it again and it’s missing 95-96, where some of the definitions are. If there’s interest, I’ll type it up, because I think it addresses the criticisms fairly well.
Ok, I have to admit that I haven’t read the entire book, but only skimmed the section you mentioned—because my time is limited, but also because, in its infinite wisdom, Google decided to exclude some of the pages.
Still, I can see that Feyerabend is talking about the same things you’re talking about; but I can’t see why those things matter. Yes, Aristotle had a very different model of the physical world than Newton; and yes, you can’t somehow “plug in” Aristotelian physics into Newtonian mechanics and expect it to work. I agree with Feyerabend there. But you could still go the other way: you can use Newtonian mechanics, as well as what we know of Aristotle’s environment, to explain why Aristotle got the results he did, and thus derive a very limited subset of the world in which Aristotle’s physics sort of works. This does not entail rewriting the entirety of Newtonian mechanics in terms of Aristotelian physics or vice versa, because Aristotle was flat out wrong about some things (a lot of things, actually). Feyerabend seems to believe that this makes the two theories incommensurate, but, as I said above, by that standard the word “incommensurate” becomes synonymous with “different”, which is not informative. I think that Feyerabend’s standards are simply too high.
I was also rather puzzled by something that Feyerabend says on page 98, toward the bottom. He says that “impetus” and “momentum” would give you the same value mathematically, and yet we can’t treat them as equivalent, because they rest on different assumptions. They give you the same answer, though ! Isn’t this what science is all about, answers ?
Let me illustrate my point in a more flowery way. Let’s say that Aristotle, Newton, and Einstein all went to a country fair together, and entered the same block-pushing contest. The contestant randomly picks a stone block out of a huge pile of blocks of different sizes, and then a tireless slave will push the block down a lane (the slave is well-trained and always pushes the block with the same force). The contestant’s job is to predict how far the block will slide before coming to rest. The contestant will win some amount of money based on how close his prediction was to the actual distance that the block traveled.
As far as I understand, Feyerabend is saying either that a) Aristotle would win less money than Newton, who would win less than Einstein, but we have no idea why; or b) we can’t know ahead of time who will win more money. Both options look disingenuous to me, but it’s quite likely that I am misinterpreting Feyerabend’s position. What do you think ?
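(Just to make my hypothetical concrete, here is a rough sketch of the kind of Newtonian prediction I have in mind; the push force, run-up distance, and friction coefficient are all numbers I’m inventing for illustration.)

```python
# A rough Newtonian model of the block-pushing contest (illustrative numbers only).
# Model: the slave pushes with constant force F over a fixed run-up distance s,
# then lets go; kinetic friction brings the block to rest.

G = 9.81  # gravitational acceleration, m/s^2

def sliding_distance(mass_kg, push_force_n=500.0, push_distance_m=2.0, friction_mu=0.4):
    """Predicted distance (in metres) the block slides after the slave lets go."""
    friction_force = friction_mu * mass_kg * G
    kinetic_energy_at_release = (push_force_n - friction_force) * push_distance_m
    if kinetic_energy_at_release <= 0:
        return 0.0  # the block is too heavy for this push to get it moving at all
    # Friction dissipates that kinetic energy over the sliding distance d:
    #   kinetic_energy_at_release = friction_force * d
    return kinetic_energy_at_release / friction_force

for mass in (20, 50, 100):
    print(f"{mass:>3} kg block -> predicted slide of {sliding_distance(mass):.2f} m")
```

Einstein’s corrections to these numbers would be immeasurably small, which is rather the point: this contest can only separate Aristotle from the other two.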
If we imagine a test given by an Aristotelian physicist, defining impetus with the Newtonian definition of momentum would get no points (and vice versa). Feyerabend says
In other words, impetus is meant to explain, while momentum is something to be explained. The point is that it’s very odd that two theories on the same subject disagree about what explains and what needs to be explained. (Imagine if one scientist proposed that cold caused ice, and the next generation of scientist proposed that ice caused cold, while making more accurate predictions). In the same way that impetus is a primary explanation for Aristotle, force is a primary explanation for Newton. And impetus and force are nothing alike. The assertion is that this type of difference is more than saying that Newton had better data than Aristotle.
In your hypothetical, I think that Feyerabend says something like (a). Perhaps “Aristotle would win less money than Newton, who would win less than Einstein, but the naive scientific method cannot explain why.” For some perspective, Feyerabend is opposing Ernest Nagel and logical positivism, which asserts that empirical statements are true by virtue of their correspondence with reality. If you believe Newtonian physics, the causal explanation “Impetus” doesn’t correspond with any real thing (because momentum does not explain, but is to be explained). You could bite the bullet and accept that impetus is a false concept. But if you do that, then a theory based on lots of false concepts makes predictions in the block-push contest that do substantially better than chance. How can a false theory do that?
If that’s what Feyerabend is saying, then he’s confusing the map for the territory:
That would indeed be odd, but as I understand it, both theories are trying to explain why objects (such as stone blocks or planets) behave the way they do. Both “impetus” and “momentum” are features of the explanatory model that the scientist is putting together. Aristotle believed (according to my understanding of Feyerabend) that “impetus” was a real entity that we could reach out and touch, somehow; Newton simply used “momentum” as a shorthand for a bunch of math, and made no claim about its physical or spiritual existence. As it turns out, “impetus” (probably) does not have an independent existence, so Aristotle was wrong, but he could still make decent predictions, because the impetus’s existence or lack thereof actually had no bearing on his calculations—as long as he stuck to calculating the motion of planets or rocks. In the end, it’s all about the rocks.
What is the “naive scientific method”, in this case ? How is it different from the regular kind ?
No, you can’t, since the existence of impetus as an independent entity is unfalsifiable (if I understand it correctly). The best you can do is say, “this impetus thing might exist or it might not, but we have no evidence that it does, so I’m going to pretend that it doesn’t until some evidence shows up, which it never will, since the concept is unfalsifiable”. Aristotle probably would not have said that, so that’s another thing he got wrong.
The statements “ice causes cold” or “cold causes ice” are both falsifiable, I think, in which case the “ice causes cold” theory would make less accurate predictions. It might fail to account for different freezing temperatures of different materials, or for the fact that the temperature of a liquid will not decrease beyond a certain point until the entire volume of the liquid has frozen, etc.
I think that Feyerabend is mostly talking about maps, not territory. I shouldn’t have said naive scientific method, because naive is unnecessarily snarky and I’m talking about a different basic philosophy of scientists than the scientific method. The basic “truth theory” of science is that we make models and by adding additional data, we can make more accurate models. But in some sense, the basic theory says that all models are “true.”
That leaves the obvious question of how to define truth. “Makes accurate predictions” is one definition, but I think most scientists think that their models “describe” reality. The logical positivists tried to formalize this by saying that models (and statements in general) were true if they “corresponded” with reality. Note that this is different from falsifiability, which is basically a formal way of saying “stick your neck out.” (i.e. the insight that if your theory can explain any occurrence, then it really can’t explain anything) The Earth suddenly reversing the direction of its orbit would falsify impetus, momentum, relativity, and just about everything else human science knows or has ever thought it knew, but that doesn’t tell us what is true.
For the logical positivist, when one says that “impetus does not have an independent existence” that means “impetus is false.” There is some weirdness in a “false” theory making accurate predictions. To push on the map/territory metaphor slightly, if Columbus, Magellan, and Drake all came back with different maps of the world but all clearly got to the same places, we would be justified in thinking that there was something weird going on. Yet if you adopt the logical positivist definition of truth, that seems to be exactly what is happening. At the very least, the lesson is that we should be skeptical of the basic theory’s explanation of what models are.
I really don’t think so. Let’s pretend that my theory says that lighter objects always fall slower than heavier ones, whereas your theory says that all objects always fall at the same rate. Logically speaking, only one of those theories could be true, seeing as they state exactly opposite things.
In addition, if I believe that the Moon is made out of green cheese, and so does everyone else; and then we get to the Moon and find a bunch of rocks but no cheese—then my theory was false. I could make my green cheese theory as internally consistent as I wanted, but it’d still be false, because the actual external Moon is made of rocks, whereas the theory says it’s made of cheese.
I prefer my truth to be simple...
What’s the difference ?
Well, no, but it would tell us that lots of things we thought were true are probably false. In order to figure out what’s likely to be true, we’d have to construct a bunch of new models, and test them. I don’t see this as a problem; and in fact, this happens all the time—see the orbit of Mercury, for example.
I wouldn’t say that “impetus is false” (at least, not in the way that you mean), because it’s actually worse than false—it’s irrelevant. There’s no experiment you can run, in principle, that will tell you whether “m*v” is caused by impetus or invisible gnomes. And if you can’t ever tell the difference, then why bother believing in impetus (as an actual, non-metaphorical entity) or gnomes (ditto) ? Aristotle may not have been aware of anything like Occam’s Razor (I don’t know whether he was or not), but that’s ok. Aristotle was wrong. Scientists are allowed to be wrong, that’s what science is all about (though Aristotle wasn’t technically a scientist, and that’s ok too).
I don’t see why you’d make the logical leap from “These three explorers had different maps but got to the same place”, directly to, “we must abandon the very idea of representing territory schematically on a piece of vellum”, especially when you know that explorers who rely on maps tend to get lost a lot less often than explorers who just wing it. Instead of abandoning all maps altogether, maybe you should figure out what piece of information the explorers were missing, so that you could make better maps in the future.
Is it really your position that no experiment can tell whether something is a cause or an effect? That sounds like an assertion that the statement “gravity is a cause of motion, not an effect” is not meaningful.
I’d like truth to be simple. For practical purposes, it is simple. But “simple” truth doesn’t stand up to rigorous examination, in much the same way that a “simple” definition of infinity doesn’t work.
Sorry, no, that wasn’t what I meant. As far as I understand—and my understanding might be incorrect—Aristotle believed that moving objects are imbued with this substance called “impetus”, which, according to Aristotle, is what imparts motion to these objects. He could calculate the magnitude of impetus as “m*v”, but he also proposed that impetus (which, according to Aristotle, does exist) is undetectable by any material means, other than the motion of the objects.
In a way, we can imagine two possible universes:
Universe 1: Impetus imparts motion to objects but is otherwise undetectable; we can estimate its effects as “m*v”.
Universe 2: There’s no such thing as “impetus”, though m*v is a useful feature of our model.
Is there any way to tell, in principle, whether you are currently living in Universe 1 or Universe 2 ? If the answer is “no”, then it doesn’t matter whether impetus is a cause or an effect, because it is utterly irrelevant.
Contrast this with your “ice causes cold vs cold causes ice” scenario. In this case, ice and cold are both physically measurable, and we can devise a series of experiments to discover which causes which (or whether some other model is closer to the truth).
I would argue that if your rigorous examination cannot explain your simple, useful, and demonstrably effective notion of truth, then the problem is with your examination, not your notion of truth.
What is a “simple” definition of infinity, and how does it differ from the regular kind ? As far as I understand, infinity is a useful mathematical concept that does not directly translate into any scientific model, but, as usual, I could be wrong.
I don’t think an Aristotelian physicist would say that impetus is “otherwise undetectable” any more than a modern physicist would say “gravity causes objects to move, but is otherwise undetectable.”
There are lots of statements that we desire to assign a truth value to that are much more complicated than the number of sheep in the meadow. Kant described a metaphysical model that was not susceptible to empirical verification (that’s a feature of metaphysical models generally). When we say the model is true (or false), what do we mean? If you want to abandon metaphysics, then what does it mean to say something like “qualia have property X” is true?
Is it your position that all truths are “scientific” truths? Does that mean that non-empirical assertions can’t be labelled true (or false)?
I mentioned infinities as an example of an unintuitive truth, in order to argue by analogy that the intuitiveness of EY’s “definition” of truth does not show that the definition is complete. Folk mathematics would assert something like “All infinities are the same size” and that’s just not true.
Fair enough, but then, how would an Aristotelian physicist propose to detect impetus, if not by observing the motion of objects ? I’m pretty sure I’m missing the answer to this part, so I genuinely want to know.
The modern physicist doesn’t have to answer this question, because he treats gravity as a useful abstraction in his model. The Aristotelian physicist, on the other hand, believes that impetus is a real thing that actually exists and is causing objects to move. And if the answer is, “you can only detect impetus by observing the motion of objects and using the formula m*v”, then it becomes trivially easy to answer your original question, “how can you explain the fact that Aristotelian physics and Newtonian mechanics make the same predictions despite being so different”. The answer then becomes, “because both of them describe the motion of objects in the same way, one of them just has this extra bit that doesn’t really change much”. As I said though, I may be missing a piece of the puzzle.
I personally think that qualia, along with free will, are philosophical red herrings, so I’m not terribly interested in their properties. That sounds like a topic for a separate argument, though...
I would say that statements such as “2+2=4” and “if all men are mortal, and Socrates is a man, then Socrates is mortal” are either true by definition, or derive logically from statements that are true by definition. There’s nothing wrong with that, obviously, but scientific truth is a bit different, since in science, you are not free to pick any axioms you want—instead, the physical universe does that for you.
That said, I’m not sure how your question relates to our main topic: the incommensurability of scientific truths, specifically.
A while ago, I said to Boyi that the best of post-modern thought gets co-opted into more mainstream thought. If you think gravity is only a useful abstraction, not “a real thing that actually exists and is causing objects to move,” then you are already much, much closer to Feyerabend than to the logical positivists. As a sociological fact, I assert that most scientists (especially in the “hard” sciences) take a position closer to “gravity is a real thing” than “gravity is a useful abstraction” (if not for gravity in particular, than for whatever fundamental explanatory objects they assert).
The incommensurability of scientific models (I shouldn’t have said truths) is the assertion that an earlier scientific model is not necessarily a simpler version of a later scientific model. I’ve made the best case that I can about Aristotle vs. Newton. The lesson is to be suspicious of the “truth” of scientific models. Because I think most scientists want to say something stronger about the model than “makes more accurate predictions.”
Isn’t that the whole point of (for example) the search for the Higgs Boson ? Gravity is an abstraction, and we’re trying to refine the abstraction by discovering what is causing the real phenomenon that we observe. Of course, that discovery will not represent the world as it really, truly is, either; but at least it’ll be a bit closer than just “GMm / r^2”. I think there’s a big difference between the scientific concept of an abstraction, which refers to a simplified and incomplete model of reality; and the post-modern concept, which treats every abstraction as just another narrative that is socially constructed and does not relate to any external phenomena.
If this is all you’re saying, then I can fully endorse this statement—but then, as I said before, it basically boils down to saying, “some earlier scientific models were pretty much wrong”. This statement is true, but not very interesting.
Like what ? Isn’t that the entire point of the model, to make predictions ?
Let me use another analogy. At one point, people believed that all swans were white; in fact, the very term “black swan” is an idiom meaning “something that is completely unexpected, contradicts most of what we know, and is likely disastrous”. Of course, today we know that black swans do exist.
So, let’s say that I, having never seen a black swan, believe that all swans are white. You believe that some swans are black. Our two models of the world are incommensurate; logically, only one of them can be true. And yet, if I have seen plenty of white swans, but never a black one, I’d be perfectly justified in believing that my model is (probably) true (until you show me some evidence to the contrary). Do you think this means that we should be “suspicious” of the entire notion of predicting the color of the next swan one might come across ?
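(To be concrete about “perfectly justified”: here is a toy Bayesian version of the swan example; the prior and the hypothetical black-swan frequency are made up purely for illustration.)

```python
# Toy Bayesian update for the swan example (all numbers invented for illustration).
# H_white: every swan is white.
# H_black: some swans are black; assume 5% of swans are black under this hypothesis.

def posterior_all_white(n_white_observed, prior_all_white=0.5, black_fraction=0.05):
    """P(H_white | saw n white swans in a row), assuming independent sightings."""
    likelihood_white = 1.0                                  # H_white predicts only white swans
    likelihood_black = (1.0 - black_fraction) ** n_white_observed
    evidence = prior_all_white * likelihood_white + (1 - prior_all_white) * likelihood_black
    return prior_all_white * likelihood_white / evidence

for n in (0, 10, 100):
    print(f"after {n:>3} white swans: P(all swans are white) = {posterior_all_white(n):.3f}")
# A single observed black swan sends this posterior straight to zero, of course.
```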
You are conflating theory conflict with theory commensurability. The fact that theories make different predictions does not prove that the theories are incommensurable. For example, the white-swan theory predicted that there were no black swans, while the black-swan theory predicted that some black swans existed. But both theories mean the SAME thing by swan, so they are commensurable theories.
In addition, making similar predictions does not mean that theories are commensurable. I think there was a time when epicycle theory and heliocentric theory made similar predictions of planetary motion. Notwithstanding this agreement, there is no way to translate the concepts of Ptolemaic astronomy into heliocentric astronomy, which is what I mean when I say incommensurable.
In reading this discussion of Feyerabend, it seems like I’m defending a position that Feyerabend did not actually endorse. As you say:
According to that discussion, Feyerabend is fully post-modern as you describe it. (This was the position Boyi was articulating, and I think we agree that it has trouble explaining the success of science). I’m trying to defend a philosophy of science that “treats every abstraction as a narrative that is socially constructed, but does (somehow) relate to external phenomena.”
Eliezer’s essay that you linked implies that one purpose of science is to know true facts about the world. Gravity isn’t an abstraction of the (hypothetical) Higgs boson. It’s a property of the particle (or whatever it is; I’m not up on the physics). I’m articulating a position in which we don’t know certain kinds of facts (i.e. models do not “correspond” to reality), but are nonetheless able to make accurate predictions.
Technically, you’re right; my swan example wasn’t fully analogous. I could still argue that one theory meant “swan” to be “a bird that is exclusively white”, whereas the other theory allows swans to be white or black, and thus the two theories do mean different things by the word “swan”… but I don’t know if you or Feyerabend would agree; nor do I think that it’s terribly important.
What’s more important is that I disagree with you (and possibly Feyerabend) regarding what theories are. As far as I understand, you believe that scientific models utilize “concepts” in order to make predictions; these concepts are the primary feature of the model, with predictions being a side-effect (though a very important and useful one, speaking both practically and philosophically). I, on the other hand, would say that it is the concepts that are secondary, and that a scientific model’s main feature is the predictions it makes.
If this is true, then as long as your model makes accurate predictions, you are justified in believing that its concepts are also true. Thus, if your epicycle model allows you to predict the motion of planets with reasonable accuracy, you’re justified in believing that planets move in epicycles. But as soon as some better measurements come along, your predictions will start failing, and you’d be forced to get yourself some new concepts.
In other words, the concepts are not a statement about how the world really, truly works; but only about how it works to the best of your knowledge. Once you get better knowledge, you are forced to get better concepts; and once you do that, you can go back, look at your old concepts, and say, “ok, I can see why I came up with those, because I’d need to know X in order to see that they’re wrong, and we’d just built the X-supercollider last year”.
Thus, I see no philosophical problem with having two scientific models that use different concepts, yet arrive at similar predictions. They are simply two local maxima in our utility function that describes our understanding of the world; and, since we’re not omniscient, neither of them are 100% true. When the maxima are sufficiently close, you can even use a simpler model (f.ex., “the world is flat”) in place of the other (f.ex., “the world is round”), if you’re willing to deal with the marginally increased errors in your predictions (f.ex., lobbing that giant boulder 1cm to the side of its intended target).
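(To illustrate the “marginally increased errors” with made-up coordinates: a flat-map distance calculation next to the great-circle one. The short trip barely notices the difference; the long one is wildly wrong under the flat model, which is exactly when you have to switch maxima.)

```python
# Flat-Earth (local plane) vs. round-Earth (great-circle) distances.
# Coordinates are invented for illustration: a short drive vs. a trans-Pacific flight.
import math

R_EARTH_KM = 6371.0

def flat_distance_km(lat1, lon1, lat2, lon2):
    """Treat the Earth as a flat plane (fine over a few kilometres)."""
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * R_EARTH_KM
    dy = math.radians(lat2 - lat1) * R_EARTH_KM
    return math.hypot(dx, dy)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine formula on a spherical Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_EARTH_KM * math.asin(math.sqrt(a))

trips = {
    "short drive": (37.77, -122.42, 37.86, -122.42),
    "trans-Pacific flight": (37.77, -122.42, 31.23, 121.47),
}
for name, (a1, o1, a2, o2) in trips.items():
    print(f"{name}: flat model {flat_distance_km(a1, o1, a2, o2):.1f} km, "
          f"round model {great_circle_km(a1, o1, a2, o2):.1f} km")
```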
Right, but what’s a “particle”? In reality, there (probably) aren’t any “particles” at all, there are just waves—except that the waves aren’t exactly real, either, and instead there are “amplitude flows”, except those are a model too… and so on. It is still possible that all of these things are just local maxima, and that in reality the world is a giant computer simulation, or something. For now, our models work quite well, but that doesn’t mean that they are somehow directly tied to actual particles (or waves, etc.) that actually exist. Photons don’t care about what’s in our heads.
If you ask Alice the Engineer what scientific theories do, I think she would say that scientific theories “describe the world” and “make predictions.” Without getting into relative importance, I think she’d say that a theory that couldn’t both describe and predict would be a failure of science. If that’s not what she would say today, I’m fairly confident that her counterpart from 1901 would say that.
I think Feyerabend has a devastating critique of the ability of scientific theories to describe. And the difference is huge. If you follow Feyerabend, you can’t say “Light is both a particle and a wave.” The best you can do is say “Our most accurate theory treats light as both a particle and a wave” and forbid the inference that “the world resembles the theory in any rigorous way.”
It seems like your response is to remove “descriptiveness” from the definition of science, then say that Feyerabend doesn’t have any interesting critique of science as properly defined. But your new definition of science is the one that post-modernism says is best. More importantly, you can’t go back to Alice and say “Look, I’ve driven off the post-modernists with no losses” because she’ll respond by asking about science’s ability to describe the world and cite The Simple Truth at you.
If you ask actual practicing scientists (researchers, doctors, engineers, etc), I assert that they would agree with Alice, if forced to take a position (ignoring for the moment why we’d ever want to force them to think about this theoretical issue). And regardless of the penetration of post-modern theory of science into modern folk philosophy, the overwhelming majority of scientists throughout history have asserted the position I’ve ascribed to Alice.
My intent wasn’t to remove “descriptiveness”, but to remove both certainty and absolute precision. Thus, instead of saying, “planets move in epicycles”, we can only say, “to the best of our knowledge, planets move in something closely resembling epicycles (but we’re not sure of that, and in reality planets don’t move in neat little epicycles because they’re not perfectly round, etc.)”. This may seem like a minor difference, but IMO the difference is huge: instead of treating the features of your model—the “concepts”—as primary, they are now entirely dependent on your observations.
This is what I was trying to show with my (admittedly flawed) swan analogy. I see no problem with two theories making similar predictions yet explaining them using different models, because in the end it’s the predictions that are important. If you are unable to make measurements that are precise enough to tell one model from another, you might as well go with the simpler model just by using Occam’s Razor. This doesn’t mean that your simpler model must be 100% accurate; it just means that it’s likely to be much closer to the way the world really works than other models.
Thus, there’s no real need to explain why two different theories make similar predictions; the explanation is, “this isn’t a question about theories or true reality, it’s a question about us and our models”, and the answer is, “our model was wrong because we couldn’t make precise enough measurements, but it was still closer to reality than all other models at the time; and BTW, our current model isn’t perfect either, but we think it’s close”.
This approach is different, I think, from your approach of treating the features of the model (the “concepts”) as primary. If you do that, and if you assume that the world must look exactly like your model in order for its predictions to work, then you do have a problem with explaining how more or less correct predictions can arise from incorrect models. But this is a way to look at science that goes too far into the Platonic realm, IMO.
Since I accept theory incommensurability, I don’t believe that “closer to reality” is a useful thing to say about scientific theories. I’m not even sure what it could mean. Specifically, the statement “precise enough measurements” doesn’t explain or cause me to expect the thing you seem to mean by closer to reality, which sounds to me a lot like what Alice means by “descriptiveness.”
I’m confused. Can’t one construct a counterexample?
For consideration: F = ma performs much worse under scrutiny than F = ma*e, where e is the number of elephants in the room plus one, even though the latter is usually accurate. I’m asserting that “makes better predictions” != “closer to reality”.
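As a minimal sketch of this point (Python; the function names and numbers are invented for illustration, not taken from the thread): with no elephants in the room the two laws are numerically identical, so everyday observations cannot distinguish them, and only a deliberately elephant-laden experiment does.

```python
def force_newton(m, a):
    """Newton's second law: F = m * a."""
    return m * a

def force_elephant(m, a, elephants=0):
    """The rival law: F = m * a * (elephants + 1)."""
    return m * a * (elephants + 1)

# With no elephants in the room the two laws agree exactly, so ordinary
# observations cannot tell them apart...
assert force_newton(2.0, 3.0) == force_elephant(2.0, 3.0, elephants=0)

# ...but a single measurement taken with elephants present separates them.
print(force_newton(2.0, 3.0))                 # 6.0
print(force_elephant(2.0, 3.0, elephants=4))  # 30.0
```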
F = ma(elephants + 1) clearly makes worse predictions. That’s a good and sufficient reason to reject it.
A longer explanation of what I think is at stake is here.
If you can reject it because it makes worse predictions, doesn’t that make the theories commensurable, regardless of how they relate to reality?
Not at all. What exactly is an epicycle supposed to translate into in a heliocentric theory?
You evaluate both theories in terms of predictive power, and then compare the two.
Ah, I see what you and Feyerabend are doing there: commensurability is supposed to allow some translation between the internal parts of the theories. I don’t see why that should be necessary, or why that would be called ‘commensurability’. Ordinarily, to say 2 things are commensurable merely requires that they are comparable by some common standard.
Ok I am kind of confused now. At first, you say:
But in your example, Alice the Engineer and her hypothetical scientist friends say that
So, it sounds like you disagree with Alice and the scientists, then? But if so, are you not removing “descriptiveness” from scientific theories, just as you accused me of doing?
But perhaps, by “makes better predictions != closer to reality”, you only meant “makes better predictions probably == closer to reality, but not certainly”? I could agree with that.
I think I could also agree with you that, if one accepts theory incommensurability, then it probably wouldn’t make sense to talk about theories being (probably) closer to reality (assuming it exists). But I don’t accept theory incommensurability, so at best we’re at an impasse.
If, on the other hand, one assumes that there probably exists an external reality that influences our senses in some way, however indirect (and which we can influence in return with our bodies), then IMO commensurability follows more or less naturally.
Since our understanding of this reality is not (and can probably never be) perfect, we can treat the sum total of all of our scientific models as a sort of cost function, which measures the projected difference between our models and things as they truly are (thus, our models still describe things, but imperfectly). By carrying out experiments and updating our theories we are trying to minimize this cost function. It’s entirely likely that we’d get stuck in some local minima for a while; hence the theories that make similar predictions but describe reality differently.
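A toy sketch of the “local minima” picture (Python; the cost function and starting points are invented purely for illustration): two different starting points settle into two different minima of the same prediction-error landscape, much like two theories that each fit the data about as well while describing the world differently.

```python
import numpy as np

# Toy, non-convex "cost" standing in for the total prediction error of a
# model parameterised by a single number theta.  The shape is made up.
def cost(theta):
    return np.sin(3 * theta) + 0.1 * theta**2

def gradient_descent(theta, lr=0.01, steps=2000, eps=1e-6):
    """Naive finite-difference gradient descent; it settles into whichever
    basin the starting point happens to lie in."""
    for _ in range(steps):
        grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

# Two different starting "theories" end up in two different local minima
# with comparable (but not identical) costs.
for start in (-2.0, 2.0):
    theta = gradient_descent(start)
    print(f"start={start:+.1f} -> theta={theta:+.3f}, cost={cost(theta):.3f}")
```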
I take it you disagree with some of this, so which, if any, of my assumptions do you find objectionable?
Reality probably exists (this seems to be non-controversial)
Reality affects our senses (which are part of it, after all) and we can affect it in turn by moving things around (ditto).
We can create what we think of as models of reality in our heads, however imperfect or wildly incorrect they might be.
Since our models imply predictions, it is possible for us to estimate the difference between our models of reality and the actual thing, by carrying out experiments and comparing the results we get to the expected results.
In trying to minimize this difference, we can get stuck in local minima.
Some other hidden assumption that I have forgotten to list here.
I’m sorry if I wasn’t clear about Alice, who is intended to represent a school of thought in philosophy of science called logical positivism.
I think you were advocating a position similar to her position, especially when you were saying that The Simple Truth was a sufficient theory of what truth is. Further, I agree that the adjustment that Alice should make to her theory is to abandon what I’ve called descriptiveness. Thus, I still think you are closer to Alice than to Feyerabend as long as you think scientific theories get “closer to reality” in some meaningful way.
As I understand it, theory incommensurability should be understood as an empirical theory, much the same way that academic historical theories are empirical theories.
Theories change.
I’m pretty sure Alice agrees.
Some theory changes are radical (i.e. involve incommensurability)
I think this is true, as a historical matter. A geocentric theory (epicycles) was replaced by a heliocentric theory. There’s no reasonable way to translate rotating circles on top of other rotating circles embedded in the sky (epicycles) into anything in the Copernican/Keplerian planets-elliptically-orbit-the-Sun theory.
I don’t think Alice rejects this either. I expect she explains that Science became non-empirical for an extended period of time, probably based on influence/co-option by non-empirical entities like the Catholic Church. But when Science was restored to its proper function by the return of empiricism, the geocentric nonsense was flushed away. There was no reason to expect that geocentrism would translate into heliocentrism because geocentrism was not sufficiently based on observation. (I’m not sure if this story is historically correct, but that’s Alice’s problem, not mine).
All significant theory changes were radical theory changes.
Alice obviously doesn’t agree. If impetus != momentum, this is evidence in support of this proposition. Likewise, if impetus = momentum, this is evidence against the proposition. If the proposition is true, I think you are right when you say:
But I don’t think that requires one to reject the concept of reality.
I don’t think that The Simple Truth advocates a theory per se; I see it as more of a call to reject complex and convoluted philosophical truth theories, in favor of actually doing science (and engineering).
As far as I understand from your arguments so far, the notions of empiricism and incommensurability are incommensurable.
No, but you could probably go the other way. Given both theories, you could calculate the minimum magnitude of the experimental error required in order for them to become indistinguishable. If your instruments are less precise than that, then you may as well use epicycles (Occam’s Razor aside).
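Here is a minimal sketch of that calculation (Python; the two “theories” and the instrument precision are invented stand-ins, not real astronomical models): take the largest gap between the rival predictions over the planned observations and compare it to the measurement error.

```python
import numpy as np

def indistinguishability_threshold(predict_a, predict_b, inputs):
    """Largest gap between the two theories' predictions over the planned
    observations.  If the measurement error is comparable to or larger than
    this, the data cannot tell the theories apart."""
    return np.abs(predict_a(inputs) - predict_b(inputs)).max()

# Hypothetical stand-ins for two rival theories of the same observable.
theory_a = lambda t: 10.0 * np.sin(0.50 * t)
theory_b = lambda t: 10.0 * np.sin(0.51 * t)

times = np.linspace(0.0, 20.0, 200)
threshold = indistinguishability_threshold(theory_a, theory_b, times)
instrument_error = 2.5  # assumed 1-sigma precision of the instrument

print(f"max prediction gap: {threshold:.2f}")
if instrument_error >= threshold:
    print("instruments too coarse: prefer the simpler theory (Occam)")
else:
    print("instruments can discriminate: run the experiment")
```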
I don’t think this is accurate, historically speaking. Yes, the influence of the Catholic Church was quite harmful to science, but they didn’t invent geocentrism. In fact, geocentrism is quite empirical. If you’re a sage living in ancient Babylon, you can very easily look up and see the Sun moving around the (flat) Earth. Given the available evidence, you’d be fully justified in concluding that geocentrism is true. You’d be wrong, as we now know, but it’s ok to be wrong sometimes (see what I said earlier about local minima).
This sounds like a tautology to me.
Sorry, I must have missed a sentence: what is the “this” you’re referring to, when you say “this is evidence”? As for impetus and momentum, they’re quite different concepts, so you can’t equate them. Impetus is a sort of elan vital of motion, whereas Newton’s momentum (if I understand it correctly) is just an explanation of how objects move. Either impetus exists (in the same way that elan vital was thought to exist), or it doesn’t; there aren’t any other options. Today, we believe that impetus does not exist, but there’s still a small chance that it does; if we ever discover any evidence of it, we’ll update our beliefs.
As far as I understand from your arguments, you are rejecting the notion that scientific theories describe reality in any way; and, due to your belief in incommensurability, you consider the fact that some theories allow us to develop what seems to be an understanding of the world (*) to be somewhat of a mystery. Does this accurately describe your position? If so, I don’t see what accepting “a concept of reality” would buy you, since reality is (due to incommensurability) unknowable.
I agree with you that, given that incommensurability is true, your position makes sense. But I still don’t see why I should accept that incommensurability is true. From your arguments, it almost sounds like you require scientists to be omniscient: you see any significant mistake in our scientific understanding of the world as an insurmountable barrier to understanding. But I still don’t understand why. All people make mistakes all the time, not just scientists.
At one point, I personally thought that driving from my house to work takes about 30 minutes. But then I found a shortcut through a corporate parking lot, which shaved the time down to about 25 minutes. My two maps of the world were certainly incompatible: one contained the shortcut, the other did not; and the routes were very different. Does this mean that the two maps are incommensurate, and that we must therefore reject the very notion of them describing the actual terrain in any way? Why can’t we just say, “Bugmaster was wrong because he didn’t have enough data”?
(*) Seeing as I’m typing these words using a device powered by our understanding of quantum mechanics, etc.
IIRC, even Feynman refused to answer whether electrons, or even the interior of a brick, are real, saying that they are useful concepts in our description of the world and that’s all that matters.
That’s not quite what he was saying. Full quote (emphasis mine):
That is similar to my take on this.
Some people do seem to pursue science out of intrinsic interest, however.
I’m sure many people do intrinsically enjoy science. Nonetheless, the reason society pays for science research is because it leads to being able to make more accurate predictions.
I think that’s pretty clearly the case, yes.
Edited to add: On reflection, I think this is not at all clear. Surely some science funding is so directly motivated, but a lot seems to be more related to signaling.
In what sense is understanding something not an act of dominance?
* Sorry I forgot the “not” the first time.
You are going to have to taboo “dominance”. Understanding something is a lot different from the other members of the “dominance” category. Please explain what you mean to say about understanding without using “dominance”, “oppression”, “force”, “might”, or “western hegemony”.
Usually not at all. If you dominate someone they have to do the work of understanding you.
I agree with nyan_sandwich: please explain what you mean without using that word, because I’m fairly sure we have different definitions of it.
In the sense that this seems almost entirely backward. I usually expect acts of dominance in the form of not understanding.
That’s what I was going to ask you! Edit: I posted that before you added the crucial “NOT”. See my other comment.
I’ll concede that the Enlightenment did more to relieve human suffering (or whatever measure you prefer) than the advance of science. <Again, I don’t think this is a majority position in this community.>
Will you concede that the Enlightenment’s continued viability is reliant on the increase in wealth it caused, including the increase in wealth from scientific progress?
You don’t need to believe post-modern thought to be an environmentalist. Nor does being post-modern guarantee that you are an environmentalist (or hold any other critique of the human application of “scientific” domination of nature).
In short, you are overstating the usefulness of post-modern analysis. Economists (whether or not they think Kuhn was saying something useful) already have language for the types of problems you identify with the social application of scientific prediction.
This might be a bit of a digression, but I’m going to have to ask for a cite on that. My understanding is that power generation and industry are responsible for the majority of carbon emissions; Wikipedia describes transport fuels (road, rail, air and sea inclusive) as representing about 20% of carbon output and 15% of total greenhouse emissions.
Now, you said “ecological devastation”, not “carbon”, and air and sea transport’s more general ecological footprint is of course harder to measure; but given their fuel-intensive nature I’d expect carbon emissions to represent most of it. There’s also noise pollution, non-greenhouse emissions, bird and propeller strikes, pollution associated with manufacture and dismantling, and the odd oil spill, but although those photos of Chittagong shipbreakers are certainly striking I’d be surprised if all of that put together approached the ecological impact of transport’s CO2 output, never mind representing an additional overhead large enough to dominate humanity’s ecological effects.
Sorry, I was again assuming a common basis of knowledge. Carbon emissions would be environmental damage (damaging to the biosphere as a whole). Ecological damage more commonly refers to damage to ecosystems (smaller communities within the biosphere). When people talk about ecological damage they are primarily talking about invasive species. Invasive species are plants, animals, bacteria, and fungi that have been artificially transported from one ecosystem to another and have no natural predator within it. Huge portions of American forest are being eradicated as we speak by Asian beetles, plants, etc.
The primary cause of invasive species is trans-Atlantic/trans-Pacific shipping and flights. We try to regulate what gets on and off ships and boats, but it is really, really hard. If you ever take a class in ecology this fact will probably be beaten into you. I work with an ecologist, so I hear all the time about the devastation of invasive species and the growing frailty of the world’s ecosystems.
Thank you; that makes sense.
That’s not a debate about science.
Somewhat agree. Science is broken in systematic ways. See the quantum physics sequence.
That statement is a rather bold one to post on a site dedicated to improving human epistemological methods. It doesn’t seem to me that a bit of irrationality should prevent us from doing better; we didn’t even know what we were doing up until now.
EDIT: On 1 did you mean that science as we do it does not match the ideal, or the ideal does not work as well as is possible? Both are true.
I wouldn’t say 2 is bold at all, really, provided it is taken in a weak form—particularly if we factor out the transhumanist element. Yes, we will never be perfect Bayesian reasoning machines. This doesn’t mean we can’t or shouldn’t do ever better. I’m not sure what reasonably charitable interpretation would be a really bold claim, here… “We’re so far gone we shouldn’t bother trying,” perhaps, but that doesn’t seem to square with this poster’s other posts.
I don’t really have a clear idea of what boyi is even trying to say, so I’m not trying to square it with other posts.
The way I see it, “it’s impossible to make science live up to the ideals” is pretty bold. I’ll try to see a charitable interpretation.
I don’t know, there’s a general sense in which ideals are almost never reached.
yeah. I interpreted it closer to “impossible to do better” than “impossible to be perfect”. Looking back, the former is the more charitable interpretation.
I get this distinct feeling of having fallen for the fallacy of gray (can’t be perfect == can’t do better).
Idiomatically speaking, I think you can usually parse “can’t be perfect” as a proxy for “should not aspire to the ideal, even if you accept that it can only be approached asymptotically”.
On 1. I meant both.
On 2. I realize that it is a bold statement given the context of this blog. My reason for making it is that I believe taking the paradox of rationality into account would better serve your purposes.
If what you mean by 2 is that we can never be perfect, then yeah, that is a legitimate concern, and one that has been discussed.
I think the big distinction to make is that just because we aren’t and can’t be perfect, doesn’t mean we should not try to do better. See the stuff on humility and the fallacy of gray.
What I mean by 2 is that we can never be perfect and that the “rational man” is the wrong ideal.
That’s why we call ourselves “aspiring rationalists”, not just “rationalists”. “Rational” is an ideal we measure ourselves against, the way thermodynamic engines are measured against the ideal Carnot cycle.
Read the stuff I linked for more info.
I also said I think it is the wrong ideal. Not completely. I think the idea of rationality is a good one, but ironically it is not a rational one. Rationality is paradoxical.
Why do you say rationality is not the ideal? Around here we use the term rational as a proxy for “learning the truth and winning at your goals”. I can’t think of much that is more ideal. There are places where you will go off the track if you think that the ideal is to be rational. Maybe that’s what you are referring to?
Now is a good time to taboo “rationality”; explain yourself using whatever “rationality” reduces to so that we don’t get confused. (Like I did above with explaining about winning).
I agree that “learning the truth and winning at your goals” should be the ideal. But I also believe the following
- Humans are symbolic creatures: to some extent we exist in self-created realities that do not follow a predictable or always logical order.
- Humans are social creatures: not only is human survival completely dependent on the ability to maintain coexistence with other people, but individual happiness and identity are dependent on social networks.
Before I continue I would like to know what you and anyone else thinks about these two statements.
I suspect many Less Wrong readers will Agree Denotatively But Object Connotatively to your statements. As Nornagest points out, what you wrote is mostly true with one important caveat (the fact that we are irrational in regular and predictable ways). However, your statements are connotatively troubling because phrases like these are sometimes used to defend and/or signal affiliation with the kind of subjectivism that we strongly dislike.
I’d agree that a lot of our perceptual reality is self-generated—as a glance through this site or the cog-sci or psychology literature will tell you, our thinking is riddled with biases, shaky interpolations, false memories, and various other deviations from an ideal model of the world. But by the same token there are substantial regularities in those deviations; as a matter of fact, working back from those tendencies to find the underlying cognitive principles behind them is a decent summary of what heuristics-and-biases research is all about. So I’d disagree that our perceptual worlds are unpredictable: people’s minds differ, but it’s possible to model both individual minds and minds-in-general pretty well.
As to your second clause, most humans do have substantial social needs, but their extent and nature differs quite a bit between individuals, as a function of culture, context, and personality. This too exhibits regularities.
I don’t understand. Much of our self-identity is symbolic and imaginary. By self-created reality do you mean that our local reality is heavily influenced by us? That our beliefs filter our experiences somewhat? Or that we literally create our own reality? If it’s the last one, the standard response is this: there is a process that generates predictions and a process that generates experiences; they don’t always match up, so we call the former “beliefs” and the latter “reality” (see the map and territory sequence). If that’s not what you mean (I hope it is not), make your point.
yes
You have heard of Niche Construction, right? If not, it is the ability of an animal to manipulate its reality to suit its own adaptations. Most animals display some sort of niche construction. Humans are highly advanced architects of niches. In the same way ants build colonies and bees build hives, humans create a type of social hive that is infinitely more complex. The human hive is not built through wax or honey but through symbols and rituals held together by rules and norms. A person living within a human hive cannot escape the necessity of understanding the dynamics of the symbols that hold it together, so that they can most efficiently navigate its chambers. Keeping that in mind, it stands that all animals must respect the nature of their environment in order to survive. What is unique to humans is that the environments we primarily interact with are socially constructed niches. That is what I mean when I say human reality is self-created.
Earlier I talked about the paradox of rationality. What I meant by that is simply
- For humans, what is socially beneficial is rationally beneficial, because human survival is dependent on social solidarity.
- What is socially beneficial is not always actually beneficial to the individual or the group.
Thus the paradox of rationality: What is naturally beneficial/harmful is not aligned with what is socially beneficial/harmful.
Do you think that this is an actual paradox or a problem for rationality? If so, then you’re probably not using the r-word the same way we are. As far as I can tell, your argument is: To obtain social goods (e.g. status) you sometimes have to sacrifice non-social goods (e.g. spending time playing videogames). Nonetheless, you can still perform expected value calculations by deciding how much you value various “social” versus “non-social” goods, so I don’t see how this impinges upon rationality.
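A minimal sketch of such an expected-value calculation (Python; the options, payoffs, and weights are entirely made up): social and non-social goods feed into one value function, and the trade-off is just arithmetic.

```python
# Hypothetical options with made-up payoffs in two kinds of goods.
options = {
    "stay late to help a colleague": {"social": 8.0, "non_social": -3.0},
    "go home and play videogames":   {"social": -1.0, "non_social": 5.0},
}

# How much this particular agent happens to value each kind of good.
weights = {"social": 1.0, "non_social": 0.7}

def value(payoffs):
    """Weighted sum of social and non-social payoffs."""
    return sum(weights[kind] * amount for kind, amount in payoffs.items())

for name, payoffs in options.items():
    print(f"{name}: {value(payoffs):+.1f}")

best = max(options, key=lambda name: value(options[name]))
print("chosen:", best)
```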
My argument is that existing socially is not always aligned with what is necessary for natural health/survival/happiness, and yet at the same time is necessary.
We exist in a society where the majority of jobs demand that we remain seated and immobile for the better part of the day. That is incredibly unhealthy. It is also bad for intellectual productivity. It is illogical, and yet for a lot of people it is required.
Correct me if I’m wrong, but isn’t this just another way of saying, “the way we do things is poorly optimized”?
Yes it is, and I do not think a solely rational agenda will fix the problem, because I do not see humans as solely rational creatures.
Again, that’s not how we use the word. Being rational does not mean forgoing social goods—quite the opposite, in fact. No one here believes that human beings are inherently good at truth seeking or achieving our goals, but we want to aspire to become better at those things.
Ok, but then I do not understand how eliminating God or theism serves this purpose. I completely agree that there are destructive aspects of both these concepts, but you all seem unwilling to accept that they also play a pivotal social role. That was my original point in relation to the author of this essay. Rather than convincing people that it is ok that there is no God, accept the fact that “God” is an important social institution and begin to work to rewrite “God” rationally.
Can you say more about how you determined that “rewriting God” is a more cost-effective strategy for achieving our goals than convincing people that it is OK that there is no God?
You seem very confident of that, but thus far I’ve only seen you using debate tactics in an attempt to convince others of it, with no discussion of how you came to believe it yourself, or how you’ve tested it in the world. The net effect is that you sound more like you’re engaging in apologetics than in a communal attempt to discern truth.
For my own part, I have no horse in that particular race; I’ve seen both strategies work well, and I’ve seen them both fail. I use them both, depending on who I’m talking to, and both are pretty effective at achieving my goals with the right audience, and they are fairly complementary.
But this discussion thus far has been fairly tediously adversarial, and has tended to get bogged down in semantics and side-issues (a frequent failure mode of apologetics), and I’d like to see less of that. So I encourage shifting the style of discourse.
I really like the last paragraph here.
Any time you feel the urge to say, “Why can’t you see that X?”, it’s usually not that the other person is being deliberately obtuse—most likely it’s that you haven’t explained it as clearly as you thought you had. This is especially true when dealing with others in a community you are new to or someone new to your community: their expectations and foundations are probably other than you expect.
I felt the major point of this article, “How to lose an argument,” was that accepting that your beliefs, identity, and personal choices are wrong is psychologically damaging, and that most people will opt to deny wrongness to the bitter end rather than accept it. The author suggests that if you truly want to change people’s opinions and not just boost your own ego, then it is more cost-effective to provide the opposition with an exit that does not result in the individual having to bear the psychological trauma of being wrong.
If you accept the author’s statement that, without the tact to provide the opposition a line of flight, they will emotionally reject your position regardless of its rational basis, then rewriting God is more effective than trying to destroy God, for the very same reason.
God is “God” to some people, but to others God is like the American flag: a symbol of family, of home, of identity. The rational all-stars of humanity are competent enough to break down these connotations, thus destroying the symbol of God. But I think by definition all-stars are a minority, and that the majority of people are unable to break symbols without suffering the psychological trauma of wrongness.
is that good enough?
Yes, this is a statement of your position. Now the question from grandparent was, how did you arrive at it? Why should anyone believe that it is true, rather than the opposite? Show your work.
God is not just a transcendental belief (meaning a belief about the state of the universe or other abstract concepts). God represents a loyalty to a group identity for lots of people, as well as their own identity. To attack God is the same as attacking them. So, as I stated before, if you agree with Yvain’s argument (that attacking the identity of the opposition is not as effective in argument as providing them with a social line of flight), then you agree with mine (it would be more effective to find a way to dispel the damage done by the symbol of God rather than destroy it, since many people will be adamantly opposed to its destruction for the sake of self-image). I do not see why I have to go further to prove a point that you all readily accepted when it was Yvain who stated it.
That seems to assume that direct argument is the only way to persuade someone of something. It’s in fact a conspicuously poor way of doing so in cases of strong conviction, as Yvain’s post goes to some trouble to explain, but that doesn’t imply we’re obliged to permanently accept any premises that people have integrated into their identities.
You can’t directly force a crisis of faith; trying tends to further root people in their convictions. But you can build a lot of the groundwork for one by reducing inferential distance, and you can help normalize dissenting opinions to reduce the social cost of abandoning false beliefs. It’s not at all clear to me that this would be a less effective approach than trying to bowdlerize major religions into something less epistemically destructive, and it’s certainly a more honest one—instrumentally important in itself given how well-honed our instincts for political dissembling are—for people that already lack religious conviction.
Your mileage may vary if you’re a strong Deist or something, but most of the people here aren’t.
The two arguments aren’t the same at all. Yvain really is in favor of destroying the symbol, whereas you seem to be more interested in (as you put it) “rewriting” it.
The methodology is the same. If you accept Yvain’s methodology, then you accept mine. You are right that our purposes and methods are different.
Yvain Wants:
- To destroy the concept of God
- To give people a social retreat for a more efficient transition
- To suggest that the universe can be moral without God, in order to accomplish this

I Want:
- To rewrite the concept of God
- To give people a social retreat for a more efficient transition (SAME)
- To suggest that God can be moral without being a literal conception
The methodology isn’t the same—Yvain’s methodology is to give people a Brand New Thingy that they can latch onto, yours seems to be reinventing the Old Thingy, preserving some of the terminology and narrative that it had. As discussed in his Parable, these are in fact very different. Leaving a line of retreat doesn’t always mean that you have to keep the same concepts from the Old Thingy—in fact, doing so can be very harmful. See also the comments here, especially ata’s comment.
And that is why I disagree with this part of your argument:
I don’t think anyone here has objected to that part of your methodology, merely to your goal of “rewriting God” and to its effectiveness in relation to the implied supergoal of creating a saner world.
You are assuming that “the majority of people are unable to break symbols without suffering the psychological trauma of wrongness” and thus “rewriting God is more effective than trying to destroy God”.
Eliezer’s argument assumed the uncontroversial premise “Many people think God is the only basis for morality” and encouraged finding a way around that first. Your argument seems to be assuming the premises (1) “The majority of people are unable to part with beliefs that they consider part of their identity” as well as (2) “It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them”. Yvain may have supported (1), but I didn’t see him arguing in favor of (2).
I don’t think anyone is seriously questioning the “leave a line of retreat” part of your argument.
You don’t have to do anything. But if you want people to believe you, you’re going to have to show your work. Ask yourself the fundamental question of rationality.
How is this an uncontroversial claim? What proof have you given for it? It is uncontroversial to you because everyone involved in this conversation (excluding me) has accepted this premise. Ask yourself the fundamental question of rationality.
My argument is not that people are unable to part with beliefs, but that 1.) it is harder and 2.) they don’t want to. People learn their faith from their parents, from their communities. Some people have bad experiences with this, but some do not. To them, religion is a part of their childhood and their personal history, both of which are sacred to the self. Why would they want to give that up? They do not have the foresight or education to see the damages of their beliefs. All they see is you/people like you calling a part of them “vulgar.”
Is that really the rational way to convince someone of something?
Well, it took me about five minutes on Wikipedia to find its pages on theonomy and divine command theory, and most of that was because I got sidetracked into moral theology. I don’t know what your threshold for “many people” is, but that ought to establish that it’s not an obscure opinion within theology or philosophy-of-ethics circles, nor a low-status one within at least the former.
I consider “[m]any people think God is the only basis for morality” to be uncontroversial because I have heard several people express this view, see no reason to believe that they are misrepresenting their thoughts, and see no reason to expect that they are ridiculous outliers. If we substituted “most” for “many” it would be more controversial (and I’m not sure whether or not it would be accurate). If we substituted “all” for many, it would be false.
No one has argued against it.
None.
Yes. By the way, you both asked a question above and asserted its answer. You could have saved yourself some time.
Was this an attempt at a tu quoque? You were advancing a proposition, and I was clarifying the request for you to show your work.
I don’t believe I’ve done this, and I’m not sure what you mean by “people like you”. Was that supposed to be racist / sexist?
That sounds roughly like my #2 above, which is what I noted Yvain and Eliezer did not advance in the relevant articles.
“It is harder and/or worse to get people to part with these beliefs than to adopt a bowdlerized version of them”.
Don’t use words if you do not know what they mean.
Indeed.
Better yet, don’t criticize someone’s usage of a word unless you know what it means.
At this point, I no longer give significant credence to the proposition that you are making a good-faith effort at truth-seeking, and you are being very rude. I have no further interest in responding to you.
Show me a definition of the word bowdlerize that does not use the word vulgar or a synonym.
If I am being rude it is because I am frustrated by the double standards of the people I am talking with. I use the word force and I get scolded for trying to taint the conversation with connotations. I will agree that “force” has some negative connotations, but it has positive ones too. In any case, it is far more neutral than “bowdlerize.” And quite frankly, I am shocked that I get criticized for pointing out that you clearly do not know what that word means, while you get praised for criticizing me for pointing out what the word actually means.
It is hypocritical to jump down my throat about smuggling connotations into a conversation when your language is even more aggressive.
It is also hypocritical that if I propose that there are people who have faith in religion not because they fear a world without it, the burden of proof is on me; while if the opposition proposes that many people have faith in religion because they fear a world without it, no proof is required.
I once thought the manifest rightness of post-modern thought would convince those naive realists of the truth, if only they were presented with it clearly. It doesn’t work that way, for several reasons:
Many “post-modern” ideas get co-opted into mainstream thought. Once, Legal Realism was a revolutionary critique of legal formalism. Now it’s what every cynical lawyer thinks while driving to work. In this community, it is possible to talk about “norms of the community” both in reference to this community and other communities. At least in part, that’s an effect of the co-option of post-modern ideas like “imagined communities.”
Post-modernism is often intentionally provocative (i.e. broadening the concept of force). Therefore, you shouldn’t be surprised when your provocation actually provokes. Further, you are challenging core beliefs of a community, and should expect push-back. Cf. the controversy in Texas about including discussion of the Spot Resolution in textbooks.
As Kuhn and Feyerabend said, you can’t be a good philosopher of science if you aren’t a good historian of science. You haven’t demonstrated that you have a good grasp of what science believes about itself, as shown in part by your loose language when asserting claims.
Additionally, you are the one challenging the status quo beliefs, so the burden of proof is placed on you. In some abstract sense, that might not be “fair.” Given your use of post-modern analysis, why are you surprised that people respond badly to challenges to the imagined community? This community is engaging with you fairly well, all things considered.
ETA: In case it isn’t clear, I consider myself a post-modernist, at least compared to what seems to be the standard position here at LW.
Really great post! You are completely right on all counts. Except I really am not a post-modernist, I just agree with some of their ideas, especially conceptions of power, as you have pointed out.
I am particularly impressed with Bullet point # 2, because not only does it show an understanding of the basis of my ideas, but it also accurately points out irrationality in my actions given the theories I assert.
I would then ask you: if you understand this aspect of communities, including your own, would you call it rational? It is no excuse, but I think, coming here, I was under the impression that equality in the burden of proof and accommodation of norms and standards would be the norm, because I view these things as rational.
Does it seem rational that one side does not hold the burden of proof? To me it is normal for debate because each side is focused solely on winning. But I would call pure debate a part of rhetoric (“the dark arts”). I thought here people would be more concerned with Truth than winning.
Does it really seem to you that the statement “Extraordinary claims require extraordinary support” is not rational?
Obviously, there’s substantial power in deciding what claims are extraordinary.
You’re dodging my question.
As to your question: I do not think I have made any more extraordinary claims than my opposition has. To me, the fact that “several people have told someone that they need there to be God because without God the universe would be immoral” is not sufficient evidence to make that claim. I would also suggest that my claims are not extraordinary; they are contradictory to several core beliefs of this community, which makes them unpleasant, not unthinkable.
If someone asserts X, then before asking him to provide some solid evidence that X, you should stick your neck out and say that you yourself believe non-X.
Otherwise, people might expect that after they do all the legwork of coming up with evidence for X, you’ll just say “well actually I believe X too I was just checking lol”.
You can’t expect people to make efforts for you if you show no signs of reciprocity—by either saying things they find insightful, or proving you did your research, or acknowledging their points, or making good faith attempts to identify and resolve disagreements, etc. If all you do is post rambling walls of text with typos and dismissive comments and bone-headed defensiveness on every single point, then people just won’t pay attention to you.
Respectfully, if you don’t think post-modernism is an extraordinary claim, you need to spend more time studying the history of ideas. The length of time it took for post-modern thought to develop (even counting from the Renaissance or the Enlightenment) is strong evidence of how unintuitive it is. Even under a very generous definition of post-modernism and a very restrictive start of the intellectual clock, Nietzsche is almost a century after the French Revolution.
If your goal is to help us have a more correct philosophy, then the burden is on you to avoid doing things that make it seem like you have other goals (like yanking our chain). I.e. turn the other cheek, don’t nitpick, calm down, take on the “unfair” burden of proof. Consider the relevance of the tone argument.
There are many causes of belief in belief. In particular, religious belief has social causes and moral causes. In the pure case, I suspect that David Koresh believed things because he had moral reasons to want to believe them, and the social ostracism might have been seen as a feature, not a bug.
If one decides to deconvert someone else (perhaps to help the other achieve his goals), it seems like it would matter why there was belief in belief. And that’s just an empirical question. I’ve personally met both kinds of people.
I concede that post-modernism is unintuitive when compared to the history of academic thought, but I would argue that modernism is equally unintuitive to unacademic thought. Do you not agree?
What do we mean by modernism? I think the logical positivists are quite intuitive. What’s a more natural concept from “unacademic” thought than the idea that metaphysics is incoherent? The intuitiveness of the project doesn’t make it right, in my view.
Bowdlerization is normally understood to be the idea of removing offensive content, but this offensiveness doesn’t need to have anything to do with “vulgarity”.
X is offensive. Vulgar is offensive. Therefore X is vulgar. Logic equals very yes?
vul·gar : indecent; obscene; lewd: a vulgar work; a vulgar gesture.
And just in case....
Indecent: offending against generally accepted standards of propriety or good taste; improper; vulgar:
Or are you going to tell me that “offensive content” is different from something that is offending?
There exist things that are offensive against standards of propriety and taste (the things you call “vulgar”). Then again there exist things which offend against standards of e.g. morality.
You don’t seem to understand that there can exist offensiveness which isn’t about good manners, but about moral content.
??? Um, no. Read sentence #2.
Please respond to the following two questions, if you want me to understand the point of disagreement:
Do you understand/agree that I’m saying “offensive content” is a superset of “vulgar content”?
Therefore do you understand/agree that when I say something contains offensive content, I may be saying that it contains vulgar content, but I may also be saying it contains non-vulgar content that’s offensive to particular moral standards?
First, bowdlerizing has always implied removing content, not adding offensive content. Second, the word has evolved over time to mean any removal of content that changes the “moral/emotional” impact of the work, not simply removal of vulgarity.
I do not say it means adding content. It means to remove offensive content. Offensive content that is morally base is considered vulgar.
The two statements you quoted are not inconsistent because a bowdlerized theory is not calling the original theory vulgar, in current usage. Based on the change in meaning that I identified.
Alice says that she believes in God, and a neutral observer can see that behaving in accordance with this belief is preventing Alice from achieving her goals. Let’s posit that believing in God is not a goal for Alice; it’s just something she happens to believe. For example, Alice thinks God exists but is not religiously observant and does not desire to be observant.
What should Bob do to help Alice achieve her goals? Doesn’t it depend on whether Alice believes in God or believes that “I believe in God” is/should be one of her beliefs?
More generally, what is wrong (from a post-modern point of view) with saying that all moral beliefs are instances of “belief in belief”?
Well, it certainly clarifies the kind of discourse you’re looking for, which I suppose is all I can ask for. Thanks.
There are pieces of this I agree with, pieces I disagree with, and pieces where a considerable amount of work is necessary just to clarify the claim as something I can agree or disagree with.
Personally, I see truth as a virtue and I am against self-deception. If God does not exist, then I desire to believe that God does not exist, social consequences be damned. For this reason, I am very much against “rewriting” false ideas—I’d much prefer to say oops and move on.
Even if you don’t value truth, though, religious beliefs are still far from optimal in terms of being beneficial social institutions. While it’s true that such belief systems have been socially instrumental in the past, that’s not a reason to continue supporting a suboptimal solution. The full argument for this can be found in Yvain’s Parable on Obsolete Ideologies and Spencer Greenberg’s Your Beliefs as a Temple.
When you call truth a virtue, do you mean in terms of Aristotle’s virtue ethics? If so, I definitely agree, but I do not agree with neglecting the social consequences. Take a drug addict, for example. If you cut them off cold turkey, the shock to their system could kill them. In some sense the current state of religion is an addiction for many people, perhaps even the majority of people, that weakens them and ultimately damages their future. It is not only beneficial to want to change this; it is rational, seeing as how we are dependent on the social hive that is infected by this sickness. The questions I feel your response fails to address are: is the disease external to the system, and can it truly be removed (my point about irrationality potentially being a part of the human condition)? What is the proper process of intervention for an ideological addict? Will they really just be able to stop using, or will they need a more incremental withdrawal process?
Along the lines of my assertions against the pure benefit of material transformation, I would argue that force is not always the correct paradigm for solving a problem. Trying to break the symbol of God regardless of the social consequences is, to me, using intellectual/rational force (dominance) to fix something.
The purely rationalist position is a newer adaptation of the might makes right ideology.
You are right that people sometimes need time to adapt their beliefs. That is why the original article kept mentioning that the point was to construct a line of retreat for them; to make it easier on them to realize the truth.
This is strictly true, but your implication that it is somehow related here is wrong. Intellectual force is what is used in rhetoric. Around here, rhetoric is considered one of the Dark Arts. Rationalists are not the people who are recklessly forcing atheism without regard for consequences. See raising the sanity waterline. Religion is a dead canary and we are trying to pump out the gas, not just hide the canary.
This is just a bullshit flame. If you are going to accuse people of violence, show your work.
I know! That is what I have been saying from the start. I agree with the idea. My dissent is that I do not think the author’s method truly follows this methodology. I do not think that telling people “it is ok there is no God, the universe can still be moral” constructs a line of retreat. I think it oversimplifies why people have faith in God.
And just to make sure, are you clear on the difference between a method and a methodology?
Rhetoric can be used as force, but to reduce it to “dark arts” is reductionist, just as failing to see the force being used by rationalists is also reductionist. Anyone who wants to destroy/remove something is by definition using force. Religion is not a dead canary; it is a misused tool.
No, I am not flaming, at least not by the definition of rationalists on this blog. Fact is intellectual force. Rationalists want to use facts to force people to conform to what they believe. Might is right does not necessarily mean using violence; it just means you believe the stronger force is correct. You believe yourself intellectually stronger than people who believe in a deity, and thus right while they are wrong.
Can you elaborate on what you mean by “reductionist”? You seem to be using it as an epithet, and I honestly don’t understand the connection between the way you’re using the word in those two sentences.
On LessWrong we generally draw a distinction between honest, white-hat writing/speaking techniques that make one’s arguments clearer and dishonest techniques that manipulate the reader/listener (“Dark Arts”). Most rhetoric, especially political or religious rhetoric, contains some of the latter.
Again, this is just not what we’re about. There’s a huge difference between giving people rationality skills so that they are better at drawing conclusions based on their observations and telling them to believe what we believe.
Can you taboo “force”? That might help this discussion move to more fertile ground.
Reductionist generally means you are over-extending an idea beyond its context or that you are omitting too many variables in the discussion of a topic. In this case I mean the latter. To say that rhetoric is simply wrong and that “white-hat writing/speaking” is right is too black and white. It is reductionist. You assume that it is possible to communicate without using what you call “the dark arts.” If you want me to believe that show your work.
“Giving people skills” they do not ask for is forcing it on them. It is an act of force.
That isn’t what it generally means.
I wonder if there is actually a contingent of people who have Boyi’s “overextending/omitting variables” definition as a connotation for “reductionist,” and to what extent this affects how they view reductionist philosophy. It would certainly explain why “reductionist” is sometimes used as a snarl word.
FWIW, I have heard the word used in exactly this kind of pejorative sense. I don’t know which usage is more common, generally.
Ok, “generally” was a bad word. I checked out the wiki and the primary definition there is not one I am familiar with. The definition of theoretical reductionism found on the wiki is more related to my use of the term (methodological too). What I call reductionism is trying to create a grand theory (an all-encompassing theory). In sociological literature there is a pretty strong critique of grand theories. If you would like to check me on this, you could look at “The Sociological Imagination” by C. Wright Mills. The critiques are basically what I listed above. Trying to create a grand theory usually comes at the cost of oversimplifying the system under consideration. That is what I call reductionist.
I don’t think it’s black and white; there is a continuum between clear communication and manipulation. But beware of the fallacy of gray: just because everything has a tinge of darkness, that doesn’t make it black—some things are very Dark Artsy, others are not. I do think it is possible to communicate without manipulative writing/speaking. Just to pick a random example, Khan Academy videos. In them, the speaker uses a combination of clear language and visuals to communicate facts. He does not use dishonesty, emotional manipulation, or other techniques associated with dark artsy rhetoric to do this.
Please taboo “force.”
He asked you to taboo “force” to avoid bringing in its connotations. Please resend that thought without using any of “force” “might” “violence” etc. What are you trying to say?
If that is what you mean by force, you coming here and telling us your ideas is “an act of force” too. In fact, by that definition, nearly all communication is “an act of force”. So what? Is there something actually wrong with “giving people ideas or tools they didn’t ask for”?
I’m going to assume that you mean it’s bad to give people ideas they will dislike after the fact, like sending people pictures of gore or child porn. I don’t see how teaching people useful skills to improve their lives is at all on the same level as giving them pictures of gore.
You seem to be using reductionism in a different way than I am used to. Please reduce “reductionism” and say what you mean.
First of all, what I have been trying to say is that, no, rationalists are not interested in “forc[ing] people to conform”. We are interested in improving general epistemology.
I also think you are wrong that using “intellectual force” to force your beliefs on someone is not violence. Using rhetoric is very much violence, not physical, but definitely violence.
Yes we believe ourselves to be more correct and more right than theists, but you seem to be trying to argue “by definition” to sneak in connotations. If there is something wrong with being right, please explain directly without trying to use definitions to relate it to violence. Where does the specific example of believing ourselves more right than theists go wrong?
An honestly rational position might be more appropriately labeled a “right makes might” ideology—though this is somewhat abusing the polysemy of “right” (here meaning “correct”, whereas in the original it means “moral”).
Now I haven’t followed the discussion closely, but it seems like you haven’t explained what you actually advocate. Something like the following seems like the obvious way to offer “incremental withdrawal”:
‘Think of the way your parents and your preacher told you to treat other people. If that still seems right to you when you imagine a world without God, or if you feel sad or frightened at the thought of acting differently, then you don’t have to act differently. Your parents don’t automatically become wrong about everything just because they made one mistake. We all do that from time to time.’
As near as I can tell from the comments I’ve seen, you’d prefer that we promote what I call atheistic Christianity. We could try to redefine the word “God” to mean something that really exists (or nothing at all). This approach may have worked in a lot of countries where non-theism enjoys social respect, and where the dangers of religion seem slightly more obvious. It has failed miserably in the US, to judge by our politics. Indeed, I would expect one large group of US Christians to see atheist theology as a foreign criticism/attack on their community.
They clearly play a social role. Whether it is pivotal depends on what is meant by “pivotal”.
While our internal models of reality are not always “logical”, I would argue that they are quite predictable (though not perfectly so). Just to make up a random example, I can confidently predict that the number of humans on Earth who believe that the sky is purple with green polka dots is vanishingly small (if not zero).
Agreed, but I would argue that there are other factors on which human survival and happiness depend, and that these factors are at least as important as “the ability to maintain coexistence with other people”.
I am not trying to be rude or aggressive here, but I just wanted to point out that your argument rests on a fairly deceptive rhetorical tactic: casually introducing an example as though it were run-of-the-mill, while in fact picking an extreme. You are correct that a person with a normally functioning visual cortex and no significant retinal damage can be predicted to see the sky in a certain way, but that does not change the fact that a large portion of human existence is socially created. Why do we stop at stop lights or stop signs? There is nothing inherent in the color red that means stop; in other cultures, different colors or symbols signify the same thing. We have arbitrarily chosen red to mean stop.
Some things can be logically predicted given the biological capacities of humans, but it is within the biological capacity of humans to create symbolic meaning. We know this to be a fact, and yet we are unable to predict what people believe nearly as easily, because, unlike the color of the sky, the major issues of the social hive are not as empirically clear. Questions about what constitutes life, what is love, what is happiness, and what is family are in some cases just as arbitrarily defined as what means stop and what means go, but they are of much graver concern.
Just to clarify, it is not that I think there is no way to rationally choose a symbolic narrative, but that initiating a rational narrative involves understanding the processes by which narratives are constructed. That does not mean abandoning rationality, but abandoning the idea of universal rationality. Instead, I believe rationalists should focus more on understanding the irrationality of human interaction, so that they can use irrational means to foster better rationality.
Some portion of human experience includes facts like “I don’t fall through the floor when I stand on it” or “I will die if I go outside in a blizzard without any clothes for any length of time.” Some portion of human experience includes facts like “I will be arrested for indecent exposure if I go outside without wearing any clothes for any length of time.”
Facts of the first kind are overwhelmingly more numerous than facts of the second kind. Facts of the second kind are more important to human life. I agree with you that this community underestimates the proportion of facts of the second kind, which are not universalizable the way facts of the first kind are. But you weaken the case for post-modern analysis by asserting that anything close to a majority of facts are socially determined.
I was never trying to argue that the majority of facts are socially determined. I was arguing that the majority of facts important to human happiness and survival are socially determined. I agree that facts of the first kind are more numerous, but as you say facts of the second kind are more important. Is it logical to measure value by size?
Fair enough. I respectfully suggest that your language was loose.
For example:
Consider the difference between saying that and saying “a large portion of human decisions are socially created, even if they appear to be universalizable. A much larger proportion than people realize.”
My example wasn’t meant to be a strawman, but simply an illustration of my point that human thoughts and behaviors are predictable. You may argue that our decision to pick red for stop signs is arbitrary (I disagree even with this, but that’s beside the point), but we can still predict with a high degree of certainty that an overwhelming majority of drivers will stop at a stop sign—despite the fact that stop signs are a social construct. And if there existed a society somewhere on Earth where stop signs were yellow and rectangular, we could confidently predict that drivers from that nation would have a higher chance of getting into an accident while visiting the U.S. Thus, I would argue that even seemingly arbitrary social constructs still result in predictable behaviors.
I’m not sure what this means.
I am fairly certain I personally can predict what an average American believes regarding these topics (and I can do so more accurately by demographic). I’m just a lowly software engineer, though; I’m sure that sociologists and anthropologists could perform much better than me. Again, “arbitrary” is not the same as “unpredictable”.
I don’t know, are they? I personally think that questions such as “how can we improve crop yields by a factor of 10” can be at least as important as the ones you listed.
I don’t think that you could brainwash or trick someone into being rational (since your means undermine your goal); and besides, such heavy-handed “Dark Arts” are, IMO, borderline unethical. In any case, I don’t see how you can get from “you should persuade people to be rational by any means necessary” to your original thesis, which I understood to be “rationality is unattainable”.
I did not say your example was a strawman; my point was that it was reductionist. Determining the general color of the sky, or whether or not things will fall, is many degrees simpler than predicting the kind of human thoughts and behaviors I am talking about. That is like saying that because multiplication is easy, math must be easy.
Well, you are wrong about that. No competent sociologist or anthropologist would claim to be able to do what you are suggesting.
You can make fun of my diction all you want, but I think it is pretty obvious that love, morality, life, and happiness are of the utmost concern (grave concern) to people.
I would subsume the concern over food stocks under the larger concern of life, but I think it is interesting that you bring up crop yield. This is a perfect example of the ideology of progress I have been discussing in other responses. There is no questioning whether it is dangerous or rational to try to continuously improve crop yield; it is just blindly seen as right (i.e., as progress).
However, if we look at both the good and the bad of the Green Revolution of the ’70s and ’80s, the practices currently being implemented to increase crop yield are borderline ecocide. They are incredibly dangerous, yet we continue to refine them further and further, ignoring the risks in light of their further potential to transform material reality to our will.
The ethical issues in question are interesting because they are centered around the old debate over collectivist vs. individualist morality. Since the Cold War, America has been heavily indoctrinated in an ideology in which free will (individual autonomy) is a key aspect of morality. I question this idea. As many authors on this site point out, a large portion of human action, thought, and emotion is subconsciously created. Schools, corporations, governments, even parents consciously or unconsciously take advantage of this fact to condition people into ideal types. Is this ethical? If you believe that individual autonomy is essential to morality, then no, it is not. However, while I am not a total advocate of Foucault and his ideas, I do agree that autonomous causation is a lot less significant than the individualist wants to believe.
Rather than judging the morality of an action by the autonomy it provides for the agents involved, I tend to be more of a pragmatist. If we socially engineer people to develop the habits and cognitions they would develop if they were more individually rational, then I see this as justified. The problem with this idea is the question of who watches the watchmen. By what standard do you judge the elite that would have to produce mass habit and cognition? Is it even possible to control that and maintain a rational course through it?
This I do not know, which is why I am hesitant to act on this idea. But I do think there is a mass of indoctrinated people who do not think about the fact that what they believe is a social reality.
Agreed, but you appeared to be saying that human thoughts and actions are entirely unpredictable, not merely poorly predictable. I disagree. For example, you brought up the topic of “what is love, what is happiness, what is family”:
Why not? Here are my predictions:
The average American thinks that love is a mysterious yet important feeling—perhaps the most important feeling in the world, and that this feeling is non-physical in the dualistic sense. Many, though not all, think that it is a gift from a supernatural deity, as long as it’s shared between a man and a woman (though a growing minority challenge this claim).
Most Americans believe that happiness is an entity similar to love, and that there’s a distinction between short-term happiness that comes from fulfilling your immediate desires, and long-term happiness that comes from fulfilling a plan for your life; most, again, believe that the plan was laid out by a deity.
Most Americans would define “my family” as “everyone related to me by blood or marriage”, though most would add a caveat something like, “up to N steps of separation”, with N being somewhere between 2 and 6.
Ok, so those are pretty vague, and may not be entirely accurate (I’m not an anthropologist, after all), but I think they are generally not too bad. You could argue with some of the details, but note that virtually zero people believe that “family” means “a kind of pickled fruit”, or anything of that sort. So, while human thoughts on these topics are not perfectly predictable, they’re still predictable.
I was not making fun of your diction at all, I apologize if I gave that impression.
First of all, you just made an attempt at predicting human thoughts—i.e., what’s important to people. When I claimed to be able to do the same, you said I was wrong, so what’s up with that? Secondly, I agree with you that most people would say that these topics are of great concern to them; however, I would argue that, despite what people think, there are other topics which are at least as important (as per my earlier post).
Again, that’s an argument against a particular application of a specific technology, not an argument against science as a discipline, or even against technology as a whole. I agree with you that monocultures and wholesale ecological destruction are terrible things, and that we should be more careful with the environment, but I still believe that feeding people is a good thing. Our choice is not between technology and nothing, but between poorly-applied technology and well-applied technology.
Ok, first of all, “individual autonomy” is a concept that predates the Cold War by a huge margin. Secondly, I have some disagreements with the rest of your points regarding “collectivist vs. individualist morality”; we can discuss them if you want, but I think they are tangential to our main discussion of science and technology, so let’s stick to the topic for now. However, if you do advocate “collectivist morality” and “socially engineer[ing] people”, would this not constitute an application of technology (in this case, social technology) on a grand scale? I thought you were against that sort of thing? You say you’re “hesitant”, but why don’t you reject this approach outright?
BTW:
This is yet another prediction about people’s thoughts that you are making. This would again imply that people’s thoughts are somewhat predictable, just like I said.
Rationality helps you reach your goals. Terminal goals are not chosen rationally. Is that what you are getting at?
What do you mean by “the paradox of rationality”?
(Have you read this?)