I think it’s reasonable to believe that there are no ontologically basic mental entities because you don’t believe that anybody has demonstrated telepathy.
If, however, you believe that the data supports telepathy, then I find it strange to say “I defy the data, because I don’t believe in ontologically basic mental entities”, as your whole case for there not being ontologically basic mental entities was that there is no telepathy.
I don’t think it’s true for many people that their main reason for not believing in OBMEs is that there appears to be no telepathy. If I disbelieve in OBMEs because I don’t see how to fit them into a reductionist understanding of the world that has, on my view, achieved such stunning empirical success that it would need overwhelming evidence to overturn it, then defying the data when presented with apparent evidence for telepathy isn’t so unreasonable.
(Someone doing that should of course consider possible mechanisms for telepathy that don’t involve OBMEs, and should reconsider their objection to OBMEs if enough apparent evidence for them turns up. I am not defending outright immovability.)
If I disbelieve in OBMEs because I don’t see how to fit them into a reductionist understanding of the world that has, on my view, achieved such stunning empirical success that it would need overwhelming evidence to overturn it
Steam engines weren’t built because of reductionist thinking but because of empirical experimentation.
When medicine was based on reductionism instead of empiricism, it is commonly believed to have killed more people than it cured. When it comes to new drugs, 90% of those for which there was reductionist reason to believe they would work turn out to be flawed.
I think you very soon get into problems if you think that only things you can explain from the ground up exist.
Practically, I think it’s very worthwhile to have a state of non-judgement where you let experience speak for itself without committing to any deeper notion of the way things are.
Of course I grant that there are people who deeply believe in the naturalist view of the world and will therefore reject telepathy on those grounds. On the other hand, I don’t see why someone who has had a few spiritual experiences and seeks more of them should have that commitment, or why he should adopt it based on the reasoning of this article.
It sounds to me like you’re arguing against a straw man. Reductionism doesn’t mean believing the proposition “Nothing exists that I can’t explain from the ground up”. It means a commitment to trying to explain things from the ground up (or, actually, from the top down, but with the intention of getting as near as possible to whatever ground there may be), and to remaining dissatisfied with explanations in so far as they appeal to things whose properties aren’t clearly specified.
Steam engines weren’t built because of reductionist thinking but because of empirical experimentation.
You say that as if “reductionist” and “empirical” are opposing ideas somehow. Of course they aren’t; reductionism and empiricism are two of the key ideas that make science work. You do everything you can to find out what actually happens, you try to build theories as detailed and bullshit-free as you can that explain what you’ve found, and then you look for more empirical evidence to help decide between those theories, and then you look for better theories that match what you’ve found, and so on.
When medicine was reductionist based instead of empirical based [...]
Not being empirical is a terrible mistake. It’s not clear exactly what and when you’re talking about, but do you have any grounds for thinking that the bad results you describe were the result of too much reductionism rather than of not enough empiricism?
When it comes to new drugs [...]
Most new drugs don’t work, quite true. Do you have any reason to think drug discovery would work better if it were somehow driven by a less reductionist view of how drugs work? Would you, if so, like to be more specific about what you have in mind? (And … has anyone actually done it, saved lots of lives, and got rich?)
if you think that only things that you can explain from the ground up exist
Who thinks that? (Thinking that certainly isn’t what I mean by reductionism.)
I don’t see why someone who has had a few spiritual experiences [...] should have that commitment [sc. to naturalism] or why he should adopt it based on the reasoning of this article.
The article isn’t claiming to make a compelling case for naturalism, so I think Eliezer would agree with the last part of that. As to the first part, it sounds (but maybe I’m misunderstanding) as if you are saying that having had “a few spiritual experiences” constitutes strong evidence against naturalism. It’s probably true that having “spiritual experiences” tends to make people less likely to be naturalists, but it’s not at all clear to me why they are strong evidence against naturalism. There’s nothing in naturalism to suggest that people shouldn’t have such experiences.
(Unless you mean outright miraculous experiences. Those might be very good evidence against naturalism. By an extraordinary coincidence, they also appear to be very rare and to evaporate when examined closely.)
Do you have any reason to think drug discovery would work better if it were somehow driven by a less reductionist view of how drugs work? Would you, if so, like to be more specific about what you have in mind?
The QS movement is an alternative to reductionism. As a concrete example, I believe that we should fund trials of vitamin D3 in the morning vs. vitamin D3 in the evening, based on self-reports that people found vitamin D3 in the morning more helpful. I think those empirical experiences should drive research priorities, instead of research priorities being driven by molecular-biological findings.
QS profits a lot from better technical equipment. Additionally, we likely want to get better at developing the phenomenological abilities of select individuals to perceive and write down what goes on in their own bodies. In addition to qualitative descriptions, those people should also make quantitative predictions about various QS metrics and calibrate their credence on those metrics.
As to the first part, it sounds (but maybe I’m misunderstanding) as if you are saying that having had “a few spiritual experiences” constitutes strong evidence against naturalism.
The position for which I’m arguing is empiricism: letting real-world feedback guide your actions instead of being committed to theories.
I think there are cases where commitment to naturalism leads people to make worse predictions than people who are committed to empiricism and simply let the data speak for itself.
If I take someone with a standard STEM background and put him in an environment conducive to spiritual experiences, I think the person who’s more open to updating their beliefs through data will make better predictions than one committed to his preconceived notions. In that process, updating would optimally be more about letting go of beliefs than about changing them.
The QS movement is an alternative to reductionism.
I think perhaps we mean very different things by “reductionism”. I see absolutely no conflict between the QS movement and reductionism.
I believe we should fund [...] based on self-reports
Fine with me, at least in principle. (Whether I’d actually be on board with funding those trials would depend on how much money is available, what other promising things there are to spend money on, etc.; it could be that those other things have stronger evidence that they’re worth funding.)
empirical experiences should drive research priorities instead of research priorities being driven by molecular-biological findings
I don’t see why we shouldn’t have both. Research should be directed at things that, on the basis of the available evidence, have the best chance of producing the most valuable results. Some of the available evidence comes from direct observation. Some comes from theoretical analysis or modelling of molecular-bio systems. Different kinds of evidence will be differentially relevant to different kinds of desired effects. (If you want to maximize your chance of living to 100, you may do best to look at lifestyles of different communities. If you want to maximize your chance of living to 200, you probably need something—no one has a very good idea what yet—for which direct empirical evidence doesn’t exist yet, because no one is living to anything like 200. Maybe what’s needed is some kind of funky nanotech. If so, it’s probably going to need those molecular biologists.)
the position for which I’m arguing is empiricism.
Splendid. I’m all in favour of empiricism. But again, perhaps we mean different things by that word. You speak of not being committed to theories, but the further we go in that direction the less ability we have to generalize the things we discover empirically. To make any statement that goes beyond just repeating simple empirical observations we’ve already made, we need theories. Our attachment to our theories shouldn’t go beyond the evidence we have for them. We should be on the lookout for signs that our theories are wrong. But that doesn’t mean giving up on theories; it just means being rational about them.
If the evidence for (say) ghosts is good enough, I will (I hope) start believing in ghosts. If it’s not quite that good, I may start believing that the world behaves kinda as if there are ghosts—which is probably enough to generate those better predictions you say more open-minded people will have.
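The graded belief change described here is just odds-form Bayesian updating. A toy sketch (every number below is invented purely for illustration) of how a skeptical prior moves under evidence of different strengths:

```python
# Toy Bayesian update: how evidence of different strengths would move
# a skeptical prior. All numbers are invented for illustration.

def posterior(prior, likelihood_ratio):
    """Update a probability given LR = P(evidence|H) / P(evidence|not-H)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

p = 0.001  # a skeptical prior that ghosts exist
# Weak evidence (LR = 2) barely moves it:
print(posterior(p, 2))      # ~0.002
# Overwhelming evidence (LR = 10000) overturns it:
print(posterior(p, 10000))  # ~0.909
```

The middle ground in the comment above (“the world behaves kinda as if there are ghosts”) corresponds to the intermediate region where the likelihood ratio is substantial but not overwhelming.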
Right now, it looks to me as if quite-firmly-committed naturalism generates pretty good predictions. Would you like to be more specific about some things you think naturalists get wrong?
The question isn’t “why shouldn’t we have both”; it’s rather “why don’t we have both, in a way that is reasonably founded?”
You speak of not being committed to theories, but the further we go in that direction the less ability we have to generalize the things we discover empirically.
If you train calibration, you can generalize without theories. Generalizing isn’t something that you need to do explicitly through theories. Phenomenological investigation provides a way to acquire knowledge that your brain can generalize on the System 1 level.
If the evidence for (say) ghosts is good enough, I will (I hope) start believing in ghosts.
That’s not the direction in which I’m arguing. I’m arguing that you should focus on predictions instead of on concepts like whether or not ghosts exist.
Splendid. I’m all in favour of empiricism.
Being for empiricism is not the same thing as practicing it. Actually practicing it means valuing experience more highly than theories.
Would you like to be more specific about some things you think naturalists get wrong?
The framing “things that naturalists get wrong” suggests that I think “naturalists get belief X wrong and should believe Y instead”. That’s not the main position that I advocate.
Studies consistently show that people get things wrong by being overconfident. The key is to become more open to accepting that reality tends to unfold in ways that your theories wouldn’t predict.
OK, so your response to “system 1 makes a lot of big mistakes” is not “get system 2 in charge in those situations” but “try to train system 1 to do better”. Once again I have to ask: why not both?
Now, let’s apply some empiricism to your suggestion here. Making theories, making them precise, getting detailed predictions out of them and comparing with experiment has been at the heart of the scientific enterprise since, say, Galileo. It’s worked incredibly well. Not instead of empirical investigation; not instead of well-trained Systems 1 generating intuitive predictions and ideas.
What do we have on the other side? Perhaps “be more specific about some things naturalists get wrong” was the wrong challenge. But so far everything you’ve offered is, well, theories. Maybe you’d rather call them predictions. But what they clearly aren’t is empirical evidence.
[...] predictions instead of concepts [...]
First of all, if you read the sentence I wrote immediately after the one you quoted, you will see that I endorsed exactly that idea before you mentioned it: given substantial evidence for ghosts but not enough to justify a change of overall theory, I should adopt the belief that the world behaves in something like the way it would if it contained ghosts.
Second: it turns out that concepts are really useful. They are especially useful when more than one person is involved. Suppose I am good at predicting the weather. If all I have is a well-trained system 1, I can’t communicate my expertise to you at all; I can just demonstrate it and hope you catch on. If I have half-baked folk theories, I can say “when the sky is such-and-such a colour the weather the following day tends to be such-and-such”, and you can test how well those claims hold up and use them to predict a bit yourself. If I have a full-blown scientific theory, you can put it into a big computer and take lots of measurements and use them to predict where hurricanes will make landfall. This actually works pretty well considering what a big hairy system global weather is.
Actually practicing it means valuing experience more highly than theories.
Type error.
What you should actually do is to pay attention to both experience and theories in proportion to how well established they are. You can be wrong about your experiences (especially your past experiences). You can be very wrong about your interpretation of your experiences. You can be even wronger about other people’s experiences. And, yes, theories can be badly wrong too. (And so can your deductions from them.)
This is all kinda obvious, and I suspect you aren’t really saying we should have no theories at all or that we should unquestioningly accept everything that comes dressed as empirical evidence. Rather, you think the balance is wrong. (Right?) But: whose balance? How do you know? E.g., it looks to me as if you are making unwarranted assumptions about my own relative valuation of theory and experience; for all I know, perhaps you’re doing the same to others and this whole thing is an exercise in knocking down straw men. “But I tell you, you should have an open mind and not assume your theories are always right!” “But I tell you, the sun does rise in the east!”
become more open
More than what? If the answer is “always more” then that seems to require that theories are completely valueless, which (see above) I think is an absurd position.
You have been saying a lot about how important it is to look at actual empirical evidence rather than just building theories. Good; let us do so. You are suggesting, in this thread, that people open to spiritual experiences will make better predictions than committed naturalists. Let’s have some empirical evidence. What better predictions are the spiritual-experience guys making? What worse predictions are the naturalists making? Give us some examples!
Or does your elevation of experience over theory only apply to other people’s theories?
If I have a full-blown scientific theory, you can put it into a big computer and take lots of measurements and use them to predict where hurricanes will make landfall. This actually works pretty well considering what a big hairy system global weather is.
There are multiple issues here. You can throw a bunch of weather data into a machine-learning algorithm and get results even if you don’t have a good scientific theory. I don’t need a commitment to the underlying structure of the weather, or to decide whether it’s made of atoms or of air/water/fire/earth.
If the machine-learning algorithm includes a node for which you don’t have any reductionist reason to think it useful for predicting the weather, I don’t think you should cut that node when the model with the node fits the data better.
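The keep-the-unexplained-node criterion can be made concrete with a toy sketch (synthetic data; every variable name and number is invented for illustration): keep a feature if it improves held-out error, regardless of whether we have a mechanistic story for it.

```python
# Sketch: a purely empirical criterion for keeping a model feature.
# If an unexplained "mystery" input improves held-out error, keep it,
# even without a reductionist story for why it helps. Synthetic data.
import random

random.seed(0)

def make_data(n):
    data = []
    for _ in range(n):
        pressure = random.gauss(0, 1)
        mystery = random.gauss(0, 1)   # predictive, mechanism unknown
        temp = 2.0 * pressure + 1.0 * mystery + random.gauss(0, 0.5)
        data.append((pressure, mystery, temp))
    return data

def coef(xs, ys):
    # Least-squares slope through the origin (all inputs are zero-mean).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

train, held_out = make_data(2000), make_data(500)
p, m, t = (list(c) for c in zip(*train))
b_p = coef(p, t)                        # model without the mystery node
residual = [ti - b_p * pi for pi, ti in zip(p, t)]
b_m = coef(m, residual)                 # extra node fitted on residuals

def mse(data, use_mystery):
    errs = []
    for pressure, mystery, temp in data:
        pred = b_p * pressure + (b_m * mystery if use_mystery else 0.0)
        errs.append((temp - pred) ** 2)
    return sum(errs) / len(errs)

print(mse(held_out, False) > mse(held_out, True))  # True: keep the node
```

The test is entirely out-of-sample fit; nothing in the procedure asks why the mystery input helps.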
Secondly, we have access to information about our bodies through perception in a way that we don’t have for the weather.
Let’s have some empirical evidence. What better predictions are the spiritual-experience guys making? What worse predictions are the naturalists making? Give us some examples!
I can’t give you experiences via this medium. When I speak about the value of experience, I mean actual experience.
More practically, I can’t effectively tell you a story about a territory for which you don’t have a map. You can reach for maps that you know but don’t believe in, like “ghosts did it”, but that doesn’t help.
Imagine I tell you a story about a card magician. His audience makes all sorts of predictions that turn out to be wrong. I could tell you about the experience the audience has and how things that violate their reductionism-driven predictions constantly happen. Then you would tell me: “But the magician isn’t really doing it, that example doesn’t count”, or “Please tell me what the magician is really doing”. If I tried to explain a card trick, that wouldn’t shift your underlying beliefs at all.
If I tell you “A workshop facilitator can hold the support point for the movement of the whole room”, then apart from “A workshop facilitator” you would likely get a different meaning for every following word than the one intended, because you lack the relevant mental map to make sense of the sentence.
Or does your elevation of experience over theory only apply to other people’s theories?
No, it’s certainly also something I practice myself.
You can throw a bunch of weather data into a machine learning algorithm and get results even if you don’t have a good scientific theory.
But can you get results as good as you can get with the theory? I notice that the world’s major meteorological offices all seem to have big simulations rather than throwing everything into a machine learning algorithm and hoping for the best, so I’m thinking probably not.
I can’t give you experiences via this medium.
I’m not asking you to give me experiences. I’m asking you to be more specific about these allegedly better predictions you say people can make if they are less committed to naturalism.
I can’t effectively tell you a story about a territory for which you don’t have a map. [...] you lack the relevant mental map to make sense of the sentence.
I don’t think you have any idea what maps I have for what territories. But in any case I’m not asking you to tell me an effective story. I am asking you to give me some examples where being less committed to naturalism has led to better predictions.
If the only examples of better predictions you can find are ones that you can’t even describe without technical jargon, and whose technical jargon you are unable to explain to anyone who hasn’t had the same experiences you have, I hope I will be forgiven for being a bit skeptical about these alleged better predictions.
Then you would tell me: [...]
Please don’t tell me what I would do unless you actually know. It’s rude and it’s counterproductive.
I’d be interested, though, if you’d say a little more about this card magician example, because if you’re suggesting that some such example would support your argument here (which I appreciate you might not be) then again I wonder whether you’re using terms like “reductionist” differently from me, to denote some kind of straw-man naive reductionism that I think few people here would endorse. But maybe not; you haven’t exactly made it clear what you have in mind.
No, it’s certainly also something I practice myself.
I regret to say that in this thread it doesn’t look that way. You are making claims and dispensing advice in the name of empiricism but completely refusing to give a shred of empirical evidence supporting either the claims or the advice.
Please don’t tell me what I would do unless you actually know.
I believe that making explicit predictions is very helpful, and that the possibility of being wrong shouldn’t stop one from making them.
I hope I will be forgiven for being a bit skeptical about these alleged better predictions.
The key here is the meaning of the word ‘skeptic’. If you use the word to mean that you don’t know whether the claims I’m making are true, that’s completely fine. If you mean by it that you reject the claims, then I think that’s a wrong conclusion to draw. I don’t think that’s true skepticism.
If you would simply believe in less stuff, I would be okay with that outcome. You don’t need to believe that the specific claim I made is true. There might be a context in the future where you have experiences that verify what I told you, but I’m okay with the fact that you haven’t yet had them.
When I say “Don’t reject theories because you believe them to be impossible based on reductionist reasoning, and instead be open (which is something different from accepting)”, I’m advocating skepticism.
You are making claims and dispensing advice in the name of empiricism but completely refusing to give a shred of empirical evidence supporting either the claims or the advice.
I believe that empirical evidence is about actually experiencing something, and that’s not something I can give you. I would prefer to live in a world where I could transfer the evidence I have for my beliefs over the internet, but I don’t believe I live in such a world.
I’m also okay with pluralism where other people don’t believe in what I believe.
While thinking about specific examples: what do you think the average person will say when I ask them, “Is it possible to perceive the sound of silence in a way that’s different from simply hearing the absence of sounds?”
I believe that making explicit predictions is very helpful
Sure. I suggest that when you make explicit predictions about someone you are in conversation with, you take the trouble to (1) make your level of confidence explicit and (2) acknowledge that you are extrapolating and could be wrong. Because otherwise you are at risk of being obnoxiously rude, and you are likely to be wrong.
The key here is the meaning of the word ‘skeptic’.
What I meant on this occasion is that (1) you have given me no reason to believe the confident-sounding claims you are making about better predictions, (2) I think it likely that if you had actual good support for those claims you would be showing some of it, and (3) on the whole I think it very likely that in fact those claims are false. But of course I don’t know they’re false.
(You made some remarks earlier about mental maps I allegedly don’t have. Here’s something you seem to be lacking: you write as if my only options are “believe true”, “believe false”, and “no opinion”, but in fact there are many more. If I think there’s a 40% chance that you actually have something a reasonable person could regard as good evidence that less-naturalist people make better predictions in any situations it’s reasonable to care about, and a 20% chance that in fact less-naturalist people do make better predictions in any situations it’s reasonable to care about—have I “rejected” your claims, or just “don’t know whether the claims are true”? I suggest: not exactly either.)
instead be open (which is something different from accepting)
I’m afraid you are still failing to be clear. (Whether the problem is that you aren’t expressing yourself clearly, or that you aren’t thinking clearly, I don’t know.)
If “reject theories” and “believe them to be impossible” mean “consider them certainly false”, then: that’s just not a thing I do, and it’s not a thing the standard-issue LW position advocates, and it’s not something any good reasoner should be doing in any but the most extreme cases. If you’re arguing against that then you are fighting a straw man.
If those phrases mean “consider them at least a bit less likely”, then: Yup, I do that, and I endorse it, and I expect others around here to do so too—and nothing you have said has offered the slightest vestige of a reason to think there’s anything wrong with that.
If they mean something intermediate, then for what you say to be any use you need to give some indication of what intermediate thing they mean. You think reductionists (or naturalists, or whatever other term you prefer on any given occasion) are too confident about naturalism, that they’re giving too much weight to their theoretical understanding of the universe when making predictions. But you seem astonishingly unwilling to be any more specific than that. You won’t give examples. You won’t say what level of confidence, what degree of weight, might be appropriate. You certainly aren’t prepared to make any attempt at communicating any reasons you might have for thinking this. All you’re apparently willing to do is to say: “booo, these people are wronger than I am”.
What possible use is that to anyone else?
I believe that empirical evidence is about actually experiencing something and that’s not something I can give you.
Let’s be clear here about what I was asking for. I’m not asking for you to transfer (say) some spiritual experience from your mind to mine. We’re one level of abstraction up from that. I’m asking for examples of predictions that more-naturalist, more-reductionist people get wronger than less-naturalist, less-reductionist people.
what do you think will the average person say when I ask them [...]
I don’t know. There aren’t many average people here. What I would say if asked that question is something like: “For sure there are multiple different possible experiences of not-sound—e.g., being in an anechoic chamber, having your eardrums destroyed, having the nerves joining ears to brain severed, being completely deaf from birth, maybe surrounding yourself with very predictable sound and training yourself not to notice it—and multiple different ways to experience any of those things—e.g., you can attend to things other than the soundlessness, or attend to the soundlessness in different ways. Whether I’d call any of the possibilities ‘perceiving the sound of silence’, I don’t know; would you care to say more about what you mean by that?”
And I would give maybe 60:40 odds in favour of your having something interesting to say about silence, or perception, or experience, or something, rather than merely emitting deep-sounding word salad.
Were you by any remote chance intending that this might lead to some actual examples of predictions that more-committed naturalists tend to get wronger? That would be interesting.
Because otherwise you are at risk of being obnoxiously rude, and you are likely to be wrong.
I think norms of conversation that prevent honest communication by labeling it as rude are not useful for discussions that are about learning about the world.
“You should express different beliefs because your beliefs are rude” kills an atmosphere of learning.
Of course, managing the resulting emotions with empathy is something that’s much easier in person, and that might very well prevent anything positive from happening in this online conversation.
I’m afraid you are still failing to be clear. (Whether the problem is that you aren’t expressing yourself clearly, or that you aren’t thinking clearly, I don’t know.)
The problem is that I’m referring to concepts that are likely not in your map.
I know that various people have taken months of in-person teaching to get the concepts to which I’m referring, so it’s not surprising to me that the ideas don’t feel clear to you. If what I’m saying felt clear to you, you would ignore what I’m saying. Successfully pointing somewhere that’s outside of your present map feels inherently unclear. For me it’s a success that you don’t feel like I mean one of the things that are inside your map.
Whether I’d call any of the possibilities ‘perceiving the sound of silence’, I don’t know; would you care to say more about what you mean by that?”
At one of the meditations I led in an LW context, I made a point of focusing on perceiving silence as something besides simply the absence of sound. Afterwards I checked with the person in the room who I predicted was least likely to have gotten something from the experience, and they did experience a silence that was distinct from the absence of sound.
It’s no big shiny effect, but I would suspect that many committed naturalists think silence = absence of sound, and that any suggestion otherwise is deep-sounding word salad. The person developed a new phenomenological category for listening to silence that’s distinct from not hearing sounds.
Now, that’s an experience I gave the person in a 20-minute meditation, and it wasn’t the only thing I did in those 20 minutes. Over multiple days, especially with a teacher more skilled than I am at the moment, more new experiences are possible.
“You should express different beliefs because your beliefs are rude” kills an atmosphere of learning.
Perhaps I wasn’t clear; I certainly wasn’t suggesting you should say things you don’t believe for fear of rudeness. I was suggesting you shouldn’t make baseless claims about other people for fear of rudeness. Actually, I think there are more important reasons than rudeness (making confident false statements can mislead others or even yourself), but your comments about making explicit predictions led me to suspect that you’d be unmoved by them.
The problem is that I’m referring to concepts that are likely not in your map.
Perhaps that’s the problem. Or perhaps the problem is that you aren’t even trying to be understood. “You guys are making worse predictions than you would if you thought like me.” Oh, that’s interesting; what predictions? “There’s no point saying; you don’t have the necessary concepts.” Oh, what concepts? “There’s no point saying; you wouldn’t understand.” Well, you might be right, but how can a conversation like this possibly be any use to anyone? If indeed you know ahead of time that no one who disagrees with you is capable of understanding what you say without lengthy in-person training, what is the point of saying it?
listening to silence
OK, so let’s take a look at what’s happened here. The question is, if I understand you right, whether committed LW-style naturalist reductionists make worse predictions than you do about whether there’s scope for listening in a quiet room to produce something subjectively different from mere not-hearing-sound.
We’ve got exactly two data points here. One: you. Unfortunately, you haven’t told us what your prediction ahead of time actually was, but you say that the person you thought least likely to have had that experience did in fact have it, which doesn’t sound like a big predictive success to me. (Though it could have been, if you thought they were 95% likely to have the experience and others in the room more like 99%.) Two: me. If you read what I wrote you will see that the first thing I said was “For sure there are multiple different possible experiences of not-sound”, and I commented specifically that attending to the not-sound in different ways makes a difference. That looks like a straightforwardly correct prediction to me. I said I wasn’t sure whether that was what you meant by “perceiving the sound of silence”; i.e., I kept my mind open about things I wasn’t in a position to know. That looks to me like what you’re claiming people should do and naturalists are bad at.
So, maybe I’m missing something, but so far this example doesn’t seem like a triumphant success for the “materialists make bad predictions” position.
I would suspect that many committed naturalists think [...]
First, maybe you should apply some of that empiricism you like to talk about and notice that when you actually put the question to a committed naturalist you didn’t get that response.
Second, it seems to me—in fact it seems obvious to me—that there’s no actual inconsistency between “silence is just the absence of sound” and “if you tell people to listen to silence they often find that a novel experience and say it’s more than the absence of sound”. Those are two almost completely unrelated propositions.
First, maybe you should apply some of that empiricism you like to talk about and notice that when you actually put the question to a committed naturalist you didn’t get that response.
I did apply empiricism, in the sense that I made a prediction that it would be worthless to try to give you a specific example, and indeed I find that it’s worthless.
Or perhaps the problem is that you aren’t even trying to be understood. “You guys are making worse predictions than you would if you thought like me.” Oh, that’s interesting; what predictions? “There’s no point saying; you don’t have the necessary concepts.” Oh, what concepts? “There’s no point saying; you wouldn’t understand.”
Leading to the question
Well, you might be right, but how can a conversation like this possibly be any use to anyone?
A little later...
But generally writing more about the purpose of this conversation would only open more issues that I can’t fully explain.
If what I’m saying felt clear to you, you would ignore what I’m saying.
We’re all empiricists here, so let’s run an experiment. You’ve got this theory that gjm won’t understand if you try to explain. How ’bout you stop rehashing that, actually try to explain some of those technical terms you mentioned earlier, and see how your theory holds up?
If you train calibration you can generalize without theories.
That is a rather astonishing claim. What does achieving a 60% success rate on yes-no decisions when I am 60% confident have to do with extrapolation without theories?
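For concreteness, here is a toy sketch of what “training calibration” is measuring; the (confidence, outcome) pairs below are invented for illustration:

```python
# Toy sketch of measuring calibration on yes/no predictions.
# Each pair is (stated confidence, was the prediction correct?);
# all values are invented for illustration.
predictions = [
    (0.6, True), (0.6, True), (0.6, False), (0.6, True), (0.6, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

def calibration_by_confidence(preds):
    """Group predictions by stated confidence and return the observed
    success rate for each confidence level."""
    buckets = {}
    for confidence, correct in preds:
        buckets.setdefault(confidence, []).append(correct)
    return {c: sum(v) / len(v) for c, v in buckets.items()}

# A well-calibrated predictor's observed success rate matches the stated
# confidence: here the 0.6 bucket comes out at 0.6, the 0.9 bucket at 0.8.
print(calibration_by_confidence(predictions))
```

Being well calibrated in this sense says nothing, by itself, about where the predictions come from, which is the point in dispute here.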
I think that there are cases where commitment to naturalism leads to people making worse predictions than people who are committed to empiricism and simply letting the data speak for itself.
Reductionism doesn’t mean “is currently being explained by being reduced to simpler ideas”. It’s closer to “can potentially be explained by being reduced to simpler ideas”. Testing hypotheses in general is neither reductionist nor anti-reductionist, although there could be anti-reductionist ways of generating the hypotheses. If you think that differences in vitamin D3 ultimately will depend on some molecular cause, you’re fine. If you think differences in vitamin D3 will just depend on the time of day because there’s a special physical law dealing with vitamin D3 and time of day and this physical law has no components, you’re not.
In other words, you’re overstating what counts as anti-reductionist in order to make spiritual experiences, which actually are anti-reductionist in practice, look good.
In other words, you’re overstating what counts as anti-reductionist in order to make spiritual experiences, which actually are anti-reductionist in practice, look good.
You are hiding behind definitions of words while ignoring why our society funds things the way it does.
I care about the predictions that people who are committed to certain ideas make. I don’t care about whether a position is justifiable under rationalism with definition X.
Then let me phrase it without using definitions: You’re classifying “vitamin D3 response depends on time of day” with “spiritual experiences” in order to make spiritual experiences look good. They aren’t similar.
You’re not classifying it as a spiritual experience, but you’re classifying it in the same category as a spiritual experience. You’re saying that both of them are “empiric”. You imply that since taking vitamin D3 at different times of day is empiric, and nobody could object to that, and spiritual experiences are empiric too, nobody should object to them either.
But your category “empiric” is so broad that it includes things that aren’t really very similar.
You imply that since taking vitamin D3 at different times of day is empiric,
No. There isn’t something inherently empiric about taking vitamin D3 at a specific time of the day. There’s something empiric about the way that advice gets generated, as opposed to theory-driven drug development that only tests drug candidates for which it has a biochemical target.
spiritual experiences are empiric too, nobody should object to them either.
Objecting to spiritual experience is an interesting choice of words.
Do you mean that if people meditate in a spiritually framed setting, they won’t have experiences?
Do you mean you object in the sense that you think those are bad experiences and people shouldn’t have those experiences?
The way people object to LSD and ban it, because it leads to objectionable experiences?
“Object” here means “object to the use of, as a way of determining things about reality”.
I don’t really care if you like triggering brain malfunctions, but don’t expect me to believe you when you tell me the hallucinations are of real things. And that’s equally true whether you triggered the brain malfunction through a drug or a “spiritual experience”. Billions of people believe that when Mohammed starved himself and went into the desert, the angel Gabriel that he saw really was there. I do not.
I don’t really care if you like triggering brain malfunctions, but don’t expect me to believe you when you tell me the hallucinations are of real things.
The question of whether the object of a hallucination is “real” is a question about having a theory about the world. I advocate against focusing on that question. I advocate to focus on whether you can make reliable predictions.
Yes, that’s not an easy concept to understand if you are bound up with thinking the important and meaningful question is whether or not the angel Gabriel was really there.
It’s typical for the new atheist crowd to focus on those questions, and because you are emotionally invested in that question you pattern-match me into a category that’s not the position I advocate.
Yes, that’s not an easy concept to understand if you are bound up with thinking the important and meaningful question is whether or not the angel Gabriel was really there.
Whether the angel Gabriel was really there is inherently the most important and meaningful question because how people act based on that can leave me dead. Whether something leaves me dead is pretty important. You can’t just say it isn’t important and make it become unimportant.
Whether the angel Gabriel was really there is inherently the most important and meaningful question because how people act based on that can leave me dead.
Very few people act on whether or not the angel Gabriel was really there. A lot of people act on whether or not they think the angel Gabriel was really there. If James thinks that Gabriel was there, then James will act as if Gabriel had been there; if John thinks Gabriel wasn’t there, then John will act as if Gabriel hadn’t been there.
You are replying as though “X is an important question” means “the truth value of X has important effects”, but in this context it really means “knowing the truth value of X has important effects”. The fact that people will act based on what they think the answer is, rather than the actual answer, is irrelevant to the latter parsing.
Whether the angel Gabriel was really there is inherently the most important and meaningful question because how people act based on that can leave me dead.
Yes, you care about the question and it’s very meaningful to you.
At the same time it’s valuable to understand that there are other people who don’t care about the question and care about different things, and that you won’t understand them if you project your own values about which questions are meaningful onto them.
That’s a fully general argument—you could say it about the importance of anything.
It has nothing to do specifically with hallucinations.
If you just mean that it’s unimportant whether something is a hallucination because everything is unimportant to someone, then I can’t disagree. But you don’t seem to have meant that.
It sounds like you made statements specifically about hallucinations and atheists and only retreated to “well, everything is unimportant to someone” when challenged.
I haven’t used the word hallucinations or intended to refer to that concept before you did. I also haven’t said atheists but new atheists, which is a term that refers to a subgroup of atheists.
That’s a fully general argument—you could say it about the importance of anything.
If someone goes off-topic and you tell them that they are off-topic, it’s indeed a quite general argument. That doesn’t make it wrong.
That’s not a standard term, so with no way to distinguish them, anything you say about it just ends up being a statement about atheists.
It is a standard term; the fact that you don’t know it doesn’t mean that it doesn’t have a regular usage.
It’s standard in the sense that it has a Wikipedia page: https://en.wikipedia.org/wiki/New_Atheism
If you can’t follow me when I talk about concepts that are easily understandable and well documented on Wikipedia, there’s no hope that you’ll get a glimpse when I talk about things in this discussion that are not easy to understand. No hope for medium-level concepts like the nature of modern drug development and the QS movement.
None at all for hard concepts like living knowledge, body knowledge, beginner’s mind, support points, effects of ideology and phenomenological investigation.
Okay, I just read that page. It’s odd, then, that I haven’t heard of “new atheism”, even though I have heard of most of the people mentioned on that page. It’s also odd that nobody on that page is quoted as calling themselves a new atheist. Is this a term used by people other than their detractors?
This link suggests that the term arose from “journalistic commentary on the contents and impacts of their books”—that is, they don’t call themselves that and it’s just a label attached by someone else. This doesn’t give me confidence that the label is used for more than just “people I don’t like”.
And while rationalwiki is untrustworthy for a lot of things, the article on new atheism there is decidedly lukewarm on it. “The term “New Atheism” is generally only used in blogs and opinion columns, and is more of a pejorative than a self-descriptor for the New Atheists”.
Do you object to the core idea? That there is, as Wikipedia describes:
A social and political movement that began in the early 2000s in favour of atheism and secularism promoted by a collection of modern atheist writers who have advocated the view that “religion should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever its influence arises”.
If you wanted to take a self-description you could use the term ‘militant atheist’. Richard Dawkins used the phrase in his TED talk, but I would expect that most people would understand it more pejoratively than “new atheism”.
It’s quite worthwhile to distinguish the cluster of new atheists from other atheists. The average atheist in Germany simply doesn’t believe in God. He doesn’t go around arguing that religion should be fought in the way Dawkins et al. do.
The average atheist in Germany doesn’t care very much about the question of whether the angel Gabriel was really there. But people like you do care about that question. It’s useful to have a term for that cluster of beliefs.
Do you object to the core idea? That there is, as Wikipedia describes:
I object to the idea of someone claiming that his opponents are all part of the same group when the targets in question don’t actually identify as part of the same group. Labelling other people this way is highly prone to bias.
If you wanted to take a self-description you could
That’s a self-description of one person, not an assertion about how he should be grouped with other people.
I think it’s reasonable to believe that there are no ontologically basic mental entities because you don’t believe that anybody has demonstrated telepathy.
If, however, you believe that the data supports telepathy, then I find it strange to say “I defy the data, because I don’t believe in ontologically basic mental entities”, as your whole case for there not being ontologically basic mental entities was about there not being telepathy.
I don’t think it’s true for many people that their main reason for not believing in OBMEs is that there appears to be no telepathy. If I disbelieve in OBMEs because I don’t see how to fit them into a reductionist understanding of the world that has, on my view, achieved such stunning empirical success that it would need overwhelming evidence to overturn it, then defying the data when presented with apparent evidence for telepathy isn’t so unreasonable.
(Someone doing that should of course consider possible mechanisms for telepathy that don’t involve OBMEs, and should reconsider their objection to OBMEs if enough apparent evidence for them turns up. I am not defending outright immovability.)
Steam engines weren’t built because of reductionist thinking but because of empirical experimentation. When medicine was reductionism-based instead of empirically based, it is commonly believed that it killed more people than it cured. When it comes to new drugs, 90% of those for which there is reductionist reason to believe they work turn out to be flawed.
I think you get into problems very quickly if you think that only things that you can explain from the ground up exist. Practically, I think it’s very worthwhile to have a state of non-judgement where you let experience speak for itself without committing to any deeper notion of the way things are.
Of course I grant that there are people who deeply believe in the naturalist view of the world and therefore will reject telepathy on those grounds. On the other hand I don’t see why someone who has had a few spiritual experiences and seeks more spiritual experiences should have that commitment, or why he should adopt it based on the reasoning of this article.
It sounds to me like you’re arguing against a straw man. Reductionism doesn’t mean believing the proposition “Nothing exists that I can’t explain from the ground up”. It means a commitment to trying to explain things from the ground up (or, actually, from the top down, but with the intention of getting as near as possible to whatever ground there may be), and to remaining dissatisfied with explanations in so far as they appeal to things whose properties aren’t clearly specified.
You say that as if “reductionist” and “empirical” are opposing ideas somehow. Of course they aren’t; reductionism and empiricism are two of the key ideas that make science work. You do everything you can to find out what actually happens, and you try to build theories, as detailed and bullshit-free as you can, that explain what you’ve found, and then you look for more empirical evidence to help decide between those theories, and then you look for better theories that match what you’ve found, and so on.
Not being empirical is a terrible mistake. It’s not clear exactly what and when you’re talking about, but do you have any grounds for thinking that the bad results you describe were the result of too much reductionism rather than of not enough empiricism?
Most new drugs don’t work, quite true. Do you have any reason to think drug discovery would work better if it were somehow driven by a less reductionist view of how drugs work? Would you, if so, like to be more specific about what you have in mind? (And … has anyone actually done it, saved lots of lives, and got rich?)
Who thinks that? (Thinking that certainly isn’t what I mean by reductionism.)
The article isn’t claiming to make a compelling case for naturalism, so I think Eliezer would agree with the last part of that. As to the first part, it sounds (but maybe I’m misunderstanding) as if you are saying that having had “a few spiritual experiences” constitutes strong evidence against naturalism. It’s probably true that having “spiritual experiences” tends to make people less likely to be naturalists, but it’s not at all clear to me why they are strong evidence against naturalism. There’s nothing in naturalism to suggest that people shouldn’t have such experiences.
(Unless you mean outright miraculous experiences. Those might be very good evidence against naturalism. By an extraordinary coincidence, they also appear to be very rare and to evaporate when examined closely.)
The QS movement is an alternative to reductionism. As a concrete example, I believe that we should fund trials of vitamin D3 in the morning vs. vitamin D3 in the evening based on self-reports that people found vitamin D3 in the morning more helpful. I think those empirical experiences should drive research priorities instead of research priorities being driven by molecular-biological findings.
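As a sketch of the kind of crude first pass such self-report data allows (all scores below are invented for illustration; a real trial would need randomization, blinding, and a proper significance test):

```python
# Hypothetical self-reported helpfulness scores (1-10) from QS-style logs
# for vitamin D3 taken in the morning vs. the evening; all numbers invented.
morning = [7, 8, 6, 9, 7, 8]
evening = [5, 6, 4, 6, 5, 7]

def mean(scores):
    return sum(scores) / len(scores)

# Crude first pass: compare average self-reports. A positive difference is
# the kind of signal that could justify funding a controlled follow-up trial.
difference = mean(morning) - mean(evening)
print(f"morning={mean(morning):.1f} evening={mean(evening):.1f} diff={difference:.1f}")
```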
QS profits a lot from better technical equipment. Additionally, we likely want to get better at developing the phenomenological abilities of select individuals to perceive and write down what goes on in their own bodies. In addition to qualitative descriptions, those people should also make quantitative predictions over various QS metrics and calibrate their credence on those metrics.
The position for which I’m arguing is empiricism. Letting real-world feedback guide your actions instead of being committed to theories. I think that there are cases where commitment to naturalism leads to people making worse predictions than people who are committed to empiricism and simply letting the data speak for itself.
If I take someone with a standard STEM background and put him in an environment conducive to spiritual experiences, I think that the person who’s more open to updating their beliefs through data will make better predictions than one committed to his preconceived notions. In the process, updating would optimally be more about letting go of beliefs than about changing beliefs.
I think perhaps we mean very different things by “reductionism”. I see absolutely no conflict between the QS movement and reductionism.
Fine with me, at least in principle. (Whether I’d actually be on board with funding those trials would depend on how much money is available, what other promising things there are to spend money on, etc.; it could be that those other things have stronger evidence that they’re worth funding.)
I don’t see why we shouldn’t have both. Research should be directed at things that, on the basis of the available evidence, have the best chance of producing the most valuable results. Some of the available evidence comes from direct observation. Some comes from theoretical analysis or modelling of molecular-bio systems. Different kinds of evidence will be differentially relevant to different kinds of desired effects. (If you want to maximize your chance of living to 100, you may do best to look at lifestyles of different communities. If you want to maximize your chance of living to 200, you probably need something—no one has a very good idea what yet—for which direct empirical evidence doesn’t exist yet, because no one is living to anything like 200. Maybe what’s needed is some kind of funky nanotech. If so, it’s probably going to need those molecular biologists.)
Splendid. I’m all in favour of empiricism. But again, perhaps we mean different things by that word. You speak of not being committed to theories, but the further we go in that direction the less ability we have to generalize the things we discover empirically. To make any statement that goes beyond just repeating simple empirical observations we’ve already made, we need theories. Our attachment to our theories shouldn’t go beyond the evidence we have for them. We should be on the lookout for signs that our theories are wrong. But that doesn’t mean giving up on theories; it just means being rational about them.
If the evidence for (say) ghosts is good enough, I will (I hope) start believing in ghosts. If it’s not quite that good, I may start believing that the world behaves kinda as if there are ghosts—which is probably enough to generate those better predictions you say more open-minded people will have.
Right now, it looks to me as if quite-firmly-committed naturalism generates pretty good predictions. Would you like to be more specific about some things you think naturalists get wrong?
The question isn’t “why shouldn’t we have both”; it’s rather “why don’t we have both in a way that’s reasonably founded”.
If you train calibration you can generalize without theories. Generalizing isn’t something that you need to do explicitly through theories. Phenomenological investigation provides a way to have knowledge that your brain can generalize on the System 1 level.
That’s not the direction in which I’m arguing. I’m arguing that you should focus on predictions instead of on concepts like whether or not ghosts exist.
Being for empiricism is not the same thing as practicing it. Actually practicing it means valuing experience more highly than theories.
The framing “things that naturalists get wrong” suggests that I think “naturalists get belief X wrong and should believe Y instead”. That’s not the main position that I advocate. Studies consistently show that people get things wrong by being overconfident. The key is to become more open to accepting that reality tends to unfold in ways that your theories wouldn’t predict.
OK, so your response to “system 1 makes a lot of big mistakes” is not “get system 2 in charge in those situations” but “try to train system 1 to do better”. Once again I have to ask: why not both?
Now, let’s apply some empiricism to your suggestion here. Making theories, making them precise, getting detailed predictions out of them and comparing with experiment has been at the heart of the scientific enterprise since, say, Galileo. It’s worked incredibly well. Not instead of empirical investigation; not instead of well-trained Systems 1 generating intuitive predictions and ideas.
What do we have on the other side? Perhaps “be more specific about some things naturalists get wrong” was the wrong challenge. But so far everything you’ve offered is, well, theories. Maybe you’d rather call them predictions. But what they clearly aren’t is empirical evidence.
First of all, if you read the sentence I wrote immediately after the one you quoted, you will see that I endorsed exactly that idea before you mentioned it: given substantial evidence for ghosts but not enough to justify a change of overall theory, I should adopt the belief that the world behaves in something like the way it would if it contained ghosts.
Second: it turns out that concepts are really useful. They are especially useful when more than one person is involved. Suppose I am good at predicting the weather. If all I have is a well-trained system 1, I can’t communicate my expertise to you at all; I can just demonstrate it and hope you catch on. If I have half-baked folk theories, I can say “when the sky is such-and-such a colour the weather the following day tends to be such-and-such”, and you can test how well those claims hold up and use them to predict a bit yourself. If I have a full-blown scientific theory, you can put it into a big computer and take lots of measurements and use them to predict where hurricanes will make landfall. This actually works pretty well considering what a big hairy system global weather is.
Type error.
What you should actually do is to pay attention to both experience and theories in proportion to how well established they are. You can be wrong about your experiences (especially your past experiences). You can be very wrong about your interpretation of your experiences. You can be even wronger about other people’s experiences. And, yes, theories can be badly wrong too. (And so can your deductions from them.)
This is all kinda obvious, and I suspect you aren’t really saying we should have no theories at all or that we should unquestioningly accept everything that comes dressed as empirical evidence. Rather, you think the balance is wrong. (Right?) But: whose balance? How do you know? E.g., it looks to me as if you are making unwarranted assumptions about my own relative valuation of theory and experience; for all I know, perhaps you’re doing the same to others and this whole thing is an exercise in knocking down straw men. “But I tell you, you should have an open mind and not assume your theories are always right!” “But I tell you, the sun does rise in the east!”
More than what? If the answer is “always more” then that seems to require that theories are completely valueless, which (see above) I think is an absurd position.
You have been saying a lot about how important it is to look at actual empirical evidence rather than just building theories. Good; let us do so. You are suggesting, in this thread, that people open to spiritual experiences will make better predictions than committed naturalists. Let’s have some empirical evidence. What better predictions are the spiritual-experience guys making? What worse predictions are the naturalists making? Give us some examples!
Or does your elevation of experience over theory only apply to other people’s theories?
There are multiple issues here. You can throw a bunch of weather data into a machine learning algorithm and get results even if you don’t have a good scientific theory. I don’t need a commitment to the underlying structure of the weather, or to decide whether it’s atoms or air/water/fire/earth. If the machine learning algorithm includes a node for which you don’t have any reductionist reason to think it’s useful for predicting the weather, I don’t think you should cut that node when the model with the node fits the data better.
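A toy sketch of that point, using 1-nearest-neighbour as a stand-in for “a machine learning algorithm” (all readings invented for illustration): the purely empirical rule keeps a feature whenever it improves held-out predictions, whether or not we can explain why it helps.

```python
# Toy illustration of "let the data speak": keep whichever feature set
# predicts a held-out set better, regardless of whether we have a
# reductionist story for *why* a feature helps. All readings are invented.

def nearest_neighbour_error(train, test):
    """Error rate of 1-nearest-neighbour prediction on (features, outcome) pairs."""
    errors = 0
    for features, outcome in test:
        _, predicted = min(
            train,
            key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], features)),
        )
        errors += predicted != outcome
    return errors / len(test)

# Feature vectors: (humidity, mystery_reading); outcome: did it rain next day?
data = [((0.5, 0.9), True), ((0.6, 0.8), True),
        ((0.55, 0.1), False), ((0.45, 0.2), False),   # training
        ((0.52, 0.85), True), ((0.58, 0.15), False)]  # held out
train, test = data[:4], data[4:]

with_mystery = nearest_neighbour_error(train, test)
without_mystery = nearest_neighbour_error(
    [((h,), r) for (h, _), r in train],
    [((h,), r) for (h, _), r in test])

# The empirical rule keeps the mystery reading because it lowers held-out error.
print(with_mystery, without_mystery)  # here: 0.0 vs 0.5
```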
Secondly, we have access to information about our bodies through perception in a way that we don’t for the weather.
I can’t give you experiences via this medium. When I speak about the value of experience I mean using actual experience.
More practically I can’t effectively tell you a story about a territory for which you don’t have a map. You can reach for maps that you know but in which you don’t believe like
ghosts did it
but that doesn’t help. Imagine I tell you a story of a card magician. His audience makes all sorts of predictions that turn out to be wrong. I could tell you about the experience the audience has and how things that violate their reductionist-driven predictions constantly happen. Then you would tell me: “But the magician isn’t really doing it, that example doesn’t count”, “Please tell me what the magician is really doing”. If I tried to explain a card trick, that wouldn’t shift your underlying beliefs at all.
If I tell you: “A workshop facilitator can hold the support point for the movement of the whole room”, then apart from “A workshop facilitator” you would likely get a different meaning for every following word than the one that’s intended because you lack the relevant mental map to make sense of the sentence.
No, it’s certainly also something I practice myself.
But can you get results as good as you can get with the theory? I notice that the world’s major meteorological offices all seem to have big simulations rather than throwing everything into a machine learning algorithm and hoping for the best, so I’m thinking probably not.
I’m not asking you to give me experiences. I’m asking you to be more specific about these allegedly better predictions you say people can make if they are less committed to naturalism.
I don’t think you have any idea what maps I have for what territories. But in any case I’m not asking you to tell me an effective story. I am asking you to give me some examples where being less committed to naturalism has led to better predictions.
If the only examples of better predictions you can find are ones that you can’t even describe without technical jargon, and whose technical jargon you are unable to explain to anyone who hasn’t had the same experiences you have, I hope I will be forgiven for being a bit skeptical about these alleged better predictions.
Please don’t tell me what I would do unless you actually know. It’s rude and it’s counterproductive.
I’d be interested, though, if you’d say a little more about this card magician example, because if you’re suggesting that some such example would support your argument here (which I appreciate you might not be) then again I wonder whether you’re using terms like “reductionist” differently from me, to denote some kind of straw-man naive reductionism that I think few people here would endorse. But maybe not; you haven’t exactly made it clear what you have in mind.
I regret to say that in this thread it doesn’t look that way. You are making claims and dispensing advice in the name of empiricism but completely refusing to give a shred of empirical evidence supporting either the claims or the advice.
I believe that making explicit predictions is very helpful, and that the fact that one might theoretically be wrong shouldn’t stop prediction-making.
The key here is the meaning of the word ‘skeptic’. If you use the word meaning that you don’t know whether the claims I’m making are true, that’s completely fine. If you mean by it that you reject the claims, then I think that’s a wrong conclusion to draw. I don’t think that’s true skepticism.
If you simply believed in less stuff, I would be okay with that outcome. You don’t need to believe that the specific claim that I made is true. There might be a context in the future where you have experiences that verify what I told you, but I’m okay with the fact that you haven’t yet had them.
When I’m saying “Don’t reject theories because you believe them to be impossible based on reductionist reasoning, and instead be open (which is something different from accepting)”, I’m advocating skepticism.
I believe that empirical evidence is about actually experiencing something, and that’s not something I can give you. I would prefer to live in a world where I could transfer the evidence that I have for believing what I believe over the internet, but I don’t believe I live in such a world.
I’m also okay with pluralism where other people don’t believe in what I believe.
While thinking about specific examples: what do you think the average person will say when I ask them, “Is it possible to perceive the sound of silence in a way that’s different from simply hearing the absence of sounds?”
Sure. I suggest that when you make explicit predictions about someone you are in conversation with, you take the trouble to (1) make your level of confidence explicit and (2) acknowledge that you are extrapolating and could be wrong. Because otherwise you are at risk of being obnoxiously rude, and you are likely to be wrong.
What I meant on this occasion is that (1) you have given me no reason to believe the confident-sounding claims you are making about better predictions, (2) I think it likely that if you had actual good support for those claims you would be showing some of it, and (3) on the whole I think it very likely that in fact those claims are false. But of course I don’t know they’re false.
(You made some remarks earlier about mental maps I allegedly don’t have. Here’s something you seem to be lacking: you write as if my only options are “believe true”, “believe false”, and “no opinion”, but in fact there are many more. If I think there’s a 40% chance that you actually have something a reasonable person could regard as good evidence that less-naturalist people make better predictions in any situations it’s reasonable to care about, and a 20% chance that in fact less-naturalist people do make better predictions in any situations it’s reasonable to care about—have I “rejected” your claims, or just “don’t know whether the claims are true”? I suggest: not exactly either.)
I’m afraid you are still failing to be clear. (Whether the problem is that you aren’t expressing yourself clearly, or that you aren’t thinking clearly, I don’t know.)
If “reject theories” and “believe them to be impossible” mean “consider them certainly false”, then: that’s just not a thing I do, and it’s not a thing the standard-issue LW position advocates, and it’s not something any good reasoner should be doing in any but the most extreme cases. If you’re arguing against that then you are fighting a straw man.
If those phrases mean “consider them at least a bit less likely”, then: Yup, I do that, and I endorse it, and I expect others around here to do so too—and nothing you have said has offered the slightest vestige of a reason to think there’s anything wrong with that.
If they mean something intermediate, then for what you say to be any use you need to give some indication of what intermediate thing they mean. You think reductionists (or naturalists, or whatever other term you prefer on any given occasion) are too confident about naturalism, that they’re giving too much weight to their theoretical understanding of the universe when making predictions. But you seem astonishingly unwilling to be any more specific than that. You won’t give examples. You won’t say what level of confidence, what degree of weight, might be appropriate. You certainly aren’t prepared to make any attempt at communicating any reasons you might have for thinking this. All you’re apparently willing to do is to say: “booo, these people are wronger than I am”.
What possible use is that to anyone else?
Let’s be clear here about what I was asking for. I’m not asking for you to transfer (say) some spiritual experience from your mind to mine. We’re one level of abstraction up from that. I’m asking for examples of predictions that more-naturalist, more-reductionist people get wronger than less-naturalist, less-reductionist people.
I don’t know. There aren’t many average people here. What I would say if asked that question is something like: “For sure there are multiple different possible experiences of not-sound—e.g., being in an anechoic chamber, having your eardrums destroyed, having the nerves joining ears to brain severed, being completely deaf from birth, maybe surrounding yourself with very predictable sound and training yourself not to notice it—and multiple different ways to experience any of those things—e.g., you can attend to things other than the soundlessness, or attend to the soundlessness in different ways. Whether I’d call any of the possibilities ‘perceiving the sound of silence’, I don’t know; would you care to say more about what you mean by that?”
And I would give maybe 60:40 odds in favour of your having something interesting to say about silence, or perception, or experience, or something, rather than merely emitting deep-sounding word salad.
Were you by any remote chance intending that this might lead to some actual examples of predictions that more-committed naturalists tend to get wronger? That would be interesting.
I think norms of conversation that prevent honest communication by labeling it as rude are not useful for discussions aimed at learning about the world. “You should express different beliefs, because your beliefs are rude” kills an atmosphere of learning.
Of course, managing the resulting emotions with empathy is much easier in person, and the lack of that may well prevent anything positive from happening in this online conversation.
The problem is that I’m referring to concepts that are likely not in your map. I know that various people have needed months of in-person teaching to grasp the concepts I’m referring to, so it’s not surprising to me that the ideas don’t feel clear to you. If what I’m saying felt clear to you, you would ignore it. Successfully pointing somewhere outside your present map feels inherently unclear. For me it’s a success that you don’t take me to mean one of the things that are already inside your map.
At one of the meditations I led in an LW context, I made a point of focusing on the perception of silence as something besides the simple absence of sound. Afterwards I checked with the person in the room whom I had predicted to be least likely to get something from the experience, and they did experience a silence that was distinct from the absence of sound.
It’s no big shiny effect, but I would suspect that many committed naturalists think “silence = absence of sound”, and that any suggestion that it isn’t is emitting deep-sounding word salad. The person developed a new phenomenological category for “listening to silence” that’s distinct from “not hearing sounds”.
Now, that’s an experience I gave the person in a 20-minute meditation, and it wasn’t the only thing I did in those 20 minutes. Over multiple days, especially with a teacher who has more skill than I have at the moment, more new experiences are possible.
Perhaps I wasn’t clear; I certainly wasn’t suggesting you should say things you don’t believe for fear of rudeness. I was suggesting you shouldn’t make baseless claims about other people for fear of rudeness. Actually, I think there are more important reasons than rudeness (making confident false statements can mislead others or even yourself), but your comments about making explicit predictions led me to suspect that you’d be unmoved by them.
Perhaps that’s the problem. Or perhaps the problem is that you aren’t even trying to be understood. “You guys are making worse predictions than you would if you thought like me.” Oh, that’s interesting; what predictions? “There’s no point saying; you don’t have the necessary concepts.” Oh, what concepts? “There’s no point saying; you wouldn’t understand.” Well, you might be right, but how can a conversation like this possibly be any use to anyone? If indeed you know ahead of time that no one who disagrees with you is capable of understanding what you say without lengthy in-person training, what is the point of saying it?
OK, so let’s take a look at what’s happened here. The question is, if I understand you right, whether committed LW-style naturalist reductionists make worse predictions than you do about whether there’s scope for listening in a quiet room to produce something subjectively different from mere not-hearing-sound.
We’ve got exactly two data points here. One: you. Unfortunately, you haven’t told us what your prediction ahead of time actually was, but you say that the person you thought least likely to have had that experience did in fact have it, which doesn’t sound like a big predictive success to me. (Though it could have been, if you thought they were 95% likely to have the experience and others in the room more like 99%.) Two: me. If you read what I wrote you will see that the first thing I said was “For sure there are multiple different possible experiences of not-sound”, and I commented specifically that attending to the not-sound in different ways makes a difference. That looks like a straightforwardly correct prediction to me. I said I wasn’t sure whether that was what you meant by “perceiving the sound of silence”; i.e., I kept my mind open about things I wasn’t in a position to know. That looks to me like what you’re claiming people should do and naturalists are bad at.
So, maybe I’m missing something, but so far this example doesn’t seem like a triumphant success for the “materialists make bad predictions” position.
First, maybe you should apply some of that empiricism you like to talk about and notice that when you actually put the question to a committed naturalist you didn’t get that response.
Second, it seems to me—in fact it seems obvious to me—that there’s no actual inconsistency between “silence is just the absence of sound” and “if you tell people to listen to silence they often find that a novel experience and say it’s more than the absence of sound”. Those are two almost completely unrelated propositions.
I do apply empiricism, in the sense that I made a prediction that it would be worthless to try to give you a specific example, and indeed I find that it was worthless.
What sort of response would have been evidence of its not being worthless? What are you trying to achieve here?
As far as giving you the example goes, the goal was that the example would help you understand something you haven’t understood before.
But generally writing more about the purpose of this conversation would only open more issues that I can’t fully explain.
Leading to the question
A little later...
It’s turtles all the way down
We’re all empiricists here, so let’s run an experiment. You’ve got this theory that gjm won’t understand if you try to explain. How ’bout you stop rehashing that, actually try to explain some of those technical terms you mentioned earlier, and see how your theory holds up?
That is a rather astonishing claim. What does achieving a 60% success rate on yes-no decisions when I am 60% confident have to do with extrapolation without theories?
Like when?
Reductionism doesn’t mean “is currently being explained by being reduced to simpler ideas”. It’s closer to “can potentially be explained by being reduced to simpler ideas”. Testing hypotheses in general is neither reductionist nor anti-reductionist, although there could be anti-reductionist ways of generating the hypotheses. If you think that differences in vitamin D3 ultimately will depend on some molecular cause, you’re fine. If you think differences in vitamin D3 will just depend on the time of day because there’s a special physical law dealing with vitamin D3 and time of day and this physical law has no components, you’re not.
In other words, you’re overstating what counts as anti-reductionist in order to make spiritual experiences, which actually are anti-reductionist in practice, look good.
You are hiding behind definitions of words while ignoring why our society funds things the way it does. I care about the predictions that people who are committed to certain ideas make. I don’t care about whether a position is justifiable under rationalism with definition X.
Then let me phrase it without using definitions: You’re classifying “vitamin D3 response depends on time of day” with “spiritual experiences” in order to make spiritual experiences look good. They aren’t similar.
If you think I wanted to classify taking vitamin D3 at a different time of day as a spiritual experience, then you haven’t understood my position.
You’re not classifying it as a spiritual experience, but you’re classifying it in the same category as a spiritual experience. You’re saying that both of them are “empiric”. You imply that since taking vitamin D3 at different times of day is empiric, and nobody could object to that, and spiritual experiences are empiric too, nobody should object to them either.
But your category “empiric” is so broad that it includes things that aren’t really very similar.
No. There isn’t something inherently empiric about taking vitamin D3 at a specific time of day. There’s something empiric about the way that advice gets generated, as opposed to theory-driven drug development that only tests drug candidates for which it has a biochemical target.
Objecting to spiritual experience is an interesting choice of words.
Do you mean that if people meditate in a spiritually framed setting, you think they won’t have experiences? Or do you object in the sense that you think those are bad experiences and people shouldn’t have them? The way people object to LSD and ban it, because it leads to objectionable experiences?
“Object” here means “object to the use of, as a way of determining things about reality”.
I don’t really care if you like triggering brain malfunctions, but don’t expect me to believe you when you tell me the hallucinations are of real things. And that’s equally true whether you triggered the brain malfunction through a drug or a “spiritual experience”. Billions of people believe that when Mohammed starved himself and went into the desert, the angel Gabriel that he saw really was there. I do not.
The question of whether the object of a hallucination is “real” is a question about having a theory about the world. I advocate against focusing on that question. I advocate to focus on whether you can make reliable predictions.
Yes, that’s not an easy concept to understand if you are bound up with thinking that the important and meaningful question is whether or not the angel Gabriel was really there. It’s typical for the New Atheist crowd to focus on those questions, and because you are emotionally invested in that question you pattern-match me into a category that’s not the position I advocate.
Whether the angel Gabriel was really there is inherently the most important and meaningful question because how people act based on that can leave me dead. Whether something leaves me dead is pretty important. You can’t just say it isn’t important and make it become unimportant.
Very few people act on whether or not the angel Gabriel was really there. A lot of people act on whether or not they think the angel Gabriel was really there. If James thinks that Gabriel was there, then James will act as if Gabriel had been there; if John thinks Gabriel wasn’t there, then John will act as if Gabriel hadn’t been there.
You are replying as though “X is an important question” means “the truth value of X has important effects”, but in this context it really means “knowing the truth value of X has important effects”. The fact that people will act based on what they think the answer is, rather than the actual answer, is irrelevant to the latter parsing.
Yes, you care about the question and it’s very meaningful to you.
At the same time, it is valuable to understand that there are other people who don’t care about the question and care about different things, and that you won’t understand them if you project your own values about which questions are meaningful onto them.
That’s a fully general argument—you could say it about the importance of anything.
It has nothing to do specifically with hallucinations.
If you just mean that it’s unimportant whether something is a hallucination because everything is unimportant to someone, then I can’t disagree. But you don’t seem to have meant that.
I haven’t used the word “hallucinations” or intended to refer to that concept before you did. I also haven’t said “atheists” but “new atheists”, which is a term that refers to a subgroup of atheists.
If someone goes off-topic and you tell them that they are off-topic, that’s indeed a quite general argument. That doesn’t make it wrong.
You don’t need to use a concept in order for what you say to have implications concerning that concept.
That’s not a standard term, so with no way to distinguish them, anything you say about it just ends up being a statement about atheists.
It is a standard term; the fact that you don’t know it doesn’t mean it doesn’t have a regular usage. It’s standard in the sense that it has a Wikipedia page: https://en.wikipedia.org/wiki/New_Atheism
If you can’t follow me when I talk about concepts that are easily understandable and well documented on Wikipedia, there’s no hope that you’ll get a glimpse when I talk about things in this discussion that are not easy to understand. No hope for medium-level concepts like the nature of modern drug development and the QS. None at all for hard concepts like living knowledge, body knowledge, beginner’s mind, support points, effects of ideology, and phenomenological investigation.
Dude, get over yourself.
Okay, I just read that page. It’s odd, then, that I haven’t heard of “new atheism”, even though I have heard of most of the people mentioned on that page. It’s also odd that nobody on that page is quoted as calling themselves a new atheist. Is this a term used by people other than their detractors?
This link suggests that the term arose from “journalistic commentary on the contents and impacts of their books”—that is, they don’t call themselves that and it’s just a label attached by someone else. This doesn’t give me confidence that the label is used for more than just “people I don’t like”.
And while rationalwiki is untrustworthy for a lot of things, the article on new atheism there is decidedly lukewarm on it. “The term “New Atheism” is generally only used in blogs and opinion columns, and is more of a pejorative than a self-descriptor for the New Atheists”.
Do you object to the core idea? That there’s as Wikipedia describes:
If you wanted a self-description, you could use the term “militant atheist”. Richard Dawkins used the phrase in his TED talk, but I would expect most people to understand it more pejoratively than “new atheism”.
It’s quite worthwhile to distinguish the cluster of new atheists from other atheists. The average atheist in Germany simply doesn’t believe in God. He doesn’t go around arguing that religion should be fought in the way Dawkins et al do. The average atheist in Germany doesn’t care very much about the question of whether the angel Gabriel was really there. But people like you care about the question. It’s useful to have a term for that cluster of beliefs.
I object to the idea of someone claiming that his opponents are all part of the same group when the targets in question don’t actually identify as part of the same group. Labelling other people this way is highly prone to bias.
That’s a self-description of one person, not an assertion about how he should be grouped with other people.