Well, there’s also the fact that “true”[1] ontological updates can look like woo prior to the update. Since you can’t reliably tell ahead of time whether your ontology is too sparse for what you’re trying to understand, truth-seeking requires you find some way of dealing with frames that are “obviously wrong” without just rejecting them. That’s not simply a matter of salvaging truth from such frames.
Separate from that:
I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it’s leading people in over their heads.
I for one am totally against this kind of policy. People taking responsibility for one another’s epistemic states is saviorist fuckery that makes it hard to talk or think. It wraps people’s anxiety around what everyone else is doing and how to convince or compel them to follow rules that keep one from feeling ill at ease.
I like the raising of awareness here. That this is a dynamic that seems worth noticing. I like being more aware of the impact I have on the world around me.
I don’t buy the disrespectful frame, though: as if people who “aren’t hyper-analytical programmers” are on par with children who need to be epistemically protected, and as though LWers are enlightened sages who just know better what these poor stupid people should or shouldn’t believe.
Like, maybe for some people beliefs aren’t for tracking true things, and that’s actually correct for their lives. That view isn’t the LW practice, sure. But I don’t think humanity’s CEV would have everyone being LW-style rationalists.
Scare quotes go around “true” here because it’s not obvious what that means when talking about ontological updates. You’d have to have some larger meta-ontology you’re using to evaluate the update. I’m not going to try to rigorously fix this here. I’m guessing the intuition I’m gesturing toward is clear enough.
Do you think they often do, and/or have salient non-controversial examples? My guess prior to thinking about it is that it’s rare (but maybe the feeling of woo differs between us).
Past true ontological updates that I guess didn’t look like woo:
reductionism
atomism
special relativity
many worlds interpretation (guy who first wrote it up was quite dispositionally conservative)
belief that you could gain knowledge thru experiment and apply that to the world (IDK if this should count)
germ theory of disease
evolution by natural selection as the origin of humans
Past true ontological updates that seem like they could have looked like woo, details welcome:
‘force fields’ like gravity
studying arguments and logic as things to analyse
the basics of the immune system
calculus
AFAIK gravity was indeed considered at least woo-ish back in the day, e.g.:
Newton’s theory of gravity (developed in his Principia), for example, seemed to his contemporaries to assume that bodies could act upon one another across empty space, without touching one another, or without any material connection between them. This so-called action-at-a-distance was held to be impossible in the mechanical philosophy. Similarly, in the Opticks he developed the idea that bodies interacted with one another by means of their attractive and repulsive forces—again an idea which was dismissed by mechanical philosophers as non-mechanical and even occult.
And they were probably right about “action-at-a-distance” being impossible (i.e. locality), but it took General Relativity to get a functioning theory of gravity that satisfied locality.
(Incidentally, one of the main reasons I believe the many worlds interpretation is that you need something like that for quantum mechanics to satisfy locality.)
All interpretations of QM make the same predictions, so if “satisfying locality” is an empirically meaningful requirement, they are all equivalent.
But locality is more than one thing, because everything is more than one thing. Many interpretations allow nonlocal X where X might be a correlation, but not an action or a signal.
Yeah, it’s not empirically meaningful over interpretations of QM (at least the ones which don’t make weird observable-in-principle predictions). Still meaningful as part of a simplicity prior, the same way that e.g. rejecting a simulation hypothesis is meaningful.
Zero was considered weird and occult for a while
One example, maybe: I think the early 20th century behaviorists mistakenly (to my mind) discarded the idea that e.g. mice are usefully modeled as having something like (beliefs, memories, desires, internal states), because they lumped this in with something like “woo.” (They applied this also to humans, at least sometimes.)
The article Cognition all the way down argues that a similar transition may be useful in biology, where e.g. embryogenesis may be more rapidly modeled if biologists become willing to discuss the “intent” of a given cellular signal or similar. I found it worth reading. (HT: Adam Scholl, for showing me the article.)
I think “you should one-box on Newcomb’s problem” is probably an example. By the time it was as formalized as TDT it was probably not all that woo-y looking, but prior to that I think a lot of people had an intuition along the lines of “yes it would be tempting to one-box but that’s woo thinking that has me thinking that.”
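(To make that intuition concrete, here is a minimal sketch of the expected-value arithmetic behind the one-boxing pull. It assumes the conventional payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box plus a single predictor-accuracy parameter; the function name and numbers are illustrative assumptions, not anything from the comment above, and the calculation conditions on the predictor’s track record, which is exactly the step a two-boxer disputes.)

```python
# Minimal sketch: expected payoff in Newcomb's problem, conditioning on the
# predictor's accuracy. The payoffs and the `accuracy` parameter are
# assumptions chosen for illustration (the conventional $1,000 / $1,000,000 setup).

def expected_payoff(one_box: bool, accuracy: float,
                    small: int = 1_000, big: int = 1_000_000) -> float:
    """Expected dollars, given the probability the predictor called your choice."""
    if one_box:
        # If the predictor foresaw one-boxing, the opaque box holds `big`.
        return accuracy * big + (1 - accuracy) * 0
    # If the predictor foresaw two-boxing, the opaque box is empty.
    return accuracy * small + (1 - accuracy) * (big + small)

if __name__ == "__main__":
    for acc in (0.99, 0.9, 0.55):
        print(f"accuracy={acc}: one-box={expected_payoff(True, acc):,.0f}, "
              f"two-box={expected_payoff(False, acc):,.0f}")
    # accuracy=0.99: one-box=990,000 vs two-box=11,000
```

(With these payoffs, one-boxing comes out ahead whenever the predictor is even slightly better than chance; the two-boxer’s reply is that conditioning this way is the illegitimate, “woo”-flavored move, which is roughly the tension the comment describes.)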
I like this inquiry. Upvoted.
Well… yes, but not for deep reasons. Just an impression. The cases where I’ve made shifts from “that’s woo” to “that’s true” are super salient, as are cases where I try to invite others to make the same update and am accused of fuzzy thinking in response. Or where I’ve been the “This is woo” accuser and later made the update and slapped my forehead.
Also, “woo” as a term is pretty strongly coded to a particular aesthetic. I don’t think you’d ever hear concern about “woo” in, say, Catholicism except to the extent the scientist/atheist/skeptic/etc. cluster is also present. But Catholics still slam into ontology updates that look obviously wrong beforehand and are obviously correct afterwards. Deconversion being an individual-scale example.
(Please don’t read me as saying “Deconversion is correct.” I could just as well have given the inverse example: Rationalists converting to Catholicism is also an ontological update that’s obviously wrong beforehand and obviously correct afterwards. But that update does look like “woo” beforehand, so it’s not an example of what I’m trying to name.)
I like the examples others have been bringing. I like them better than mine. But I’ll try to give a few anyway.
Speaking to one of your “maybe never woo” examples: if I remember right, the germ theory of disease was incredibly bizarre and largely laughed at when first proposed. “How could living creatures possibly be that small? And if they’re so small, how could they possibly create that much illness?” Prevailing theories for illness were things like bad air and demons. I totally expect lots of people thought the microbes theory was basically woo. So that’s maybe an example.
Another example is quantum mechanics. The whole issue Einstein took with it was how absurd it made reality. And it did in fact send people like Bohm into spiritual frenzy. This is actually an incomplete ontology update in that we have the mathematical models but people still don’t know what it means — and in physics at least they seem to deal with it by refusing to think about it. “If you do the math, you get the right results.” Things like the Copenhagen Interpretation or Many Worlds are mostly ways of talking about how to set up experiments. The LW-rationalist thing of taking Many Worlds deeply morally seriously is, as far as I can tell, pretty fringe and arguably woo.
You might recall that Bishop Berkeley had some very colorful things to say about Newton’s infinitesimals. “Are they the ghosts of departed quantities?” If he’d had the word “woo” I’m sure he would have used it. (Although this is an odd example because now mathematicians do a forgivable motte-and-bailey where they say infinitesimal thinking is shorthand for limits when asked. Meaning many of them are using an ontology that includes infinitesimals but quickly hide it when challenged. It’s okay because they can still do their formal proofs with limits, but I think most of them are unaware of the various ways to formalize infinitesimals as mathematical objects. So this is a case where many mathematicians are intentionally using an arguably woo fake framework and translating their conclusions afterwards instead of making the full available ontology update.)
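(A small worked illustration, my own example rather than Berkeley’s or the comment author’s, of the pictures in play here: the infinitesimal shorthand, the limit-based justification mathematicians typically give when challenged, and one way infinitesimals are in fact made rigorous.)

```latex
% Infinitesimal shorthand: treat dx as a nonzero, "infinitely small" increment,
% then discard the leftover dx at the end (Berkeley's departed quantity).
\[ \frac{d(x^2)}{dx} = \frac{(x+dx)^2 - x^2}{dx} = \frac{2x\,dx + (dx)^2}{dx} = 2x + dx \approx 2x \]

% Limit formulation: the justification usually offered when challenged.
\[ \frac{d(x^2)}{dx} = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h} = \lim_{h \to 0}\,(2x + h) = 2x \]

% Nonstandard analysis, one formalization in which infinitesimals are genuine
% mathematical objects: for a nonzero infinitesimal \varepsilon, take the
% standard part of the difference quotient.
\[ \operatorname{st}\!\left(\frac{(x+\varepsilon)^2 - x^2}{\varepsilon}\right) = \operatorname{st}(2x + \varepsilon) = 2x \]
```

(The middle line is the motte; the first line is how many people actually think; the last line is the fuller ontology update the comment says mostly doesn’t get made.)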
Given that I’m basically naming the Semmelweis reflex, I think Semmelweis’s example is a pretty good one. “What?! You’re accusing me, an educated professional gentleman, of carrying filth on my hands?! Preposterous! How dare you?!” Obviously absurd and wrong at the time, but later vindicated as obviously correct.
Your examples seem plausible, altho I’d still be interested in more details on each one. Further notes:
“And it did in fact send people like Bohm into spiritual frenzy.”—do you mean Bohr, or is this a story/take I don’t know about?
Re: Semmelweis reflex, I think there’s a pretty big distinction between the “woo” taste and the “absurd” taste. For example, “all plants are conscious and radiate love all the time” sounds like woo to me. “The only reason anybody gets higher education is to find people to have kids with” and “there’s a small organ in the centre of the brain that regulates the temperature of the blood that nobody has found yet” sound absurd to me, but not like woo.
Received such a bad reception that Everett left academic physics.
Didn’t seem crazy to the Greeks, but was controversial when reintroduced by Boltzmann.
A lot of things can be pretty controversial but not woo-ish.
Can you say more about these for the benefit of folks like me who don’t know about them? What kind of “bad reception” or “controversial” was it? Was it woo-flavored, or something else?
https://www.scientificamerican.com/article/hugh-everett-biography/
Everett tried to express his ideas as drily as possible, and it didn’t entirely work—he was still accused of “theology” by Bohr.
But there were and are technical issues as well, notably the basis problem. It can be argued that if you reify the whole formalism, then you have to reify the basis, and that squares the complexity of the multiverse—to every state in every basis. The argument actually was by JS Bell in
Modern approaches tend to assume the multiverse has a single “preferred” basis, which has its own problems. Which tells us that it hasn’t always been one exact theory.
I would amend the OP by saying that “salvage epistemology” is a bad idea for everyone, including “us” (for any value of “us”). I don’t much like labeling things as “infohazards” (folks around here are much too quick to do that, it seems to me), which obfuscates and imbues with an almost mystical air something that is fairly simple: epistemically, this is a bad idea, and reliably doesn’t work and makes our thinking worse.
As I’ve said before: avoiding toxic, sanity-destroying epistemologies and practices is not something you do when you’re insufficiently rational, it is how you stay sufficiently rational.
If you think that some kinds of ideas are probably harmful for some people to hear, is acting on that belief always saviorist fuckery or does there exist a healthy form of it?
It seems to me that, just as one can be mindful of one’s words and avoid being intentionally hurtful but also not take responsibility for other people’s feelings… one could also be mindful of the kinds of concepts one is spreading and acknowledge that there are likely prerequisites for being able to handle exposure to those concepts well, without taking responsibility for anyone’s epistemic state.
I am not Valentine, but I would say: it is “saviorist fuckery” if your view is “these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”. If instead you’re saying “this is harmful for everyone to hear (indeed I wish I hadn’t heard it!), so I will not disseminate this to anyone”, well, that’s different. (To be clear, I disapprove of both scenarios, but it does seem plausible that the motivations differ between these two cases.)
Is part of your claim that such ideas do not exist? By “such ideas” I mean ideas that only some people can hear or learn about for some definition of “safely”.
Hard to answer that question given how much work the clause ‘for some definition of “safely”’ is doing in that sentence.
EDIT: This sort of thing always comes down to examples and reference classes, doesn’t it? So let’s consider some hypothetical examples:
1. Instructions for building a megaton thermonuclear bomb in your basement out of parts you can get from a mail-order catalog.
2. Langford’s basilisk.
3. Langford’s basilisk, but anyone who can write a working FizzBuzz is immune.
4. The truth about how the Illuminati are secretly controlling society by impurifying our precious bodily fluids.
Learning idea #1 is perfectly safe for anyone. That is, it’s safe for the hearer; it will do you no harm to learn this, whoever you are. That does not, however, mean that it’s safe for the general public to have this idea widely disseminated! Some ne’er-do-well might actually build the damn thing, and then—bad times ahead!
If we try to stop the dissemination of idea #1, nobody can accuse us of “saviorist fuckery”, paternalism, etc.; to such charges we can reply “never mind your safety—that’s your own business; but I don’t quite trust you enough to be sure of my safety, if you learn of this (nor the safety of others)!” (Of course, if it turns out that we are in possession of idea #1 ourselves, the subject might ask how comes it that we are so trustworthy as to be permitted this knowledge, but nobody else is!)
Ok, what about #2? This one’s totally unsafe. Anyone who learns it dies. There’s no question of us keeping this idea for ourselves; we’re as ignorant as anyone else (or we’d be dead). If we likewise keep others from learning this, it can only be purely altruistic. (On the other hand, what if we’re wrong about the danger? Who appointed us guardians against this threat, anyway? What gives us the right to deny people the chance to be exposed to the basilisk, if they choose it, and have been apprised of the [alleged] danger?)
#3 is pretty close to #2, except that the people we’re allegedly protecting now have reason to be skeptical of our motives. (Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…)
As for #4, it is not even a case of “saviorist fuckery”, but ordinary self-serving treachery. If we’re Illuminati members ourselves, then clearly we’ve got an interest in preventing the truth about our evil schemes from coming to light. If we’re not members, then we’re just cowardly collaborators. (What could we claim—that we, alone, have the good sense to meekly accept the domination of the Illuminati, and that we keep the secret because we fear that less enlightened folks might object so strenuously that they’d riot, revolt, etc.—and that we do this out of selfless altruism? The noble lie, in other words, except that we’re not the architects of the lie, nor have we cause to think that it’s particularly noble—we simply hold stability so dear that we’ll accept slavery to buy it, and ensure that our oblivious fellows remain unwitting slaves?)
So which of these examples is the closest to the sort of thing the OP talks about, in your view?
I think the amount of work that clause does is part of what makes the question worth answering...or at least makes the question worth asking.
Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…
I’m not a fan of inserting this type of phrasing into an argument. I think it’d be better to either argue that the claim is true or not true. To me, this type of claim feels like an applause light. Of course, it’s also possibly literally accurate...maybe most claims of the type we’re talking about are erroneous and clung to because of the makes-us-feel-superior issue, but I don’t think that literally accurate aspect of the argument makes the argument more useful or less of an applause light.
In other words, I don’t have access to an argument that says both of these cannot exist:
1. Cases that just make Group A feel superior because Group A erroneously thinks they are the only ones who can know it safely.
2. Cases that make Group A feel superior because Group A accurately thinks they are the only ones who can know it safely.
In either case Group A comes across badly, but in case 2, Group A is right.
If we cannot gather any more information or make any more arguments, it seems likely that case #1 is going to usually be the reality we’re looking at. However, we can gather more information and make more arguments. Since that is so, I don’t think it’s useful to assume bad motives or errors on the part of Group A.
I don’t really know. The reason for my root question was to suss out whether you had more information and arguments or were just going by the heuristics that make you default to my case #1. Maybe you have a good argument that case #2 cannot exist. (I’ve never heard of a good argument for that.)
eta: I’m not completely satisfied with this comment at this time as I don’t think it completely gets across the point I’m trying to make. That being said, I assign < 50% chance that I’ll finish rewriting it in some manner so I’m going to leave it as is and hope I’m being overly negative in my assessment of it or at least that someone will be able to extract some meaning from it.
Do you have the same reaction to:
“This claim is suspicious.”
Less so, but it just leads to the question of “why do you think it’s suspicious?”. If at all possible I’d prefer just engaging with whether the root claim is true or false.
That’s fair. I initially looked at (the root claim) as a very different move, which could use critique on different grounds.
‘Yet another group of people thinks they are immune to common bias. At 11, we will return to see if they, shockingly, walked right into it. When are people (who clearly aren’t immune) going to stop doing this?’
Er… I think there’s been some confusion. I was presenting a hypothetical scenario, with hypothetical examples, and suggesting that some unspecified (but also hypothetical) people would likely react to a hypothetical claim in a certain way. All of this was for the purpose of illustrating and explaining the examples, nothing more. No mapping to any real examples was intended.
My point is that before we can even get to the stage where we’re talking about which of your cases apply, we need to figure out what sort of scenario (from among my four cases, or perhaps others I didn’t list?) we’re dealing with. (For instance, the question of whether Group A is right or wrong that they’re the only ones who can know a given idea safely, is pretty obviously ridiculous in my scenario #4, either quite confused or extremely suspect in my case #1, etc. At any rate, scenario #1 and scenario #2—just to take one obviously contrasting pair—are clearly so different that aggregating them and discussing them as though they’re one thing, is absurd!)
So it’s hard to know how to take your question, in that light. Are you asking whether I think that things like Langford’s basilisk exist (i.e., my scenario #2), or can exist? (Almost certainly not, and probably not but who knows what’s possible, respectively…) Are you asking whether I think that my scenario #3 exists, or can exist? Even less likely…
Do you think that such things exist?
I was referring to this part of your text:
(Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…)
It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals. I apologize if I interpreted that wrong.
Yeah, I think I understood that that’s what you’re saying; I’m saying I don’t think your point is accurate. I do not think you have to figure out which of your scenarios we’re dealing with. The scenario type is orthogonal to the question I’m asking.
I’m asking if you think it’s possible for these sorts of ideas to exist in the real world:
“these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”
I’m confused about how what you’ve said has a bearing on the answerability of my root question.
Do you think that such things exist?
I...don’t know.
My prior is that they can exist. It doesn’t break any laws of physics. I don’t think it breaks any laws of logic. I think there are things that some people are better able to understand than others. It’s not insane to think that some people are less prone to manipulation than others. Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.
As for whether they do exist: There are things that have happened on LW like Roko’s basilisk that raise my prior that there are things that some people can hold in their heads safely and others can’t. Of course, that could be down to quirks of individual minds instead of general features of some group. I’d be interested in someone exploring that idea further. When do we go from saying “that’s just a quirk” to “that’s a general feature”? I dunno.
It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals. I apologize if I interpreted that wrong.
That was indeed not my intention.
Yeah, I think I understood that that’s what you’re saying; I’m saying I don’t think your point is accurate. I do not think you have to figure out which of your scenarios we’re dealing with. The scenario type is orthogonal to the question I’m asking.
I don’t see how that can be. Surely, if you ask me whether some category of thing exists, it is not an orthogonal question, to break that category down into subcategories, and make the same inquiry of each subcategory individually? Indeed, it may be that the original question was intended to refer only to some of the listed subcategories—which we cannot get clear on, until we perform the decomposition!
I’m confused about how what you’ve said has a bearing on the answerability of my root question.
The bearing is simple. Do you think my enumeration of scenarios exhausts the category you describe? If so, then we can investigate, individually, the existence or nonexistence of each scenario. Do you think that there are other sorts of scenarios that I did not list, but that fall into your described category? If so, then I invite you to comment on what those might be.
Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.
True enough.
I agree that what you describe breaks no (known) laws of physics or logic. But as I understood it, we were discussing existence, not possibility per se. In that regard, I think that getting down to specifics (at least to the extent of examining the scenarios I listed, or others like them) is really the only fruitful way of resolving this question one way or the other.
I think I see a way towards mutual intelligibility on this, but unfortunately I don’t think I have the bandwidth to get to that point. I will just point out this:
But as I understood it, we were discussing existence, not possibility per se.
Hmm, I was more interested in the possibility.