On “Epistemic Trespassing” and Intellectual Modesty
Epistemic status: I’m moderately confident in the positions I endorse here, and this series is the product of several months’ research, but it only skims the surface of the literature on these questions.
Bookkeeping: This post is part of a short series reviewing and commenting on papers in epistemology and philosophy of science concerning research norms. You can find the other posts in this series here and here. The sources used for these posts were suggested to me by my professor as a somewhat representative sample of the work on these subjects. The summaries and views expressed are my own unless otherwise stated. I have read the papers in the bibliography; I have not read the papers in the “See Also” section, but they are relevant to the discussion, and I encourage anyone interested to give them a shot. Many of the papers mentioned in this series are publicly available on philpapers.org.
Introduction
Linus Pauling, the brilliant chemist and energetic proponent of peace, won two Nobel Prizes—one for his work in chemistry, and another for his activism against atomic weapons. Later, Pauling asserted that mega-doses of vitamin C could effectively treat diseases such as cancer and cure ailments like the common cold. Pauling was roundly dismissed as a crackpot by the medical establishment after researchers ran studies and concluded that high-dose vitamin C therapies did not have the touted health effects. Pauling accused the establishment of fraud and careless science. This trespasser did not want to be moved aside by the real experts. (Ballantyne, p. 367).
Experts drift over a highly-visible boundary line and into a domain where they lack either the relevant evidence or the skills to interpret the evidence well. But they keep talking nonetheless. Experts on a public stage are cast in the role of the ‘public intellectual’ or ‘celebrity academic’. … So what do you have to say about philosophy, Neil deGrasse Tyson? And what about arguments for the existence of God, Professor Dawkins? (Ballantyne, p. 369).
The above cases illustrate what Nathan Ballantyne means by “epistemic trespassing”: Linus Pauling didn’t understand medicine, and public scientists like Neil deGrasse Tyson and Richard Dawkins tend to misunderstand what philosophy is and/or how to use it (according to philosophers). But epistemic trespassing isn’t a practice limited to public figures.
Ballantyne’s theses are that “trespassing is a widespread problem that crops up especially in the practice of interdisciplinary research,” and that “reflecting on trespassing should lead us to have greater intellectual modesty, in the sense that we will have good reason to be far less confident we have the right answers to many important questions.”
How to spot epistemic trespassing
Let’s look at some useful terms for this discussion. Interdisciplinary research in this context is research across multiple fields, and a field, in Ballantyne’s sense, encompasses “an extremely narrow set of questions” and nothing more. “Expertise is a status of thinkers and it is relative to a field at a particular time,” so right now Richard Dawkins is an expert on biology, but that doesn’t mean he was an expert when he was a baby (and his expertise may waver if he stops keeping up with the field), and it doesn’t mean he’s an expert in general. What it does mean is that he has both “enough relevant evidence to answer reliably or responsibly [his] field’s questions,” and “enough relevant skills to evaluate or interpret the field’s evidence well.” (p. 371). Finally, epistemic trespassing is passing judgment on questions in a field where one lacks the expertise to answer them reliably or responsibly.
Thus, when Richard Dawkins “fails to engage with the genuine issues and sets up strawmen as his dialectical opponents” in religious arguments, he trespasses due to a lack of philosophical (and presumably theological) argumentative skills. When Neil deGrasse Tyson says that philosophy is “useless,” he apparently goes on to demonstrate a limited or inaccurate understanding of what philosophy is, so he trespasses due to a lack of evidence about philosophy, let alone about the questions that make up the field. More subtly, I might be considered an expert on data science, but I definitely should not be considered an expert on operating systems, and an interdisciplinary question that touched on both “fields” could provoke me to overstep the bounds of my expertise.
Ballantyne sees three situations in which a question would be interdisciplinary/”hybridized”:
the evidence required to answer a question reliably or responsibly comes from two or more fields;
the skills required to evaluate the evidence well come from two or more fields;
both the relevant evidence and the relevant skills required to answer a question reliably or responsibly come from two or more fields (p. 372)
So, for example, the question of how best to transition America’s electricity grids to renewables is hybridized, because answering it properly requires skills and evidence from infrastructure engineering, climatology, and other fields. Someone with expertise in just one of these fields would probably feel well equipped to answer this question one way or another, and they would certainly be more qualified to do so (in an epistemic sense) than a total layperson, but without expertise in all of the relevant fields, the expert would still be trespassing. This may seem to imply that only polymaths can answer such questions properly, but, as Ballantyne indicates, a hybridized question could be answered “reliably and responsibly [using] cross-field resources” more generally, such as through collaboration between experts from different fields (Ballantyne, p. 372).
When epistemic trespassing is okay
There are some cases in which it might be okay for you to trespass with your views on some proposition p, assuming you’re already an expert in one of the relevant fields:
(D1) I am trespassing on another field, but that field does not feature any relevant evidence or skills that bear on my view about p;
(D2) I am trespassing on another field, but my own field’s evidence conclusively establishes that p is true;
(D3) I am trespassing on another field, but my own field’s skills successfully ‘transfer’ to the other field. (p. 379)
(D1) covers cases in which the evidence from the other fields could only either strengthen or have no effect on your view. For example, a currency expert might think Bitcoin is unlikely to see widespread use because of its slow transaction speeds, and they wouldn’t need to consult experts on blockchain energy consumption before settling that view: they already know Bitcoin uses more energy than established fiat currencies, and that fact could only hurt Bitcoin’s adoption (or, at best, have a neutral effect).
(D1) also covers the realm of pseudoscience: “I believe the substantive claims of astrologers are false … But I’ll admit that astrologers have evidence and skills that I lack. My considered view, however, is that astrologers’ evidence and skills do not constitute a reliable method for establishing their claims, and so I am justified in dismissing their claims.” (Ballantyne, p. 380). We don’t have to wait to hear what pseudoscience has to say about a topic, because pseudoscience wouldn’t tell us anything useful anyway.
Of course, we’d be reckless to apply (D1) indiscriminately; we should have good reason to believe that we’ve accurately summarized the field(s) upon which we trespass. Ballantyne points out that it’s easy for researchers to be “unduly dismissive about research programs they do not contribute to” (p. 380), or to otherwise misunderstand them, so it seems like “reasonably accepting (D1) will typically require considerable effort” and consultation with experts from the appropriate fields (Ballantyne, p. 381).
(D2) seems like it would require less work on the part of the trespasser, but the situations it applies to are few and far between. If your view can be established as the uncontroversial truth without any input from the other relevant fields, then it seems like someone should have figured that out and settled the matter already. “To accept (D2) reasonably, you need an account for why the discussion grinds on—as it shouldn’t, on the assumption that (D2) is reasonable for the relevant disputants.” (Ballantyne, p. 381). In a way, Ballantyne’s dismissal of (D2) is a kind of anthropic principle of arguments: if the argument were so trivial that we could dismiss it this way, we would have dismissed it as trivial already (unless you think there’s some reason we wouldn’t have), and so we wouldn’t be having this argument.
Justifying trespassing using (D3) requires little in the way of extra work or exceptional circumstances, so it’s probably the most likely excuse to come up. As Ballantyne points out, Richard Dawkins “suggests that he does not see what expertise philosophers of religion could possibly have that scientists like him would lack; in his own eyes, his scientific competence apparently transfers to a new context where he can appropriately answer questions about arguments for and against God’s existence” (p. 381). If it’s true that his scientific skills transfer to philosophy as well as he thinks, then it seems as if Dawkins has a serviceable defense of his trespassing.
However, Ballantyne presents two weaknesses of (D3). The first is that even if your skills transfer to the new field, you still won’t have any of the field-specific evidence. Case in point: even if Richard Dawkins’s skills transfer well enough to the philosophy of religion that he’s on equal footing skill-wise with other philosophers, those philosophers will also have an expert-level familiarity with philosophical texts and arguments related to religion, whereas Dawkins may have only the default, amateur level of evidence in that area. Thus, (D3) is useless to a trespasser unless it’s not just reasonable to accept, but also joined by justification for how the trespasser is on equal evidential footing with the field experts (Ballantyne, pp. 381-2).
The second weakness of (D3), according to Ballantyne, is that it’s hard for the trespasser to acquire a reason to accept it. His justification for this point is that our best chance at determining when it’s okay to accept (D3) is by consulting empirical research on skill transference and metacognition. The evidence from the former is intended to show that skill transfer is difficult and that we often overestimate the success of our skill transference due to a lack of applicable track records in the territory where we trespass. The evidence from the latter is meant to show that while experts can employ (and would hope to transfer) “metacognitive heuristics such as ‘consider both sides of an issue’ or ‘generate alternative explanations for the evidence’,” these strategies are useless without enough relevant evidence to build accurate pictures in the first place (p. 386).
Thus, if Ballantyne is right, it’s difficult to justify accepting (D3). Even if we can accept it, we need further justification to ensure that we also have enough evidence to make proper use of our transferred skills. Alongside the issues with applying (D1) and (D2), it seems like there are few situations in which epistemic trespassing is okay. To explain why people would choose to trespass with confidence anyway, Ballantyne cites the Dunning-Kruger effect, which (for anyone who hasn’t heard of it) says that competent people tend to underestimate their own competence, while incompetent people tend to overestimate theirs.
In light of the difficulty of avoiding trespassing, and the separate challenge of trespassing safely, Ballantyne thinks our best bet for solving the big, multidisciplinary problems in the world is to collaborate in multidisciplinary environments.
I’ll come back to the psychology research in the next section, but the gist is that skill transference is hard.
My Objections
I think the second weakness of (D3) is the least justified part of Ballantyne’s argument. The empirical evidence cited mainly consists of decades-old studies from psychology, a field known for its replication crisis. I probably haven’t studied psychology as much as Ballantyne, but I don’t think these papers help his argument.
Also, Ballantyne follows up his research summary by saying that “[t]ransfer failures are unsurprising in view of disheartening findings from contemporary educational research. The development of critical thinking skills is a central goal of modern education, but researchers say critical thinking does not easily generalize across domains.” He then presents “Linus Pauling and company” as “poster children for the perils of trespassing,” which seems true, but concludes that “they are cautionary tales for how exemplary critical thinking in one field does not generalize to others.” (p. 385).
While it’s true that a failure to transfer critical thinking skills from their original fields to those that they trespassed in would be sufficient to explain their perils, I don’t think it’s necessary. As Ballantyne said in the same section, they could have transferred their skills perfectly and fallen short due to a lack of evidence. I do think it’s more likely that Dawkins and Tyson failed to transfer their skills in the first place, given their apparent misunderstandings about the fields in which they trespass, but Ballantyne’s paper fails to support this point.
To explain/justify the importance of a track record, he gives an example of a classically trained pianist claiming to be able to play bebop jazz piano with no background jazz piano experience. He claims that “we should think the pianist’s claim needs to be backed up by a satisfactory jazz performance” (p. 386). While it’s true that the pianist’s claim would need more evidence to be believed responsibly, the category of potentially worthy track records seems broader to me than Ballantyne indicates. For example, perhaps the pianist has an exceptional track record for picking up new styles by ear. In this case, the trespasser’s versatility is an adequate substitute for field-specific experience, and the pianist may defend themself with (D3).
Similarly, a strong understanding of statistics and experimental design, coupled with a bit of background knowledge, seems like an adequate level of expertise to correctly interpret some areas of medicine and most of the social sciences. Assuming that background knowledge consists of things like desirable effect sizes, which wouldn’t be too hard or time-consuming to look up in the course of trespassing anyway, it seems that both requirements for transferability are easier to meet than Ballantyne suggests, and (D3) is a realistic and fairly common scenario after all. Maybe research was less accessible in Linus Pauling’s day, because nowadays it seems like the skills I mentioned above, plus some metacognitive ones for detecting bias and conflicts of interest, are all you’d need to avoid making the same mistake.
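To make the “background knowledge like desirable effect sizes” point concrete, here is a minimal sketch (my own illustration with made-up numbers, not anything from Ballantyne’s paper) of the kind of sanity check a statistically literate trespasser might run when reading a study: compute a standardized effect size from the reported group summaries and compare it against Cohen’s conventional benchmarks, rather than stopping at “p < 0.05.”

```python
# Hypothetical illustration: a standardized effect size (Cohen's d) computed
# from the summary statistics a paper reports, then compared against Cohen's
# conventional rules of thumb (~0.2 small, ~0.5 medium, ~0.8 large).
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Made-up numbers for a treatment vs. control comparison.
d = cohens_d(mean1=10.4, sd1=3.1, n1=50, mean2=9.8, sd2=3.0, n2=50)
print(f"Cohen's d = {d:.2f}")  # ~0.20: a "significant" result can still be a small effect
```

Of course, knowing which benchmarks are appropriate for a given literature is itself field-specific background knowledge, which is exactly the kind of thing a careful trespasser would need to look up.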
That said, I am not a psychologist, statistician, doctor, or other social scientist. As Ballantyne notes, “researchers can be unduly dismissive about research programs they do not contribute to,” (p. 380) and I could be overestimating the ability of my skills to transfer to these areas for the very reasons I’m dismissing. This may be cause to take my rebuttal with a grain of salt.
In any case, epistemic trespassing seems to me like a trap that’s worth avoiding, such as by qualifying statements like “if [uncertain empirical claim] obtains, then [philosophical argument] follows,” rather than attempting to verify the claims on one’s own (as mentioned by Ballantyne in a footnote (p. 375)). In particular, this seems important when it comes to trespassing between fields that draw on quite different skill sets; if a historian, a moral philosopher, and a nuclear physicist walk into a bar, they probably wouldn’t be able to leave with the information and skills necessary to form responsible beliefs about questions in each other’s fields. However, I think this paper’s account of transference is dated, and thus trespassing may be reasonable more often than Ballantyne claims.
Bibliography
Ballantyne, Nathan. “Epistemic Trespassing.” Mind, vol. 128, no. 510, Apr. 2019, pp. 367–95. DOI.org (Crossref), doi:10.1093/mind/fzx042. Available at https://www.academia.edu/34743123/Epistemic_Trespassing
Levy, Neil. “Radically Socialized Knowledge and Conspiracy Theories.” Episteme, vol. 4, no. 2, June 2007, pp. 181–92. DOI.org (Crossref), doi:10.3366/epi.2007.4.2.181.
See Also
Anderson, Elizabeth. “Democracy, Public Policy, and Lay Assessments of Scientific Testimony.” Episteme, vol. 8, no. 2, June 2011, pp. 144–64. DOI.org (Crossref), doi:10.3366/epi.2011.0013.
Hazlett, Allan. “The Social Value of Non-Deferential Belief.” Australasian Journal of Philosophy, vol. 94, no. 1, Jan. 2016, pp. 131–51. DOI.org (Crossref), doi:10.1080/00048402.2015.1049625.
This doesn’t seem convincing from outside, unless you already believe that pseudoscience has nothing useful to say. Imagine using the same approach on religion, postmodernism, or critical theory. If you believe they are full of hot air, you can use your expertise in e.g. psychology to explain it all as cognitive biases or status-seeking behavior. But if you take them seriously, you will be like “smart people who studied this for their entire lives wrote literally thousands of books on this topic, and this guy is not even familiar with 1% of it”.
If I’m understanding you correctly, it seems like your worry with applying (D1) to pseudoscience is that it feeds into confirmation bias by making you feel like you’re right to dismiss something you already don’t think is useful (in a way that you wouldn’t dismiss it if you did think it was useful). As I summarize in the next paragraph, Ballantyne agrees with you that it’s easy to apply (D1) too often, but maybe even this case that’s supposed to be an example of using (D1) correctly is problematic.
Being charitable to Ballantyne, we can imagine that his “considered view” that “astrologers’ evidence and skills do not constitute a reliable method for establishing their claims” is supported by testimony from trusted, reliable, and relevant experts (physicists, astronomers, etc.) who have debunked astrology without controversy. Thus, there’s no reason for him to check a horoscope when trying to predict whether a date he’s planning will go well (for example), because he has good reason to believe that there’s nothing valuable to learn from it.
Religion, postmodernism, and critical theory all seem more controversial to me than things like astrology. Without the broad rejection by the educated public that astrology has, it seems like religion and the rest would (and should) appear more difficult to trespass upon safely. In other words, pseudoscience is an edge case not just because we already believe it’s useless, but because almost everyone thinks it’s useless, and there are plenty of trustworthy and accessible resources explaining why. This is unusual though, so when it comes to religion, postmodernism, and critical theory, “reasonably accepting (D1) will typically require considerable effort,” as it should.
That said, I’m not super familiar with debates about postmodernism and even less so with critical theory, so I may have mischaracterized the debates on those fields’ usefulness.
Many of the key issues discussed here have that property. Various branches of science are highly relevant to issues like consciousness and free will, but not sufficient to solve them, because typically philosophical reasoning is needed to ensure that the right concept is being explained, or to identify which concept is being explained.
When scientists claim to have solved some longstanding philosophical problem, they often haven’t understood it in the first place, or they have solved their personal “head canon” version of it.
That’s a problem that’s much less recognised than its mirror image, the philosopher who can decide how the universe works “from the armchair”.
I find the relationship between these two quotes interesting: they both concern Epistemic Trespassing as it relates to answering questions outside one’s field of expertise, but not necessarily to posing questions about evidence from fields you’re not an expert in.
You’re right that Ballantyne mainly focuses on epistemic trespassing as something related to question answering rather than question posing. I think this is related to his definition of a field as “an extremely narrow set of questions”; obviously trying to answer a set of questions without any of the relevant evidence and skills (that someone who works in the field has) would be trespassing. On the other hand, asking questions you’re not qualified to answer seems a lot more benign; there’s no expectation of reliability and little expectation of responsibility.
I suppose it’s still possible to cause harm in a way that resembles epistemic trespassing by asking questions. For example, a 9/11 truther with any Twitter following could sow confusion by asking “What’s the temperature at which steel beams melt?” when a plane crash investigator would dismiss that and instead ask “What’s the temperature at which steel beams lose most of their structural integrity?” What makes these questions important is not just their relation to the relevant field(s) of expertise, but their relation to the facts: the answer to the former question is a temperature higher than that at which jet fuel burns, and the answer to the latter is, of course, lower than the temperature of burning jet fuel. By asking the former and not the latter, the 9/11 truther uses their ignorance to portray an event with clear causes and explanations as fraught with mystery and open questions.
It seems like a strange conclusion, however, to say that many people are unqualified to ask many questions (that is, questions that relate to fields they haven’t studied). Maybe it would be more accurate to say that the reason the 9/11 truther is trespassing (and not merely curious) is that they’re asking questions in front of an audience (their Twitter followers) that sees them as an expert on that sort of question. Thus, the truther is irresponsibly speaking on behalf of the crash investigators, just as Linus Pauling spoke irresponsibly on behalf of the medical establishment.
Epistemic Status: My liberal arts education didn’t specialize in philosophy. I’ve read a lot in a lot of different fields over the years, but I don’t have a list of ready sources to pull from aside from what I remember from my reading and YouTube videos. Plus I grew up in a circle of highly respected academics and researchers, so I think I know things.
I’m pretty sure I understand what you mean, and I think I agree with your conclusions. I still have some small questions.
My question was originally about trespassing between established academic domains and the business, governmental, and non-governmental organizations that use their research. But I think the route you’ve gone is also incredibly important to consider.
Well said too. I think I understand your point well, and it seems very reasonable, and you’ve given me some room to think about and respond to what you’ve written, which I appreciate.
My original question was about whether Epistemic Trespassing also includes instances of asking questions of experts outside the questioner’s field of expertise (FOE?), in addition to providing answers outside the questioner’s FOE.
I might be jumping the gun, but to me this sort of seems ambiguous, in that the 9/11 truther who focuses on 9/11 might be said to also have an extremely narrow set of questions if they focus strictly on the metallurgical properties concerned (maybe?).
But additionally, if it’s the case that a field also requires answers, then it seems to follow by necessity that fields also require experts to provide the answers. And if an authoritative field is populated by experts whose answers to the field’s questions provide some proof of truth (POT), then would what we consider pseudoscience be a field whose experts provide answers which don’t reliably provide some POT?
So maybe it is also the sorts of questions being asked, and not just the answers provided, which could be taken to be pseudoscience? But not all Epistemic Trespassing comes from pseudoscience. When it comes from other established and credible domains, is the classification of trespassing only applied if the questions asked and the answers provided by the trespassing expert don’t provide POT? Or can merely asking questions which might be answered differently by the trespassing expert than by the expert in the FOE being questioned be considered trespassing, even if they provide POT?
My second question has to do with this statement in the first paragraph:
As it relates to trespassing in general (outside of academia, for instance), I think there’s wiggle room in requiring only “any of the relevant evidence and skills”.
Obviously different areas of academic research often require specialized skills of the type I think you’re referring to (that someone who works in the field has), but there is also a lot of crossover between fields in terms of skills: critical thinking, footnoting, researching primary sources, etc. etc.
With a lot of the leaking going on around the world through sites like WikiLeaks and other sources, a lot of evidence that would normally be kept from the public is being published for the public. QAnon supporters, for instance, take this idea of top-secret leaks and claim they have evidence which provides them with information that is ‘potentially more relevant’ than the evidence the real experts have or acknowledge the existence of.
So in that case it’s what many rational people can reasonably assume to be a false appeal to the relevant-evidence criterion.
...what I think of as misinformation if unintentional, and disinformation if intentional.
I agree again. But if that were the conclusion, and it was shared by many academics, I’d say that’s a rationale for the academic Ivory Tower so many people are concerned about.
Back before university-level education was widely available, I think an Ivory Tower would serve to archive knowledge that might otherwise be lost, whereas today it might be seen as an impediment (along with many other things) to the dissemination and integration of useful research into the broader society. At the very least, a gatekeeping system of sorts. Still, there are a lot more people clamoring at the gate these days; how to handle that is a different topic though.
This is where I think my original concern possibly comes from as it relates to asking questions of experts. As was pointed out earlier, Interdisciplinary Research is difficult for a number of reasons, Epistemic Trespassing being one of them. But what I’m picking up on now is that there is an approach vector of reasonably asking questions of experts, namely being curious. (I’m trying to be methodical in my thinking, not sarcastic btw.)
And what separates Epistemic Trespassing from Curiosity is potentially the intentions of the questioner. Trying to assume the persona of Expert on a subject by ‘irresponsibly’ questioning those in the academic or professional fields concerned, especially in front of an audience, is qualitatively different from having strong yet reasonable reservations about some of the answers supplied by the Expert being questioned, or by the field in general. The latter comes from individuals looking for answers besides the ones they’ve become uncomfortable with being asked to accept (in the hope of finding better solutions than what is currently available), whatever the forum for the discussion.
I think of SpaceX in this ‘Curious’ category, as the work they’ve done with Starship in just a dozen or so flight tests has produced incredibly different approaches to Rocket Design, Launch, and Retrieval, which seem like they could be the future of space travel, at least for the West. That’s not to say that SpaceX is reinventing the entire space program, only that they seem to be making progress on getting larger things to and from space more cost-effectively (and hopefully more safely), while the endeavor of making and using the things that SpaceX puts up there, like satellites, potentially orbiters, rovers, and the instrumentation associated with scientific research, remains mostly unchanged (for now).
So if I’m understanding correctly, since the continued success of the Starship prototyping seems to provide POT, it wouldn’t be considered a case of Epistemic Trespassing in the field of rocketry. More to the point, the questions that SpaceX asked eventually led to answering the question of “How do we get heavy things into and out of orbit safely and cost effectively?”, such that if they hadn’t asked the question, the progress made would not have happened.
Likely many of the questions they asked were the same, but some of them were qualitatively different than the ones NASA asked—or potentially the questions were the same, but the answers were different.
Maybe more to my point, I wonder how, as someone outside the fields of expertise I have an interest in positively influencing, I can have more intentional and productive influence through discussion with experts who might need some convincing, without being guilty of Epistemic Trespassing.
Also, I think it needs to be said that I quite like what Ballantyne says here:
My point being that because the Western Tradition has continually broken down and fragmented areas of research and study, the accumulated knowledge of humanity is like the silk threads in a spider’s web: very strong and attached to some stable objects, as well as laid out with some impressive mathematical precision (for a spider), but also like a spider’s web in that the apparent utility of the web is to allow most of the reality of the world to pass through it while only catching a very specific kind of thing.
While useful for the purposes of catching insects, filling the gaps in between those incredibly strong yet finite threads of knowledge to find some sort of Unified Theory of Everything is apparently an inhuman task. So it seems like the job will fall not to individual researchers in diverse fields working together to negate unhelpful aspects of Epistemic Trespassing, but rather to AGI that will essentially use developing IT technologies to weave those threads together outside the purview of the vast majority of spiders.
It makes sense to me to instead use AGI to help identify and negotiate areas of negligent Epistemic Trespassing in order to help human researchers do human-scale research, within our abilities to perceive it.
How are the fields of AI and ML defined so that they can be held to some sort of standard of Epistemic Consideration of expertise outside their domains? Or are the concerns I bring up even relevant?