Definition from LessWrong’s A-Z glossary:
Concerning knowledge.
Huh? So then, what is an “epistemic state”? Is it a collection of ideas? Is it a combination of knowledge a brain can have? And what is an “epistemic status”? Is it the current epistemic state someone is in? Is it their amount of knowledge about something? Is it their opinion of whether or not something is true?
Here’s more answer than you probably wanted.
First up, the word “epistemic” solves a limitation of the word “knowledge”: “knowledge” doesn’t easily turn into an adjective. Yes, like all nouns in English it can be used attributively in noun phrases, but “knowledge state” and “knowledge status” don’t sound as good.
But more importantly there’s a strong etymological reason to prefer the word “epistemic” in these cases. “Epistemic” comes from “episteme”, one of Greek’s words for knowledge[1]. Episteme is knowledge that is justified by observation and reason, and importantly is known because the knower was personally convinced of the justification, as opposed to gnosis, where the only justification is experience, or doxa, which is second-hand knowledge[2].
Thus “epistemic” carries with it the connotation of being related to justified beliefs. An “epistemic state” or “epistemic status” implies a state or status related to how justified one’s beliefs are.
[1] “Knowledge” is cognate with another Greek word for knowledge, “gnosis”, but the two words evolved along different paths from PIE *gno-, meaning “know”.
[2] We call doxa “hearsay” in English, but because of that word’s use in legal contexts, it carries some pejorative baggage related to how hearsay is treated in trials. To get around this we often avoid the word “hearsay” and instead focus on our level of trust in the person we learned something from, but we don’t make a clear distinction between hearsay and personally justified knowledge.
Welcome!
The short and informal version is that epistemics covers all the stuff surrounding the direct claims. Things like credence levels, confidence intervals, and probability estimates are the clearest indicators. It also includes questions like where the information came from, how it is combined with other information, what other information we would like to have but don’t, etc.
The most popular way you’ll see this expressed on LessWrong is through Bayesian probability estimates and a description of the model (which is to say the writer’s beliefs about what causes what).
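To make that concrete, here is a minimal sketch of a Bayesian probability estimate in Python. The claim, prior, and likelihoods are all invented for illustration; the point is just the mechanics of updating a credence on a piece of evidence:

```python
# Minimal Bayes update: revise credence in a claim after seeing evidence.
# All numbers here are invented for illustration.

prior = 0.30                # P(claim) before seeing the evidence
p_evidence_if_true = 0.80   # P(evidence | claim is true)
p_evidence_if_false = 0.20  # P(evidence | claim is false)

# Bayes' rule: P(claim | evidence) = P(evidence | claim) * P(claim) / P(evidence)
p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
posterior = p_evidence_if_true * prior / p_evidence

print(f"posterior credence: {posterior:.2f}")  # -> 0.63
```

The “description of the model” is the qualitative counterpart of the likelihood terms above: the writer’s beliefs about what causes what determine how much a given observation should shift the estimate.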
The epistemic status statement you see at the top of a lot of posts is for setting the expectation. This lets the OP write complete thoughts without the expectation that they demonstrate full epistemic rigor, or even that they endorse the thought per se.
It helps to know the contexts in which the term is often used, what motivates it, and where it probably came from. In other words, it is important to understand the term in relation to other terms.
I haven’t heard the term “epistemic state” used often in the community, but depending on the context, I imagine the person using it meant their overall “epistemic status” for a collection of claims (or their confidence in any given claim).
An epistemic status describes how confident a person is in a particular argument or idea. If the claim is important enough, it is denoted with a percentage value that represents the probability that the claim is true (from the person’s point of view). This is a good practice of reasoning transparency, because words like “plausible” can be interpreted to mean very different degrees of confidence.
As an example of how lacking transparency can lead to failure: when JFK questioned the Joint Chiefs of Staff about the chances of his plan for the invasion of Cuba succeeding, the response was a “fair chance” of success, so JFK invaded. It was only later that “fair chance” was explained to mean a 1⁄3 chance of success.
As for the word epistemic more generally, it is indeed related to knowledge, but within the Effective Altruism and LessWrong communities we care about the quality of the knowledge, not just the quantity (since an effectiveness mindset leads to more positive change). This is why epistemic hygiene (I hear “epistemics” used synonymously within the community, and it’s generally the word I use more) is emphasised so much (and why the Sequences are so popular). It is very important for accurate beliefs to spread, so figuring out how to develop a set of community values that encourages accurate beliefs is much of what LessWrong is about.
I would say that the community is focused on how to have good epistemics in practice, rather than exploring foundations, although it does some of that too. The term epistemics most likely has its roots in the field of epistemology (this link leaves out some stuff, but it’s an okay starting point).
Epistemic status is not a measure of confidence but of reasons for confidence. “It’s written in a textbook” is an epistemic status for a claim, but it does not say how confident I am in the claim.
When people start their posts with “Epistemic status: …” they usually are not listing probabilities.
I tend to disagree, although I could be wrong. My epistemic status for the claim is at 65% ;). I have not spent as much time on LessWrong, so it wouldn’t surprise me if I’m wrong.
Before posting that comment I double-checked with this article. Here is a quote which the author got from Urban Dictionary (the author said the definition is accurate, and from my experience I agreed):
I do agree with you that people do not typically list probabilities when listing their epistemic status, although I have seen at least several people list probabilities in their epistemic status section. It is not unheard of, and I still think it is good practice for especially important claims.
You are not wrong that the epistemic status often includes reasons for claims, but an epistemic status usually includes the level of confidence in a claim.
Of course, it’s just a definition, so it’s not the end of the world if we disagree over it. It’s not a crux 🤷‍♂️.
The linked article has a bunch of examples in the section “A bunch of examples”. None of those have any probabilities attached to them. The post does not list “your confidence” in the section “Valuable information to list in an epistemic status”.
Some of them do have words that are about confidence, but even a status like “Pretty confident. But also, enthusiasm on the verge of partisanship” is more than just the author sharing their confidence. That was the first in the list.
The second is: “I have worked for 1 year in a junior role at a large consulting company. Most security experts have much more knowledge regarding the culture and what matters in information security. My experiences are based on a sample size of five projects, each with different clients. It is therefore quite plausible that consulting in information security is very different from what I experienced. Feedback from a handful of other consultants supports my views.”
This statement makes it transparent to the reader why the author believes what they believe. It’s up to the reader to decide how confident they are in what the author is saying, based on that statement. Similarly, the information about partisanship communicated in the first example is about transparency, so that the reader can make a better decision instead of just trusting the author.
The act of sharing the information with the reader is an act of respecting the agency of the reader. In debates like https://www.lesswrong.com/posts/u9a8RFtsxXwKaxWAa/why-i-quit-effective-altruism-and-why-timothy-telleen-lawton respecting the epistemic agency of other people is a big deal.
Let me expand the quote I gave earlier to add context:
It’s a “surprisingly good explanation”, but it does not include everything. The author did not mention attaching confidence in the epistemic status again because it was already covered; they were merely adding points that were not covered by the definition (their “elaboration”, as they put it).
You are right that none of the examples attach probabilities to the claims, but that does not mean attaching probabilities is incompatible with a good epistemic status. As far as I know that is not a controversial position, but if I am wrong about that then I will keep it in mind next time I explain the term to others. My intuition was that explicitly stating the probability attached to your beliefs is a good methodological tenet of reasoning transparency, and is also a practice of superforecasters. Mentioning the probability also helps maintain a culture of embracing uncertainty.
You’re right. An epistemic status should and usually does include information relevant to helping the reader judge for themselves whether the person’s reasons are credible, but 10 of the 12 examples from that post mention the level of confidence and uncertainty in the claims they make, including the two examples you just gave. Language like “quite plausible that consulting in information security is very different from what I experienced” is an explicit mention of the level of confidence.
I agree that we should include everything you mention, but disagree that we should leave out mentioning confidence, or that it is not important. I want the person I am reading to be calibrated with the relevant probabilities. In other words, if they believe something is 25% likely, it should happen 25% of the time. If this is not the case then something is wrong with their models of the world.
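As a sketch of what checking that kind of calibration could look like (the predictions and outcomes below are made up), one common approach is to bin claims by the stated probability and compare against how often they turned out true:

```python
# Toy calibration check: group predictions by stated probability and
# compare with how often those claims actually turned out true.
# The data are invented for illustration.
from collections import defaultdict

# (stated probability, did the claim turn out true?)
predictions = [
    (0.25, False), (0.25, False), (0.25, True), (0.25, False),
    (0.70, True), (0.70, True), (0.70, False),
    (0.90, True), (0.90, True),
]

buckets = defaultdict(list)
for stated, outcome in predictions:
    buckets[stated].append(outcome)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> observed {observed:.0%} ({len(outcomes)} claims)")

# A well-calibrated forecaster's observed frequencies track the stated ones
# (up to sampling noise, which is large with counts this small).
```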
Again, nothing wrong with sharing information. I agree with almost everything you write here.
An epistemic status is a statement of how confident the writer or speaker is in what they are saying, and why. See, e.g., this post about the use of epistemic status on LessWrong. Google’s definition of epistemic is “relating to knowledge or to the degree of its validation”.