It helps to know the contexts in which the term is often used, how it is motivated, and where it probably came from. In other words, it helps to understand the term in relation to other terms.
I haven’t heard the term “epistemic state” used often in the community, but depending on the context, I imagine the person who used it meant their overall “epistemic status” for a collection of claims (or their confidence in any given claim).
An epistemic status describes how confident a person is in a particular argument or idea. If the claim is important enough, it is denoted with a percentage value that represents the probability that the claim is true (from the person’s point of view). This is good practice for reasoning transparency, because a word like “plausible” can be interpreted to mean very different degrees of confidence.
As an example of how a lack of transparency can lead to failure: when JFK questioned the Joint Chiefs of Staff about the chances that his plan for the invasion of Cuba would succeed, the response was a “fair chance” of success, so JFK invaded. Only later was “fair chance” explained to mean a 1⁄3 chance of success.
As for the word epistemic more generally, it is indeed related to knowledge, but within the Effective Altruism and LessWrong communities, we care about the quality of knowledge, not just the quantity (since an effectiveness mindset leads to more positive change). This is why epistemic hygiene (I hear “epistemics” used synonymously within the community, and it’s generally the word I use more) is emphasised so much (and why the Sequences are so popular). It is very important for accurate beliefs to spread, so figuring out how to develop a set of community values that encourages accurate beliefs is largely what LessWrong is about.
I would say that the community is focused on how to have good epistemics in practice rather than on exploring foundations, although it does some of that too. The term epistemics likely has its roots in the field of epistemology (this link leaves out some things, but it’s an okay starting point).
Epistemic status is not a measure of confidence but of reasons for confidence. “It’s written in a textbook” is an epistemic status for a claim, but it does not say how confident I am in the claim.
When people start their posts with “Epistemic status: …” they usually are not listing probabilities.
I tend to disagree, although I could be wrong. My epistemic status for the claim is at 65% ;). I have not spent as much time on LessWrong, so it wouldn’t surprise me.
Before posting that comment I double-checked with this article. Here is a quote the author took from Urban Dictionary (the author said the definition is accurate, and from my experience I agreed):
The epistemic status is a short disclaimer at the top of a post that explains how confident the author is in the contents of the post, how much reputation the author is willing to stake on it, what sorts of tests the thesis has passed.
It should give a reader a sense of how seriously they should take the post.
I do agree with you that people do not typically list probabilities in their epistemic status, although I have seen several people list probabilities in their epistemic status section. It is not unheard of, and I still think it is good practice for especially important claims.
You are not wrong that an epistemic status often includes reasons for claims, but it also usually includes the level of confidence in a claim.
Of course, it’s just a definition, so it’s not the end of the world if we disagree over it. It’s not a crux 🤷‍♂️.
The linked article has a bunch of examples in the section “A bunch of examples”. None of those have any probabilities attached to them. The post does not list “your confidence” in the section “Valuable information to list in an epistemic status”.
Some of them do have words that are about confidence, but even a status like “Pretty confident. But also, enthusiasm on the verge of partisanship” is more than just the author sharing their confidence. That was the first in the list.
The second is: “I have worked for 1 year in a junior role at a large consulting company. Most security experts have much more knowledge regarding the culture and what matters in information security. My experiences are based on a sample size of five projects, each with different clients. It is therefore quite plausible that consulting in information security is very different from what I experienced. Feedback from a handful of other consultants supports my views.”
This statement makes it transparent to the reader why the author believes what they believe. It’s up to the reader to decide how confident they are in what the author is saying based on that statement. That’s similar to how the information about partisanship communicated in the first case is about transparency, so that the reader can make a better decision instead of just trusting the author.
The linked article has a bunch of examples in the section “A bunch of examples”. None of those have any probabilities attached to them. The post does not list “your confidence” in the section “Valuable information to list in an epistemic status”.
Let me expand the quote I gave before to add context:
“The epistemic status is a short disclaimer at the top of a post that explains how confident the author is in the contents of the post, how much reputation the author is willing to stake on it, what sorts of tests the thesis has passed.
It should give a reader a sense of how seriously they should take the post.”
The author then adds: “I think that’s a surprisingly good explanation. Commenters might be able to add to it, in which case I’ll add an elaboration.”
It’s a “surprisingly good explanation”, but it does not include everything. The author did not mention attaching confidence in the epistemic status again because it was already covered; they were merely adding points that were not covered by the definition (their “elaboration”, as they put it).
You are right that none of the examples attach probabilities to the claims, but that does not mean attaching probabilities is incompatible with a good epistemic status. As far as I know, that is not a controversial position, but if I am wrong about that, I will keep it in mind next time I explain this to others. My intuition was that explicitly stating the probability you attach to your beliefs is a good methodological tenet of reasoning transparency, and is also a practice of superforecasters. Mentioning the probability also helps maintain a culture of embracing uncertainty.
Some of them do have words that are about confidence, but even a status like “Pretty confident. But also, enthusiasm on the verge of partisanship” is more than just the author sharing their confidence. That was the first in the list.
The second is: “I have worked for 1 year in a junior role at a large consulting company. Most security experts have much more knowledge regarding the culture and what matters in information security. My experiences are based on a sample size of five projects, each with different clients. It is therefore quite plausible that consulting in information security is very different from what I experienced. Feedback from a handful of other consultants supports my views.”
This statement makes it transparent to the reader why the author believes what they believe. It’s up to the reader to decide how confident they are in what the author is saying based on that statement. That’s similar to how the information about partisanship communicated in the first case is about transparency, so that the reader can make a better decision instead of just trusting the author.
You’re right. An epistemic status should, and usually does, include information that helps the reader judge for themselves whether the person’s reasons are credible. But 10⁄12 of the examples from that post mention the author’s level of confidence or uncertainty in the claims they make, including the two examples you just gave. Language like “quite plausible that consulting in information security is very different from what I experienced” is an explicit mention of the level of confidence.
I agree that we should include everything you mention, but I disagree that we should leave out confidence, or that it is not important. I want the person I am reading to be calibrated with respect to the relevant probabilities. In other words, if they believe something is 25% likely, it should happen 25% of the time. If this is not the case, then something is wrong with their models of the world.
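Calibration in this sense can be checked mechanically against a track record. Here is a toy Python sketch (the function name and the history data are made up for illustration) that groups past predictions by stated confidence and compares each bucket’s stated probability with the observed frequency of claims that came true:

```python
# Toy calibration check: bucket predictions by stated confidence and
# report the fraction of claims in each bucket that turned out true.
# All data below is hypothetical, purely for illustration.
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_probability, outcome_bool) pairs."""
    buckets = defaultdict(list)
    for prob, outcome in predictions:
        buckets[round(prob, 1)].append(outcome)  # group to the nearest 0.1
    # For each bucket, compute the observed frequency of true outcomes.
    return {prob: sum(outcomes) / len(outcomes)
            for prob, outcomes in sorted(buckets.items())}

# Hypothetical track record: claims stated at ~25% should come true
# roughly 25% of the time if the forecaster is well calibrated.
history = [(0.25, False), (0.25, False), (0.25, True), (0.25, False),
           (0.8, True), (0.8, True), (0.8, True), (0.8, False)]
print(calibration_report(history))  # {0.2: 0.25, 0.8: 0.75}
```

A well-calibrated forecaster’s observed frequencies track the bucket probabilities; a large gap (say, claims stated at 80% coming true only half the time) is the kind of signal that something is wrong with their models of the world.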
The act of sharing that information with the reader is an act of respecting the reader’s agency. In debates like https://www.lesswrong.com/posts/u9a8RFtsxXwKaxWAa/why-i-quit-effective-altruism-and-why-timothy-telleen-lawton, respecting the epistemic agency of other people is a big deal.
Again, nothing wrong with sharing information. I agree with almost everything you write here.