I dislike the idea, though that may be because I find the whole notion of “epistemic status” as front-matter a little off-putting: it’s weird, and it often feels to me like copying a thing Scott does. That said, I’ve been known to start my writing with an apology that sets expectations in general, and that might include setting expectations about my confidence in the enclosed ideas.
Having said all that, I think a fair read of my opinion is “just be a better writer and address those things within the body of the text instead of adding metadata”. That may not, of course, be in line with the direction you want to take LW.
Scott took the idea from gwern, who, in turn, took the idea from muflax.
Muflax’s system is a set of belief tags, “strongly believed”, “partially believed”, and “not believed”, which indicate how strongly he believes in a post. In addition to the belief tags, he has other tags, like “fiction” or “log”, which indicate that a post doesn’t contain any real claims, but is commentary or opinion.
Gwern took muflax’s system and formalized it further by using a variant of Kesselman’s estimative words, a list of words from National Intelligence Estimates that analysts use to indicate how likely they believe a particular event to be. To the list of estimative words, he added “log”, which indicates that a particular piece of writing is intended to document an existing text or event, not to make predictions.
Scott, in turn, took gwern’s version and turned it into more freeform text, which, so far as I can tell, he really only uses as a disclaimer on posts that are wildly speculative. Other people in the rationality community then treated Scott’s free-form epistemic status as a license to engage in witticism and signalling.
Of the three implementations above, the implementation described in the OP most resembles muflax’s version: a set of coarse-grained categories that range from “I’m totally sure of this, and it would rock my world to be proven wrong” to “This is interesting, but I’m not at all sure that it’s actually true.” While I would prefer gwern’s version, with a rigorous set of estimative words standardized across posts, these coarse-grained categories are certainly better than the chaos we have today.
One of the motivators here was actually something in a recent Sarah Constantin post (I think the monopoly one), where the post was “written confidently” even though her actual level of confidence was much lower. Some people complained about this, and she noted that she found it harder to think when regulating her words through a “what will people find sufficiently modest?” lens.
And I think this is a fairly common thing in the rationalsphere: people doing the “Strong Opinions, Weakly Held” thing, which helps them build out models concrete enough to be wrong, but which makes them come across as if they think they’ve found the One True Way.
One thing you could do is ask everyone to get way better at writing; another is to separate the expression of confidence from the writing itself.
Man, mind-space is big: having to model how other people will perceive what I write is the thing that helps me think, and the exercise of trying to find epistemically appropriate words for my thoughts is what helps me figure out what I really mean, rather than what I just vaguely believe might be true.