Before I get more involved here, could someone explain to me what the following are:
1) x-rationality (extreme rationality)
2) a rationalist
3) a Bayesian rationalist
(I know what rationalism and Bayes theorem are but I’m not sure what the terms above refer to in the context of LW)
In the context of LW, all those terms are pretty closely related unless some more specific context makes it clear that they’re not. X-rationality is a term coined to distinguish the LW methodology (which is too complicated to describe in a paragraph, but the tagline on the front page does a decent job) from rationality in the colloquial sense, which is a much fuzzier set of concepts; when someone talks about “rationality” here, though, they usually mean the former and not the latter. This is the post where the term originates, I believe.
A “rationalist” as commonly used in LW is one who pursues (and ideally attempts to improve on) some approximation of LW methodology. “Aspiring rationalist” seems to be the preferred term among some segments of the userbase, but it hasn’t achieved fixation yet. Personally, I try to avoid both.
A “Bayesian rationalist” is simply an LW-style rationalist as defined above, but the qualification usually indicates that some contrast is intended. A contrast with rationalism in the philosophical sense is probably the most likely; that’s quite different from, and in some ways mutually exclusive with, LW epistemology, which is generally closer to philosophical empiricism.
AFAIK there’s actually a user by that name, so I’d avoid the term just to minimize confusion.
As far as I understand, a “Bayesian Rationalist” is someone who bases their beliefs (and thus decisions) on Bayesian probability, as opposed to ye olde frequentist probability. An X-rationalist is someone who embraces both epistemic and instrumental rationality (the Bayesian kind) in order to optimize every aspect of their life.
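For concreteness, here’s a minimal sketch of the Bayesian update that underlies this view. The scenario and all the numbers are made up for illustration; only Bayes’ theorem itself is assumed:

```python
# Hypothetical example: updating a belief using Bayes' theorem.
# Suppose your prior that an email is phishing is 1%, a certain keyword
# appears in 90% of phishing emails and in 5% of legitimate ones, and
# you then observe the keyword. (All numbers are invented.)

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

posterior = bayes_update(prior=0.01, likelihood=0.90, likelihood_alt=0.05)
print(round(posterior, 3))  # prints 0.154
```

The point of the example: even strong evidence (a 18:1 likelihood ratio) only moves a 1% prior up to about 15%, which is the kind of quantitative shift that distinguishes Bayesian belief revision from simply “believing whatever the test says.”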
You mean explicitly base their everyday beliefs and decisions on Bayesian probability? That strikes me as highly impractical… Could you give some specific examples?
As best I can tell it is impractical as an actual decision-making procedure for more complex cases, at least assuming well-formalized priors. As a limit to be asymptotically approached it seems sound, though—and that’s probably the best we can do on our hardware anyway.
I thought I could, but Yvain kind of took the wind out of my sails with his post that Nornagest linked to, above. That said, Eliezer does outline his vision of using Bayesian rationality in daily life here, and in that whole sequence of posts in general.