Even assuming you’re correct, postrationalism won’t help with any of that, because it’s nothing but systematized self-delusion. Rationality may not have benefits as huge as one would naively expect, but it is still substantially better than deliberately turning your back on even attempting to be rational, which is what postrationalism does, intentionally!
You have made a sweeping claim without any evidence.
I think it would be hard to find a postrationalist who hasn’t outright said so themself, though generally with wording spun to make it sound better.
If they say so in plain terms, your claim could easily be supported by citations. As for “spin”... that tends to be in the eye of the beholder.
(note: self-downvoted because I don’t really want to prolong the discussion)
I think it’s more like “this entire conversation lives in Simulacrum Level 3, where words are just about tribal affiliation, where empirical material evidence doesn’t even matter”. I’m kinda annoyed at the whole conversation, despite probably sharing some of Czynski’s frustrations with postrationalism, because it’s purely tribal, and everyone involved should know better.
The reason I’m annoyed at postrationalism is that it didn’t add anything new beyond things basically already in the Sequences (though it often describes itself as if it did). Most of the point of postrationalism to me seems to be “LW-style rationality, but with somewhat different aesthetics and weights/priors about what sort of things are most likely to be useful.” (Those different weights/priors are meaningful, but don’t seem quantitatively more significant than the weights/priors various camps of rationalists have.)
I think it’s actually quite fine to be “rationality, but with different aesthetics and a particular set of weights/priors on what’s important” (it seems like a reasonable thing to form some group identity around), but I’m annoyed when postrationalists seem to argue there’s a deeper philosophical divide than there is, and annoyed at people who argue against postrationality for getting sucked into the trap of arguing on the tribal level.
...
Note: Insofar as I have an actual disagreement with the post-rationalist-cluster, it’s a lack of common agreement about “what is supposed to happen to the fake frameworks over time.”
During the 2018 review I discussed this with Val a bit:
> Over the long term, there [should be] an expectation that if Fake Frameworks stick around, they will get grounded out into “real” frameworks, or at least have their limits more clearly spelled out. This takes lots of exploration, experimentation, modeling, and explanatory work, which can often take years. It makes sense to have a shared understanding that it takes years (especially because it’s often not people’s full-time job to write this sort of thing up), but I think it’s pretty important to the intellectual culture for people to trust that that’s part of the long-term goal (for things discussed on LessWrong, anyhow).
> I think a lot of the disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they would eventually ground out as “connected more clearly to the rest of our scientific understanding of the world”.
> I generally prefer to handle things with “escalating rewards and recognition” rather than rules that crimp people’s ability to brainstorm, or to write explanations for people who have some-but-not-all of a set of prerequisites.
> So one of the things I’m pretty excited about for the review process is creating a more robust system for (and an explicit answer to the question of) “when/how do we re-examine things that aren’t rigorously grounded?”
Gordon’s comment is from ~1.5 years ago, before the LW Review process existed. The LW Review process is less than a year old, and it will probably be another ~3 years before it has matured enough that everyone can trust it. But the basic idea seems like a straightforward solution to the problem of “are we allowed to explore weird ideas that don’t match the genre/aesthetic LWers tend to trust more, or not?”.
The answer is “yes, but eventually you are going to need to ground out the idea into scalable propositional knowledge, and if you can’t, it’s going to need to be acknowledged at least as non-scalable knowledge, if not outright false. It’s okay if it takes years, because we have years-long timescales to evaluate things.”
(Note: I’m making some claims here that are not yet consensus, and I expect my own opinion to subtly shift over the next few years. This is just me-as-an-individual laying out why I find this conversation boring and missing the point.)
> The reason I’m annoyed at postrationalism is that it didn’t add anything new beyond things basically already in the Sequences (though it often describes itself as if it did)
Well, it does, because “you can’t do everything with Bayes” is the incompatible opposite of “you can do everything with Bayes”. Likewise, pluralistic ontology, of the kind Valentine advises, is fundamentally incompatible with reductionism, even if Valentine doesn’t realise it.
Those who say that you can’t do everything with Bayes have not been very forthcoming about what you can’t do with Bayes, and even less so about what you can’t do with Bayes that you can do with other means. David Chapman, for example, keeps on taking a step back for every step forwards.
“Bayes” here I take to be a shorthand for the underlying pattern of reality which forces uncertainty to follow the Bayesian rules even when you don’t have numbers to quantify it.
And “everything” means “everything to do with action in the face of uncertainty.” (All quantifiers are bounded, even when the bound is not explicitly stated.)
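For concreteness, the “Bayesian rules” at issue here can be read as the standard conditioning formula (a gloss on the exchange, not something either participant spells out):

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{\sum_i P(E \mid H_i)\,P(H_i)},$$

where the sum runs over a fixed set of mutually exclusive hypotheses $H_i$; the disagreement below is partly about where that hypothesis set comes from in the first place.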
> Those who say that you can’t do everything with Bayes have not been very forthcoming about what you can’t do with Bayes,
I found Chapman on Bayes to be pretty clear and decisive: 1) you can’t generate novel hypotheses with Bayes; 2) Bayes doesn’t give you any guidance on when to backtrack or paradigm-shift; 3) you, as a human, don’t have enough compute to execute Bayes over nontrivial domains.
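A minimal sketch of what points 1 and 3 look like in practice (my illustration rather than Chapman’s; the coin-bias hypotheses and flip data are made up):

```python
# A toy Bayesian update over a fixed, explicitly enumerated hypothesis set.
# Hypotheses: three candidate biases for a coin. Data: a short run of flips.
# (Illustrative values only.)

priors = {0.25: 1 / 3, 0.50: 1 / 3, 0.75: 1 / 3}  # P(hypothesis) for each candidate bias
flips = [1, 1, 0, 1, 1, 1]                        # 1 = heads, 0 = tails

posterior = dict(priors)
for flip in flips:
    # Likelihood of this single flip under each hypothesis.
    likelihood = {b: (b if flip == 1 else 1 - b) for b in posterior}
    # Bayes: posterior is proportional to likelihood * prior; renormalize.
    unnormalized = {b: likelihood[b] * posterior[b] for b in posterior}
    total = sum(unnormalized.values())
    posterior = {b: p / total for b, p in unnormalized.items()}

print(posterior)
# Mass shifts toward bias 0.75, but only among the three listed hypotheses:
# nothing in the update can propose a hypothesis that isn't in `priors`
# (point 1), and a realistic domain would need an intractably large table of
# hypotheses to enumerate and update in the first place (point 3).
```

Point 2 shows up in the same toy setting: if the data actually followed a pattern none of the listed hypotheses can express (say, strict alternation), the update would still just favour the least-bad listed bias rather than signal that the hypothesis set itself needs rethinking.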
> “Bayes” here I take to be a shorthand for the underlying pattern of reality which forces uncertainty to follow the Bayesian rules even when you don’t have numbers to quantify it.
That’s not what it means—even here. Here uncertainty is in the mind of the beholder.
If you have a personal philosophy of metaphysical uncertainty, please create an unambiguous name for it.
Edit: it’s not as if Yudkowsky originally characterised Bayes as a true-but-not-useful thing. Chapman addresses Yudkowsky’s original version.
> That’s not what it means—even here. Here uncertainty is in the mind of the beholder.
Well, yes. I was not suggesting otherwise. The uncertainty still has to follow the Bayesian pattern if it is to be resolved in the direction of more accurate beliefs and not less.
That still isn’t the original claim; it’s falling back to a more defensible position.