I disagree. They seem to be vaguely gesturing at something that I would simply put as “use System 1 well”. They’re just trying to speak the language of System 1 to communicate the concept, and getting lost in concept space when attempting to do so.
I don’t really find the S1/S2 ontology sufficiently precise to be happy using it, but I agree that in those terms you could say that one aspect of what I’m talking about is developing rationality as an S1 rather than an S2 skill. As I understand it this is what CFAR and friends are working towards, but my impression of this approach is that it’s flawed because S1/S2 creates confused categories that will ultimately limit what can be accomplished that way. I can go into more depth but that’s the gloss.
There are people productively engaging with that concept, and they have none of these problems. Even if that concept is important and is what they’re trying to convey, accepting their framing is harmful, more harmful than any potential benefit could justify, compared with sticking to people who are grounded in true things rather than nice things.
What kind of evidence do you see that leads you to believe that people who accept Chapman’s framing were harmed? Do you see anything distinct from you not understanding the arguments those people make while discussing under that framing?
Additionally, it’s interesting that you suggest that this post uses exactly the same framing that Chapman uses. To me this post seems to break things down into different ontological concepts and thus implies a different frame. The ability to see that those are two different frames would be one of the things metarationality would supposedly help with (as it’s about how systems relate to each other).
Chapman’s entire shtick is pretending to be wise, and even worse, he’s good enough at it that people take his ideas seriously. They then spend months or years of effort building a superstructure of LW-shaped mysterianness on top of it, losing sight of the actual ability to distinguish true things from false things and/or accomplish goals.
The basic deal is that it professes to include all the goals and purpose of rationality, while also using other methods. But those other methods are thinly-disguised woo, which are attractive because they’re easy and comfortable, and comfortable because they’re not bound to effectiveness and accuracy. It keeps the style and language of the rationalist community—the bad parts—while pushing the simple view of truth so far back in its priorities that it’s just lip service.
I’ll grant that this isn’t quite the same flavor of anti-truth woo as Chapman’s. But the difference is unimportant to me.