Regarding “problems we don’t understand”, you pointed out an important meta-systematic skill: figuring out when different systems apply and don’t apply (by applying new systems learned to a list of 20 or so big problems).
The new post you’re alluding to sounds interesting, but “rationality” is a loaded term. Do you have specific skills of rationality in mind for that post?
No, which is part of the point. I do intend to start from the sequences-esque notion of the term (i.e. “rationality is systematized winning”), but I don’t necessarily intend to point to the kinds of things LW-style “rationality” currently focuses on. Indeed, there are some things LW-style “rationality” currently focuses on which I do not think are particularly useful for systematized winning, or are at least overemphasized.
I don’t know what point you’re referring to here. Do you mean that listing specific skills of rationality is bad for systematized winning?
I also want to wrangle more specifics from you, but I can just wait for your post. :)
Oh, I mean that part of the point of the post is to talk about what relative advantages/disadvantages rationality should have, in principle, if we’re doing it right—as opposed to whatever specific skills or strategies today’s rationalist community happens to have stumbled on. It’s about the relative advantages of the rationality practices which we hopefully converge to in the long run, not necessarily the rationality practices we have today.
Oh! That makes sense as a post on its own.
Listing the pros and cons of current rationalist techniques could then be compared against your ideal version of rationality to see what’s lacking (or to point out holes in the “ideal version”). Also, “current rationality techniques” is ill-defined in my head; the closest thing I can imagine is the CFAR manual, though that is not the list I would’ve made.
Yup, exactly.