Although I don’t agree with everything on this site, I’ve found this cluster of knowledge-related advice (learning abstractions) and the rest of the site (made by a LW’er, IIRC) very interesting, if not outright helpful, thus far; it seems to advocate that:
Forced learning/an overly fast pace (cramming) can be counterproductive, since you’re no longer learning for the sake of learning (mostly true in my experience).
Abstract knowledge (math) tends to be the most useful, since it can be applied fruitfully. You can readily put those abstractions to practical use by honing intuitions for how to approach technical problems, mainly by mapping subproblems onto mathematical abstractions. Framed that way, those problems (coding, calculation) become harder to forget how to solve (a toy illustration follows after this list).
Being curiosity-driven is instrumentally useful (it helps with future learning, delaying aging, etc.), and is of course rational.
Spaced repetition seems to work well for math and algorithms and is self-reinforcing if done with a curiosity-driven approach. However, instead of using specific software to “gamify” this, I personally just recall certain key principles in my head, ask myself the motivations behind certain concepts, and keep a list of summarized points/derivations/copied diagrams in a simple Notes document to review things “offline”. (But I’ll need to check out Anki sometime.)
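For concreteness, here is a minimal sketch of the kind of interval scheduling such software automates, loosely in the spirit of the SM-2 family of algorithms that Anki’s default scheduler descends from; the names and constants below are my own illustrative choices, not something the site (or Anki) prescribes.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # days until the next review
    ease: float = 2.5           # growth factor applied after a successful recall

def review(card: Card, recalled: bool) -> Card:
    """Update one card's schedule after a single review."""
    if recalled:
        # Successful recall: push the next review further into the future.
        card.interval_days *= card.ease
    else:
        # Lapse: reset to a short interval and make future growth slower.
        card.interval_days = 1.0
        card.ease = max(1.3, card.ease - 0.2)
    return card

# Example: a card recalled twice, then forgotten once.
card = Card()
for outcome in (True, True, False):
    card = review(card, outcome)
    print(f"next review in {card.interval_days:.1f} days (ease {card.ease:.2f})")
```

The only real idea is that intervals grow multiplicatively after successful recall and shrink after a lapse; everything else is tuning, which is roughly what my informal Notes-document review does by hand.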
That’s most of what I took away from the resources that the site offered.
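As a toy illustration of the “map subproblems onto abstractions” point above (my own example, not one from the site): once an ordering/dependency subproblem is recognized as “topological sort of a DAG”, the solution is both immediate and hard to forget.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency subproblem: tasks mapped to the tasks they depend on.
deps = {
    "deploy": {"test"},
    "test": {"build"},
    "build": {"fetch_deps"},
    "fetch_deps": set(),
}

# The abstraction (dependencies form a DAG, so a topological order exists)
# does all the work; the "solution" is a single library call.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['fetch_deps', 'build', 'test', 'deploy']
```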
Some disclaimers/reservations (strictly opinions) based on personal experiences, followed by some open questions:
I don’t think the “forgetting curve” is as important as the site makes it sound, particularly when it comes to abstractions; the curve might really be about “general” knowledge, i.e., memorizing facts in general. With abstract knowledge, the situation seems to be the opposite.
Hence, forgetting might not be as “precious” with abstractions, and might in fact impair one’s ability to learn in the future. Abstractions, including lessons in rationality, are (IMO) meant primarily to help with learning, not just for communicating/framing concepts.
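For reference (this is the standard textbook form, not something specific to the site), the forgetting curve is usually modeled, after Ebbinghaus, as exponential decay of retention $R$ over time $t$:

$$R(t) = e^{-t/S}$$

where $S$ is a memory-stability parameter that each successful spaced review is supposed to increase. The original data came from memorizing nonsense syllables, i.e., isolated facts, which is arguably consistent with the reservation above about it transferring poorly to abstractions.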
Integrating abstract knowledge meaningfully and efficiently might require a fair amount of object-level experience (recallable from long-term memory). Otherwise that knowledge isn’t grounded in experience, and we know that’s just as disadvantageous for humans as it is for AI.
Q1: It remains unclear whether there is a broader scope of application here (i.e., other ways that knowledge itself can be used to build competence) beyond honing rationality, Bayesianism, and general mathematical knowledge. Would it make sense if there were, or weren’t?
Q2: It seems important to be able to tell (at a self-supervised, intuitive level) when a learned abstraction is interfering with learning something new or with being competent, i.e., to detect whether it is being misapplied or is complicating the representation of knowledge more than it simplifies it. At first glance, appropriate and deep knowledge of the motivations behind abstractions, the situations they apply to, and their invariances would seem to help, as would prioritizing first-principles priors over rigid assumptions when approaching a problem.
Q3: Doing this may not suit anyone who isn’t a student or a full-time autodidact (someone who reads textbooks for fun and has a technical background). Also, I haven’t come across an example of someone who provably prolonged their useful career, earned millions of dollars, etc., as a result of abstraction; conversely, practitioners develop a lot of skills that pay off directly within a specialized economy. There still remain very obvious reasons to condense a whole bunch of mathy (and some computer-sciency) abstractions into flashcards and the like to save time.