When is philosophy useful?
Meta
This post is useful to me because 1) it helped me think more clearly about whether and how exactly philosophy is useful, and 2) I can read it later and get the benefits of (1) again.
The problem
Doing philosophy, and the reading and writing that go with it, is often so impractical that people do it just for the sake of doing it. When you write or read a bunch of stuff about X and accumulate categories, lists, and definitions, it feels like you’re making progress on X, but are you really?
At the beginning of his lecture on mindfulness, Joseph Goldstein (a meditation teacher) jokes that they’ll do an hour and a half of meditation; then, after pausing for laughter, he points out that this would actually be more useful than anything he could say on the subject.
Criteria
The way to tell whether philosophy is useful is whether it actually influences the future, i.e. whether you:
- directly use the information for an action or decision
- use the material OR the wordless intuitions gained from the material in your future thinking (times the probability that you’ll use it for a future action or decision; a toy sketch of this expected-value framing follows the list)
- refute the bad ideas that you read or write (pruning is progress too!)
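To make the “times the probability” framing concrete, here is a minimal expected-value sketch. The numbers and names are my own toy illustration, assuming you can even put rough numbers on these things:

```python
# Toy model: expected usefulness of a piece of reading/writing/thinking.
# Each anticipated future use is (probability you'll actually use it, value if you do).
# All numbers are made up for illustration.

def expected_usefulness(anticipated_uses):
    """Sum of p * value over every way you might use the material later."""
    return sum(p * value for p, value in anticipated_uses)

essay_on_x = [
    (0.30, 5.0),  # directly informs a concrete decision
    (0.60, 1.0),  # nudges future thinking via wordless intuition
    (0.10, 2.0),  # lets you refute a bad idea you'd otherwise act on
]

print(expected_usefulness(essay_on_x))  # 2.3
```

The point is just that material with a low probability of ever touching an action or decision needs a very high payoff to be worth the time.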
(slight caveat: reading and writing and thinking, makes you better at those things and it creates positive habits, even if it’s not “object level useful”. But still! I want to train my skills while attempting to be productive—I don’t take my mental energy for granted.)
Lost in time / deeper mental models / information bottlenecks
One really simplified way to measure utility is whether you can remember the philosophy that you did. But there’s something much deeper to this. If you force yourself to do useful philosophy, given that you can’t remember a lot, a solution that arises naturally is that you create deeper, or higher-level, representations of the things that you know.
The more abstract they get, the more information they can capture.
A way to view it (basically compression and indexing lol):
You simply replace the “main table” that was previously storing the information with an index that points to the information (because the “main table” runs out of space), and later an index to the indices, and so on. The main table gets increasingly abstract as the number of “leaves” at the ends of the nodes (explicit ideas, pieces of content, pieces of reasoning) keeps increasing. But the “main table” is what you’re mostly working with, in terms of what you perceive as your thoughts. So from your perspective, as you learn, you are simply getting increasingly abstract representations, and it takes longer to retrieve information or to put into words what you think you know, because you are spending more cycles traversing the index graph and translating between forms of representation. (Memory being reconstructive is loosely related to this, but I haven’t dug into that topic at all. Also, isn’t this analogous to how auto-encoders work?)
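A minimal sketch of that picture, taken very literally (the structure, keys, and contents here are hypothetical, not anything from the post): a flat store of explicit ideas gets replaced by layers of increasingly abstract keys, and recalling something becomes a multi-hop traversal from the abstract layer down to the leaves.

```python
# Hypothetical illustration of the "main table -> index -> index of indices" picture.
# Leaves are explicit ideas; higher layers are increasingly abstract labels.

leaves = {
    "note:goldstein": "meditating beats listening to a lecture about meditating",
    "note:pruning":   "refuting bad ideas is progress too",
    "note:retrieval": "how easy notes are to find matters a lot",
}

index = {  # first layer of abstraction: topic -> leaf keys
    "usefulness-criteria": ["note:goldstein", "note:pruning"],
    "note-taking":         ["note:retrieval"],
}

index_of_indices = {  # what "you" mostly work with: very abstract -> topics
    "when is philosophy useful?": ["usefulness-criteria", "note-taking"],
}

def retrieve(abstract_key):
    """Traverse from the abstract layer down to explicit leaves (more hops = slower recall)."""
    hops = 0
    for topic in index_of_indices[abstract_key]:
        hops += 1
        for leaf_key in index[topic]:
            hops += 1
            yield hops, leaves[leaf_key]

for hops, idea in retrieve("when is philosophy useful?"):
    print(hops, idea)
```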
(Balaji Srinivasan on memory management, paraphrased: “if you just have a giant mental tree that you attach concepts to over time, you can have compounding learning and you can remember everything because it is all connected”; similar to a memory palace.)
Format / readability / retrievability
This is subtle and hard to put into words, but in practice it is very impactful and keeps surprising me: how easy things are to find, and how easy they are to read once you find them, matters a lot. If you keep a diary in Notepad++, it’s essentially a flat list, which is super annoying to retrieve things from, since you can only scroll or ctrl-f. Fancier systems can make retrieval easier and allow formatting, but they require more initial energy to start writing notes in.
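As a rough illustration of the trade-off (a hypothetical setup, not a claim about any particular tool): a flat diary only supports a linear ctrl-f-style scan, whereas even a minimal tag index, which costs some effort up front, lets you jump straight to the relevant entries.

```python
# Hypothetical comparison: flat diary (ctrl-f style scan) vs. a minimal tag index.

flat_diary = [
    "2023-01-02 read Goldstein lecture, meditation > talk about meditation",
    "2023-02-10 thoughts on pruning bad ideas",
    "2023-03-05 notes on note-taking systems and retrieval",
]

def ctrl_f(entries, query):
    """Flat-list retrieval: scan every entry for a substring."""
    return [e for e in entries if query.lower() in e.lower()]

# A fancier system pays an upfront cost (tagging) to make later retrieval cheap.
tag_index = {
    "meditation": [0],
    "pruning":    [1],
    "retrieval":  [2],
    "notes":      [2],
}

def by_tag(entries, index, tag):
    """Indexed retrieval: jump straight to the entries filed under a tag."""
    return [entries[i] for i in index.get(tag, [])]

print(ctrl_f(flat_diary, "meditation"))
print(by_tag(flat_diary, tag_index, "retrieval"))
```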
YouTube comments contain almost zero back-and-forth; Twitter has more; Reddit has a lot more; 4chan has longer chains than Reddit but is weaker in terms of structure.
LessWrong is actually really good for this, because you can read a preview on-hover, and do it recursively for preview panels—Gwern-style. It has great formatting and is pleasant to edit. Ease of discussion via comments is better than all the above, except Reddit.
About retrieval on LW: this is my first time seeing it, and it’s actually pretty cool, though still not great, because retrieval is very much a UI/UX problem that is severely limited by living in the browser. Notice how you can’t use the keyboard to change how results are presented, to filter or sort, or to change the page; this goes for any search engine I’ve ever seen.
It’s hard to make dynamically changing and editable UIs. You can design nice UIs, but you can’t really design a superset of UIs that is traversable with hotkeys. Because it has to work on, like, 9000 different browsers and devices and screen sizes.
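For what it’s worth, the kind of control being asked for is easy to express outside the browser. Here is a hypothetical sketch (all names, data, and keybindings invented) of keystrokes remapping how search results are sorted and paged:

```python
# Hypothetical sketch of keyboard-driven result presentation: single keys
# remap sorting and pagination instead of mouse-only widgets.

results = [
    {"title": "When is philosophy useful?", "score": 0.9, "year": 2022},
    {"title": "Notes on note-taking",       "score": 0.7, "year": 2021},
    {"title": "Pruning bad ideas",          "score": 0.8, "year": 2023},
]

state = {"sort_key": "score", "page": 0, "page_size": 2}

def handle_key(key):
    """Map single keystrokes to presentation changes."""
    if key == "s":    # toggle sort between relevance ("score") and recency ("year")
        state["sort_key"] = "year" if state["sort_key"] == "score" else "score"
    elif key == "n":  # next page
        state["page"] += 1
    elif key == "p":  # previous page
        state["page"] = max(0, state["page"] - 1)

def render():
    """Print the current page of results under the current sort order."""
    ordered = sorted(results, key=lambda r: r[state["sort_key"]], reverse=True)
    start = state["page"] * state["page_size"]
    for r in ordered[start:start + state["page_size"]]:
        print(r["title"])

handle_key("s")  # now sorted by year
render()
```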
Retrievability transformed programming
I believe that StackOverflow + Google deeply transformed what it means to be a programmer. Think of the expectation-vs-reality meme: programmers spend almost all their time Googling things. The search engine is so powerful that it outsourced almost all of the memorization requirement to Googling skills + the ability to parse StackOverflow posts + trying out suggestions + figuring out whether a suggestion will work for you and how to reshape it for your codebase.