Day 1 of forced writing with an accountability partner (for context: I plan to write at least 500 words on some topic every day/weekday for the next few weeks… I occasionally rely on ChatGPT to turn outlines into paragraphs):
Title: Can We Make a Better Concept-Learning System Than Lists and Tag Libraries?
I enjoy finding concrete, valuable concepts where I can clearly delineate between knowing and not knowing them. For example: Schelling points are the salient or “focal” options that people tend to coordinate on even in the absence of explicit communication; survivorship bias is the tendency to focus on successful individuals or outcomes while ignoring those that were unsuccessful; R&D externalities are the positive spillover effects of research and development, and they can better explain why businesses choose not to invest in seemingly valuable research/technology—firms cannot capture all of the benefits their research creates—as opposed to narratives such as “shareholders are irrationally short-sighted or risk-averse.”
One might argue that many lists already provide similar information, so why would this be different and better? There are a few reasons the system I have in mind may outperform a traditional “list of valuable concepts,” and most of them boil down to aggregation, curation, and tailoring: there are potentially hundreds or even thousands of concepts, and audiences have diverse intellectual backgrounds, so you probably want a system that filters or recommends concepts for each user rather than a one-size-fits-all list. At the same time, you also probably want to bring multiple lists into one place. A more advanced platform of the type I have in mind could achieve this in a few ways:
Machine learning and pattern prediction: Users will often find that some concepts are already familiar, overly complex (e.g., they require prerequisite knowledge), or irrelevant to their work. Given the hundreds or even thousands of potential concepts, it would be good to have a system that makes initial predictions and recommendations based on how you’ve rated other concepts. (For example, the system should be able to predict that someone who is unfamiliar with some major principles in economics is also less likely to know other principles in economics.)
Simple rating search: Users could manually filter for concepts that tend to have high novelty, importance, and/or learnability scores.
Improved categorization (tagging) capabilities: Unlike the flat or hierarchical formats (e.g., bullet-point lists) you might see in blog posts, a specialized platform like this would allow richer tagging. (Admittedly, sites like the EA Forum let users tag overall posts, but those posts are filled with plenty of unrelated content, and it seems that the dominant source of these “lists of concepts you ought to learn” thus far has been aggregation-style posts.)
Peer-based search/filtering: Users could even manually “friend”/“follow” other users whom they epistemically identify with or respect, to see their learning habits. (“Episte-migos,” if you will.)
There is also an argument for dynamically crowdsourcing these concepts (rather than relying on a single author at a fixed point in time), although this approach probably has its own limitations.
Moving forward, there are a few things to consider:
Is there already a system like this in existence?
How much user data would be required before the system can make reliable recommendations that are worth using?
How much of the system’s value lies in its user interface, and how can this be optimized to ensure that users get the most out of it?
By addressing these issues, we can create a system that provides real value to individuals looking to expand their knowledge and decision-making abilities.