Haven’t read your entire post yet but agree broadly with the idea. Unsure of your methodology, but I think knowledge has to be built from the ground up. Lack of understanding leads to frustration. Upvote systems, at their best, encourage that difficult concepts not simply be described but taught/explained thoroughly rather than just ‘pointed at’.
For example, I can understand on some level if someone tries to explain to me why object-oriented design patterns in programming are inferior to procedural ones, but if I’ve never written programs with either methodology, I will only grasp the broadest strokes; none of the examples or reasoning will really resonate with me.
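To make that comparison concrete for readers who haven’t seen either style, here is a deliberately tiny, hypothetical illustration (the names `count_words` and `WordCounter` are mine, not from any particular codebase) of the same task written both ways:

```python
# Procedural style: plain data plus a free function.
def count_words(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Object-oriented style: the same logic wrapped in a class that owns state.
class WordCounter:
    def __init__(self) -> None:
        self.counts: dict[str, int] = {}

    def feed(self, text: str) -> None:
        for word in text.split():
            self.counts[word] = self.counts.get(word, 0) + 1

print(count_words("a b a"))  # {'a': 2, 'b': 1}
wc = WordCounter()
wc.feed("a b a")
print(wc.counts)             # {'a': 2, 'b': 1}
```

Without having actually maintained code in both styles, arguments about which one scales better remain abstract; the snippet only shows what the two shapes look like, not why one might be preferable.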
On average, when any concept is described, a certain number of people will have the necessary ‘base understanding’ to grok it from the explanation alone, and an additional number will need significantly more explanation to understand it.
I think at one extreme, you have the explanation from an extremely autistic brain, going into far more detail than most listeners need, assuming the listener lacks all relevant information.
At the other extreme, you have the schizophrenic or manic-brained explanation, which describes things purely intuitively, assuming the listener understands all of the unspoken elements without their being explained. To most people it sounds like complete gibberish.
I think the perfect middle ground is the ‘highly esteemed teacher-brained explanation’: someone who describes things both basically and intuitively in the right proportions, so the widest possible audience can understand at least some of the concept. Imagine the best teacher you ever had in college, whoever could convey difficult concepts in a way you immediately understood on a fundamental level, allowing you to then develop more complex understanding. I think upvote-based systems, at their best, encourage this sort of explanation.
I think at their WORST, upvote systems discourage valuable discourse that requires prior understanding of the subject matter before you can intuitively grok a difficult, novel piece of information.
This pushes content toward being easily comprehensible but lower in overall quality, novelty, and complexity. Derisively, this is called speaking to the ‘lowest common denominator’. It is the ‘endless summer’ of internet communities: the larger and less specific a demographic is, the less unique, interesting, and high-quality its content becomes, because the content valued by the average user differs from the content valued by the informed, experienced, insular user.
If your system intends to solve these problems, I strongly support it. I think a website/app can support a large community without its quality being lowered. The endless summer effect is not an inevitability of all systems of this type, but a symptom of defining the ‘most valuable information’ as the ‘most upvoted or engaged-with information’, which is frequently not the case! That’s clearly evident to anyone who’s used Reddit.
The Story Node design that I suggested permits spinning up such nodes independently, with gated input from users. The result could be effectively what Lightcone attempted with the LessWrong/Alignment Forum separation, just with less gatekeeping: Alignment Forum gates not only votes but also posts and comments. The latter seems particularly unreasonable and elitist to me, as if comments on LessWrong are not worthy of attention. I suspect this means that even if somebody still uses the Alignment Forum frontpage as their entry point to AI safety discussion (which I myself long ago abandoned in favour of LW), reading only the comments on Alignment Forum makes sense to no one. The design that I propose, on the other hand, separates the concern of “who can post the content” from the usefulness/signal calculation for the content.
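A rough sketch of that separation of concerns, with all names (`StoryNode`, `gated_voters`, `signal`) invented by me to illustrate the idea rather than taken from any actual Story Node spec: anyone may post and vote, but only a gated set of users contributes to the node’s signal score.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    votes: dict[str, int] = field(default_factory=dict)  # voter -> +1 or -1

@dataclass
class StoryNode:
    # Only these users' votes count toward the signal; anyone may post.
    gated_voters: set[str]
    posts: list[Post] = field(default_factory=list)

    def submit(self, author: str, text: str) -> Post:
        post = Post(author, text)  # no gate on who can post or comment
        self.posts.append(post)
        return post

    def vote(self, voter: str, post: Post, value: int) -> None:
        post.votes[voter] = value  # everyone may vote...

    def signal(self, post: Post) -> int:
        # ...but only gated voters' votes enter the signal calculation.
        return sum(v for u, v in post.votes.items() if u in self.gated_voters)

node = StoryNode(gated_voters={"alice"})
p = node.submit("random_user", "a post anyone can write")
node.vote("alice", p, 1)
node.vote("bob", p, 1)
print(node.signal(p))  # 1: bob's vote is stored but doesn't move the signal
```

The point of the sketch is only the shape of the design: gating lives entirely in `signal`, so changing who counts never restricts who can participate.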
And this system is more general than prioritising “high-quality AI safety content”: more than one person has written here that there is too much AI safety and x-risk content for them. Filtering by specific tags is brittle because people apply tags rather inconsistently to their writing. So another Story Node could be founded by the people who want to read and upvote primarily “good old rationality content” rather than AI safety content.