(This is my first post so please kindly point me to my misconceptions if there are any)
This is seeking a technological solution to a social problem.
It is still strange to me that people say this as if it were a criticism.
It is not that strange when we are dealing with technological solutions to problems we do not yet understand. You define your goal as creating a “commons of knowledge”. Consider a few points:
[1] There seems to be a confusion between information and knowledge. I know the LW community is attempting to provide a rational methodology towards knowledge, but I have not seen this done in a way that is substantially different from ordinary discussion. It is discussion as usual, with a stronger commitment to rationality (which is great!).
[2] We do not have an efficient way of representing arguments. Argument mapping is one attempt in that direction. I personally tend to use a numbering convention inspired by Wittgenstein (I am using it here as an example; see the short sketch after these three points). The bottom line is that discussions tend to be quite unstructured, and opinions tend to be conflated with truths (see [1]).
[3] Given [1] and [2], as an outsider I do not understand what the root group represents. Are these the people who are more rational? Who decided that?
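Purely to illustrate what I mean by a numbering convention in [2]: a minimal Python sketch (the class and function names are mine and hypothetical) that numbers propositions Tractatus-style, so elaborations of point n become n.1, n.2, and so on.

```python
# Illustrative sketch of a Wittgenstein-inspired decimal numbering for propositions.
class Proposition:
    """A proposition plus the sub-propositions that elaborate on it."""
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

def print_numbered(props, prefix=""):
    """Number propositions 1, 1.1, 1.2, 2, 2.1, ... and print them in order."""
    for i, prop in enumerate(props, start=1):
        number = f"{prefix}{i}"
        print(f"{number} {prop.text}")
        print_numbered(prop.children, prefix=number + ".")

# A tiny example argument (the content is hypothetical).
argument = [
    Proposition("Discussions conflate opinions with established facts.", [
        Proposition("Claims rarely carry explicit evidence or confidence."),
    ]),
    Proposition("We lack an efficient way of representing arguments.", [
        Proposition("Argument mapping is one attempt in that direction."),
    ]),
]
print_numbered(argument)
# 1 Discussions conflate opinions with established facts.
# 1.1 Claims rarely carry explicit evidence or confidence.
# 2 We lack an efficient way of representing arguments.
# 2.1 Argument mapping is one attempt in that direction.
```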
So maybe that is what Plethora meant. I am really interested in this problem myself and have been thinking about it for some time. My recommendation would be to focus first on smaller issues, such as how to represent an argument in a way that lets you extract a truth rating. But even that is too ambitious at the moment. How about a technological solution for representing arguments with clarity, so that both sides (a rough sketch of one possible data model follows below):
can see what is being said in clearly labeled propositions.
can identify errors in logic and mark them down.
can weed out opinions from experimentally confirmed scientific facts.
can link to sources and have a way to recursively examine their ‘truth rating’ down to the most primary source.
These are just a few indicative challenges. There are also issues with methods for source verification, exemplified by the ongoing data-forging scandals in psychology and neuroscience, and the list goes on...
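To make the wish list above less abstract, here is a minimal Python sketch of one possible data model. Everything in it is hypothetical: the class names, the opinion/fact/primary-source distinction, and especially the aggregation rule for the 'truth rating' are placeholders for illustration, not a proposal for how the rating should actually be computed.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Kind(Enum):
    OPINION = "opinion"          # a stated view with no evidential backing
    FACT = "fact"                # a claim backed by experimental evidence
    PRIMARY_SOURCE = "primary"   # e.g. the original dataset or paper

@dataclass
class Claim:
    label: str                   # clearly labeled proposition, e.g. "P1"
    text: str
    kind: Kind
    sources: List["Claim"] = field(default_factory=list)  # links to supporting claims
    logic_error: Optional[str] = None   # set when someone flags a fallacy
    base_rating: float = 1.0            # rating assigned to primary sources

def truth_rating(claim: Claim) -> float:
    """Recursively examine a claim's sources down to the most primary source.

    Hypothetical rule: an opinion, or a claim flagged with a logic error,
    scores 0; a primary source keeps its base rating; anything else gets
    the average rating of its sources.
    """
    if claim.logic_error is not None or claim.kind is Kind.OPINION:
        return 0.0
    if claim.kind is Kind.PRIMARY_SOURCE or not claim.sources:
        return claim.base_rating
    return sum(truth_rating(s) for s in claim.sources) / len(claim.sources)

# Example: one claim backed by a primary source, one unsupported opinion.
dataset = Claim("S1", "Replication dataset for study X", Kind.PRIMARY_SOURCE, base_rating=0.9)
p1 = Claim("P1", "Intervention X improves outcome Y", Kind.FACT, sources=[dataset])
p2 = Claim("P2", "Everyone already knows that X works", Kind.OPINION)

print(truth_rating(p1))  # 0.9 -- inherited from its primary source
print(truth_rating(p2))  # 0.0 -- opinions carry no truth rating here
```

The hard part, of course, is not the data structure but agreeing on how ratings should propagate and who gets to flag a logic error.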
I would like to vote up this recommendation:
This is an un-explored area, and seems to me like it would have a higher ROI than a deep dive into variations on voting/rating/reputation systems.