So I guess this is as good a place as any to express that view.
Meta point: it seems sad to me if the arguments on a topic are spread across many posts in a way that makes it hard for a person to track them all down, e.g. all the arguments regarding generalization/not-generalization.
This makes me want something like the Arbital wiki vision where you can find not just settled facts, but also the list of arguments/considerations in either direction on disputed topics.
Plausibly the existing LW/AF wiki-tag system could do this as far as format/software goes; we just need to get people creating pages for all the concepts/disagreements, then properly tagging and distilling things. This is in addition to better pages for relatively more settled ideas like “inner alignment”.
All of this is a plausible thing for the LW team to try to make happen. [My focus for the next month or two is (a) ensuring that amidst all the great discussion of AI, LessWrong doesn’t lose its identity as a site for Rationality/epistemics/pursuing truth/accurate models across all domains, and (b) fostering epistemics in the new wave of alignment researchers (and community builders), though I am quite uncertain about many aspects of this goal/plan.]
I have recently been wondering where we stand on the very basic “describe the problem” criterion for productive conversations. Of late our conversations seem to have more of the flavor of proposal of a solution → criticism of that solution, which of course is fine if we have the problem described; but if that were the case, why do so many criticisms take the form of disagreements over the nature of the problem?
A very reasonable objection is that there are too many unknowns at work, so people are working on those instead. But that itself feels like a meta-problem, so the same reasoning should apply: we want a description of the meta-problem.
I suppose it might be fair to say we are currently working on competing descriptions of the meta-problem. Note to self: doing another survey of the recent conversations with this in mind might be clarifying.
Stampy’s Q&A format might be a reasonable fit, given that we’re aiming to become a single point of access for alignment.