> The community seems to not update on ideas and concepts that didn’t originate here.
This seems obviously wrong to me, so I probably don’t understand what you mean. Once you remove the ideas of Hofstadter, Jaynes, Drescher, Kahneman, Pearl, Dawkins, Asimov, Nozick, etc… from Less Wrong, there isn’t a whole lot left. Am I wrong?
These ideas were all presented in the sequences. New ideas from outside LessWrong don’t propagate unless they’re packaged in a semi-original synthesis post by a LWer. If you think about it that way, this seems a waste. Why duplicate effort if the original author of the idea did it right?
Also in the past two years while we do link to new ideas, they don’t seem to propagate and are hardly ever referenced a year later. Sequence posts however are.
> Also in the past two years while we do link to new ideas, they don’t seem to propagate and are hardly ever referenced a year later. Sequence posts however are.
E.g., despite two to five posts on the matter and many comments, there seems to be a huge disconnect between how folk like Wei_Dai, cousin_it, Vladimir_Nesov, &c. interpret Solomonoff induction, and how average LW commenters interpret Solomonoff induction, with the latter group echoing a naive, broken interpretation of the math and thus giving newer people mistaken ideas. It’s frustrating because probability theory is one of the few externally credible things that set LW’s epistemology apart, and yet a substantial fraction of LW folk who bring up algorithmic probability do so for bad reasons and in completely inappropriate contexts. Furthermore, because they think they understand the math, they also think they have special insight into why the person they disagree with is wrong.
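(For reference, since the disagreement here is about how the math gets interpreted: the object in question is Solomonoff’s universal prior. What follows is only a rough textbook-style sketch in standard notation, not a formula taken from any of these comments. For a universal prefix machine $U$,

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

i.e. the prior weight of a string $x$ is a sum over every program $p$ whose output begins with $x$, each contributing $2^{-\ell(p)}$ where $\ell(p)$ is the program’s length in bits; prediction then conditions this mixture on the data seen so far, $M(y \mid x) = M(xy)/M(x)$.)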
> E.g., despite two to five posts on the matter and many comments, there seems to be a huge disconnect between how folk like Wei_Dai, cousin_it, Vladimir_Nesov, &c. interpret Solomonoff induction, and how average LW commenters interpret Solomonoff induction, with the latter group echoing a naive, broken interpretation of the math and thus giving newer people mistaken ideas.
For example? (I don’t recall average users mentioning the subject all that much, right or wrong.)
> Furthermore, because they think they understand the math, they also think they have special insight into why the person they disagree with is wrong.
I haven’t seen this as applied to Solomonoff induction.
> (I don’t recall average users mentioning the subject all that much, right or wrong.)
I suppose I meant “relatively average”. Anyway I don’t know where to find examples off the top of my head, sorry.
> I haven’t seen this as applied to Solomonoff induction.
IIRC I’ve seen it two to five times, so this specifically is not a big deal in any case.
I’ve seen more general errors pertaining to algorithmic probability much more often than that, sometimes committed by high-status folk like lukeprog, who wrote a post (sequence?) allegedly explaining Solomonoff induction.
Thank you. While I don’t recall the examples myself, I believe your testimony regarding the two to five examples you’ve noticed. I expect I am much more likely to notice such comments in the future given the prompting, and so will take more care when parsing.
> I’ve seen more general errors pertaining to algorithmic probability much more often than that, sometimes committed by high-status folk like lukeprog, who wrote a post (sequence?) allegedly explaining Solomonoff induction.
I can see why that would be disconcerting.
Yes, but they did not “originate here” (which is what I was responding to in the part I quoted).