I don’t feel like going looking for the original post (someone can move this there if they want), but responding to Godot’s complaint about “cached thoughts”: it is now apparent that they should more accurately be called “habitual thoughts”, thoughts that automatically recur in response to a particular stimulus.
It helps to keep in mind that the sequences are not polished works of brilliance, but first drafts for a book, written as part of a two-year blog-a-day marathon, that will never be revised. So as long as that “sequences” link is up there, we’re stuck with the unpolished bits.
Of course. That is why I wrote “now apparent”; it only occurred to me recently, largely as a result of some research I did on habits a few months ago.
By non-LW rationalists I mean the people who promote science, for instance.
edit: On the rationality point, the issue is that, IMO, breaking the improvement down into two sub-improvements, ‘having the most unbiased selection of propositions’ and ‘performing the most accurate Bayesian updates on them’, simply doesn’t yield the biggest win for computationally bounded agents compared to the status quo: trying to generate more of the most useful hypotheses (at the expense of not generating less useful ones), and propagating certainty between hypotheses in such a way that the biases arising from a cherry-picked selection of hypotheses (a consequence of pruning) are not too harmful. I’d dare to guess that if you generate hypotheses as usual (with the usual pruning) and then do updates on them in a new way, you’d probably self-sabotage: you end up updating on N propositions that support or undermine proposition A, and then become spuriously confident in A or ~A, because N is a small, biased sample out of M >>> N hypotheses. The Roko incident looks like a rather amusing instance of this.
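A toy simulation may make the worry concrete. This is entirely my own illustration, not anything from the comment: the Gaussian evidence model, the particular M and N, and the “take the top-N most A-favourable items” pruning rule are all assumptions chosen for the sketch.

```python
import math
import random

random.seed(0)

# Toy model (assumption, for illustration only): proposition A is in fact false.
# There are M independent pieces of evidence, each contributing a
# log-likelihood-ratio for A; on average they point weakly against A,
# but a minority happen to point toward it.
M = 5000
evidence = [random.gauss(-0.05, 1.0) for _ in range(M)]  # log-odds contributions

def posterior_log_odds(prior_log_odds, llrs):
    """Naive Bayesian update on independent evidence: sum the log-likelihood-ratios."""
    return prior_log_odds + sum(llrs)

def prob(log_odds):
    """Numerically stable logistic, so large negative log-odds don't overflow."""
    if log_odds >= 0:
        return 1.0 / (1.0 + math.exp(-log_odds))
    z = math.exp(log_odds)
    return z / (1.0 + z)

prior = 0.0  # 50/50 prior, expressed in log-odds

# Updating on all M items drifts toward "A is false", as it should.
full = posterior_log_odds(prior, evidence)

# "Pruned" hypothesis generation: we only ever notice the N items most
# favourable to A (a cherry-picked sample), then update as if they were
# an unbiased sample of the M.
N = 50
pruned = posterior_log_odds(prior, sorted(evidence, reverse=True)[:N])

print(f"update on all M={M} items: P(A) ~= {prob(full):.3f}")    # near 0
print(f"update on top  N={N} items: P(A) ~= {prob(pruned):.3f}")  # spuriously near 1
```

With all M items the posterior lands near 0; with only the N most A-favourable items it lands near 1, which is the kind of self-sabotage being described: accurate updating applied to a biased, pruned sample of propositions.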
Do we need to predict? http://rationalwiki.org/wiki/LessWrong & http://rationalwiki.org/wiki/Thread:User_talk:WaitingforGodot/Criticisms_of_LessWrong (and keep in mind this is with David Gerard watering it down).
And Tetronian.
I’ll note yet again that, in the general case, if you’re worrying about your image on RationalWiki, then you’re bottoming out the obscurity scale.
You’re missing the point...
Well, it is a reaction. I’m cautioning against overtraining on a single datum.
Agreed. Even on RationalWiki, there are no more than 5 people who care enough about LessWrong to talk about it regularly, excluding you and me.
Copied with some editing to the Sequence Rerun of Cached Thoughts post
You haven’t yet gained enough prominence...