I wonder what we will hear from non-LW rationalists about the SIAI when it gains enough prominence. I think it’s pretty easy to predict…
I don’t really want to watch that live, though. Some day a genuine technological danger, AI-related or not, may actually be foreseen, and then the actions of boys who read about the Chupacabra in fiction and then cry wolf (and get candy on request for the clarity of their cries) are going to ever so slightly raise existential risk (if widely rebutted).
Note that cryonics is pretty incidental to rationality. If anything, having people live forever is likely to slow down progress, due to increased odds of gerontocracy, and thus be detrimental to the survival of the human species. EY reflects on this in his “Three Worlds Collide” story.
Do we need to predict? See http://rationalwiki.org/wiki/LessWrong and http://rationalwiki.org/wiki/Thread:User_talk:WaitingforGodot/Criticisms_of_LessWrong. And keep in mind this is with David Gerard watering it down.
And Tetronian.
I’ll note yet again that, in the general case, if you’re worrying about your image on RationalWiki, then you’re bottoming out the obscurity scale.
You’re missing the point...
Well, it is a reaction. I’m cautioning against overtraining on a single datum.
Agreed. Even on RationalWiki, there are no more than 5 people who care enough about LessWrong to talk about it regularly, excluding you and me.
I don’t feel like going looking for the original post (someone can move this there if they want), but responding to Godot’s complaint about “cached thoughts”: it is now apparent that they would more accurately be called “habitual thoughts”, thoughts that automatically recur in response to a particular stimulus.
Copied with some editing to the Sequence Rerun of Cached Thoughts post
It helps to keep in mind that the sequences are not polished works of brilliance, but first drafts for a book, written as part of a two-year blog-a-day marathon, that will never be revised. So as long as that “sequences” link is up there, we’re stuck with the unpolished bits.
Of course. That is why I wrote “now apparent”; it only occurred to me recently, largely as a result of some research I did on habits a few months ago.
You haven’t yet gained enough prominence...
By non-LW rationalists I mean, for instance, the people who promote science.
edit: On the rationality point, the issue is that, IMO, breaking the improvement down into two sub-improvements, ‘having the most unbiased selection of propositions’ and ‘performing the most accurate Bayesian updates on them’, simply doesn’t yield the most win for computationally bounded agents compared to the status quo: trying to generate more of the most useful hypotheses (at the expense of not generating the less useful ones), and propagating certainty between hypotheses in a way that keeps the biases arising from that cherry-picked selection (a consequence of pruning) from being too harmful. I’d dare to guess that if you generate hypotheses as usual (with the usual pruning) and then update on them in the new way, you’d probably self-sabotage: you end up updating on N propositions that support or undercut a proposition A, and then become superfluously confident in A or ~A, because those N are a small, biased sample out of M >>> N hypotheses. The Roko incident looks like a rather amusing instance of this.
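A minimal sketch of that failure mode (the pool size, the likelihood ratios, and the 4x selection bias below are illustrative assumptions, not anything claimed above): if the pruning step preferentially surfaces evidence that favours A, a perfectly correct Bayesian update on the N surviving items still drives you to near-certainty in A, even though the full pool of M items is uninformative.

```python
import math
import random

random.seed(0)

M, N = 10_000, 50      # full evidence pool vs. the small pruned sample
prior = 0.5            # even prior odds on proposition A

# Each piece of evidence carries a likelihood ratio P(e|A) / P(e|~A).
# Half the pool mildly supports A and half mildly supports ~A, so the
# pool as a whole is uninformative about A.
pool = [2.0] * (M // 2) + [0.5] * (M // 2)

def posterior(evidence, prior):
    """Naive Bayesian update, done in log-odds space to avoid overflow."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in evidence:
        log_odds += math.log(lr)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Updating on the full pool leaves the posterior at the prior.
print(f"posterior on all M items:    {posterior(pool, prior):.3f}")

# Cherry-picked sample: the pruning step is 4x more likely to surface
# evidence that supports A (an illustrative bias, chosen only for the demo).
weights = [4.0 if lr > 1.0 else 1.0 for lr in pool]
pruned = random.choices(pool, weights=weights, k=N)
print(f"posterior on N pruned items: {posterior(pruned, prior):.3f}")
```

The overconfidence here comes entirely from the biased selection, not from the update rule, which is the self-sabotage the comment above is pointing at.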