I upvoted you because you caused this response to be generated, which was informative to read, and I like informative things, and whatever generates informative things can’t be all bad <3
Thank you for that! :-)
...
However, I strongly disagree with your claim that LW’s audience is “uninformed”, except in the generalized sense that nearly all humans are ignorant about nearly all detailed topics... and yes, nearly all contributors to Lesswrong are humans, and are thus ignorant in general by default.
In my personal experience, however, most people on Lesswrong are unusually well informed, relative to numerous plausible baselines, on topics relevant to good judgement, skilled prediction, computer science, and similar areas.
...
Also, it seems like you used the word “alarmist” as though it deserved negative connotations, whereas I feel that having well-designed methods for raising alarms and responding to real emergencies is critical to getting good outcomes in life overall, in light of the non-Gaussian distribution of outcomes over events that is common to real-world dynamical processes. So… “alarmism” could, depending on details, be good or bad or in between.
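To make the tail point concrete, here is a toy sketch (the distribution choice and numbers are illustrative assumptions of mine, not a claim about any specific risk): under a thin-tailed Gaussian, the rarest events contribute almost nothing to total impact, while under a heavy-tailed distribution a tiny fraction of events carries a large share of it, which is exactly the regime where good alarm mechanisms pay off.

```python
# Toy comparison: how much of the total magnitude is carried by the
# rarest 0.1% of events, under a thin-tailed (Gaussian) vs a
# heavy-tailed (Pareto) outcome distribution. Illustrative only.
import random

random.seed(0)
N = 1_000_000
gaussian = [abs(random.gauss(0, 1)) for _ in range(N)]
pareto = [random.paretovariate(1.5) for _ in range(N)]  # tail index 1.5: heavy tail

def top_share(xs, top_frac=0.001):
    """Fraction of the summed magnitude contributed by the largest top_frac of draws."""
    xs = sorted(xs)
    cut = int(len(xs) * (1 - top_frac))
    return sum(xs[cut:]) / sum(xs)

print(f"Gaussian: top 0.1% of events carry {top_share(gaussian):.1%} of total magnitude")
print(f"Pareto:   top 0.1% of events carry {top_share(pareto):.1%} of total magnitude")
```

In the heavy-tailed case a handful of draws dominates the total, so institutions that detect and respond to rare extremes matter far more than Gaussian intuitions suggest.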
I think the world’s generally disastrously incompetent response to covid-19’s escape from a lab, and its subsequent killing of millions of people, is a vivid illustration of a lack of competent, admirable “alarmism” in the ambient culture. Thus I see Lesswrong as helpfully “counter-culture” here, and a net positive.
...
Also, even if the typical reader on Lesswrong is “more than normally uninformed and unskillfully alarmist”, that does not coherently imply that exposing the audience to short, interesting, informative content about AI advances is a bad idea.
I think, in this sense, your model of discussion, decision making, and debate assumes that adults can’t really discuss things productively, and so perhaps everything on the internet should proceed as if everyone is incompetent and only worthy of carefully crafted and highly manipulative speech?
And then perhaps the post above was not “cautiously manipulative enough” to suit your tastes?
Maybe I’m wrong in imputing this implicit claim to you?
And maybe I’m wrong to reject this claim in the places that I sometimes find it?
I’d be open to discussion here :-)
Finally, your claim that “you” (who actually? which people specifically?) somehow “have an AGI death cult going here” seems like it might be “relatively uninformed and relatively alarmist”?
Or maybe your own goal is to communicate an ad hominem and then feel good about it somehow? If you are not simply emoting, but actually have a robust model here, then I’d be interested in hearing how it unpacks!
My own starting point in these regards tends to be Bainbridge and Stark’s sociological model of cults from The Future Of Religion. Since positive cultural innovation has cult formation as a known negative attractor, it is helpful, if one’s goal is to create positive-EV cultural innovations, to actively try to detect and ameliorate such tendencies.
For example, it is useful and healthy (in my opinion) to regularly survey one’s own beliefs, and those of others, using a lens where one ASSUMES (for the sake of exploratory discovery) that some of the beliefs exist to generate plausible IOUs for the delivery of goods that are hard-to-impossible to truly acquire, and that those beliefs are then protected from socially vivid falsification via the manipulation of tolerated rhetoric and social process. I regularly try to pop such bubbles in a humane and gentle way when I see them starting to form in my ambient social community. If this is unwelcome I sometimes leave the community… and I’m here for now… and maybe I’m “doing it wrong” (which is totally possible), but if so then I would hope people would explain to me what I’m doing wrong, so I can learn n’stuff.
Every couple of years I run the Bonewits Checklist, and it has never returned a score so high as to be worrisome (except maybe for parts of the F2F community in Berkeley within two or three years on either side of Trump’s election?), and many, many, many things in modern society get higher scores, as near as I can tell :-(
For example, huge swaths of academia seem to me to be almost entirely bullshit, existing almost entirely to maintain false compensators for the academics and those who fund them.
Also, nearly any effective political movement flirts with worryingly high Bonewits scores.
Also, any non-profit not run essentially entirely on the interest of a giant endowment will flirt with a higher Bonewits score.
Are you against all non-engineering academic science, and all non-profits, and all politics? Somehow I doubt this...
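(For concreteness, the scoring mechanics I have in mind are simple enough to sketch. The criteria named below are an illustrative subset, paraphrased from memory, and the ratings are placeholders rather than my actual scores for any group; the full Advanced Bonewits’ Cult Danger Evaluation Frame has 18 items.)

```python
# A minimal sketch of tallying a Bonewits-style cult danger score:
# each criterion is rated 1 (low danger) to 10 (high danger), and the
# overall score is the average across criteria (Bonewits' original
# tallies can equivalently be summed; only the scale differs).
# Criterion names are paraphrased from memory; ratings are placeholders.

ratings = {
    "internal control over members": 2,
    "external political/economic power sought": 3,
    "wisdom claimed by leaders": 4,
    "dogma (rigidity of belief)": 2,
    "censorship of dissent": 2,
    "isolation from outsiders": 1,
}

score = sum(ratings.values()) / len(ratings)
print(f"Mean danger rating: {score:.1f} / 10")
```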
In general, I feel your take here is just not well formed enough to be useful, and if you were going to really put in the intellectual and moral elbow grease to sharpen your points into something helpfully actionable, you might need to read some, and actually think for a while?
Finally finally, the “death cult” part doesn’t even make sense… If you insist on using the noun “cult”, then it is, if anything, an instance of an extended and heterogeneous community opposed to dangerous robots and in favor of life.
Are you OK? A hypothesis here is that you might be having a bad time :-(
It feels to me like your comment here was something you could predict would not be well received and you posted it anyway.
Thus, from an emotional perspective, you have earned a modicum of my admiration for persisting through social fear into an expression of concern for the world’s larger wellbeing! I think that this core impulse is a source of much good in the world. As I said at the outset: I upvoted!
Please do not take my direct challenges to your numerous semi-implicit claims to be an attack. I’m trying to see if your morally praiseworthy impulses have a seed of epistemic validity, and help you articulate it better if it exists. First we learn, then we plan, then we act! If you can’t unpack your criticism into something cogently actionable, then maybe by talking it out we can improve the contents of our minds? :-)
You made interesting points. In particular, I did not know about the Cult checklist, which is really interesting. I’d be interested in your evaluation of LW based on that list.
I also like that you really engage with the points made in the comment. Moreover, I agree that posting a comment even though you can predict that it will not be well-received is something that should be encouraged, given that you are convinced of the comment’s value.
However, I think you are reading too much into the comment at one point: “Are you OK? A hypothesis here is that you might be having a bad time :-(” seems a bit out of place, because it seems to suggest that speculating about alleged motivations is helpful.