Ah, sorry, I probably should have explained something important: around a decade ago, Lesswrong people and others noticed that the act of finding and justifying contempt for Acceptable Targets was an unexpectedly serious flaw in the human brain. It looks kinda bad on the surface (causing global conflict and outgrouping and all), but it’s actually far, far worse.
I think this might have been noticed around the time of the rise of wokeness, and when EA started getting closer to the rationalist movement (EAs, too, often felt intense negative emotions about people who weren’t “getting with the program”, although now that they know about it, most know to mitigate the effect).
The rabbit hole for this is surprisingly deep, and different Lesswrong users have different stances on the human drive to search for and justify Acceptable Targets. You basically walked into invisible helicopter blades here; I’m not sure what could possibly have been done to avoid it.
That’s interesting, I was not aware of this dynamic. Were there any particular posts that served as a good summary or lodestar for people’s stances on this topic?
As a relative outsider, and with the job I have (and also fully acknowledging the self-serving aspect of what I’m about to say), this strikes me as a naive and self-destructive position to hold. People in the real world lie, cheat, manipulate, and exploit others, and it seems patently obvious to me that we need mechanisms to discourage that behavior. A culture of disdain towards that conduct is only one aspect of that fight.
Never mind, I think it’s more of an EA thing than a Lesswrong thing. If you’re more focused on rationality than effective altruism then I’m not sure how helpful it will be.
Behavior-discouraging mechanisms are of course a basic feature of life, but reality is often more complicated than that. I think the lodestar post is Social Dark Matter, which, as a public attorney, you’ll probably find pretty interesting anyway even though it’s long.
This is not a LessWrong dynamic I’ve particularly noticed, and describing it as invisible helicopter blades seems inaccurate to me.
(I also don’t really get what trevor is talking about)