This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with, since they consider themselves diametrically opposed to techies/rationalists/etc and will not engage with them in good faith. They are also probably a single-digit number of people, albeit a group really good at getting under techies’ skin.
I’m mostly thinking about cost-benefit. I think even a tiny effort towards expressing empathy would have a moderately beneficial effect, even if only for the people we’re showing empathy towards.
It’s a beautiful dream, but I dunno, man. Have you ever seen Timnit engage charitably and in good faith with anyone she’s ever publicly disagreed with?
And absent such charity and good faith, what good could come of any interaction whatsoever?
Really? I think a tiny bit of effort will do exactly nothing, or at best further entrench their beliefs (“See? Even the rationalists think we have valid points!”). The best response is just to ignore them, like most trolls.
I’d further add: they appropriate the language of anti-appropriation, but are not themselves skilled at recognizing when others are seeking equity in social systems. They seem socially disoriented by a threat they see, in much the same way I see Yudkowsky crashing communicatively in response to a threat. It doesn’t surprise me that they’re upset at Yudkowsky; both they and Yudkowsky strike me as instantiating the Waluigi of their own position, where resistance to a thing ends up partly containing the thing they are afraid of. The things they claim to care about are worth caring about, but I cannot endorse their strategy. Care for workers, certainly, but some of the elements of their acronym very much do intend to prioritize that, and it’s possible to simply ignore them and keep on doing the right thing. Nobody can make you be a good person, and if someone is trying to, the only thing you can do is let their emotive words pass over you and take their thoughtful words as a claim about their own perspective.
Like Yudkowsky’s, their perspectives on the threat are useful. But there’s no need for either side to dismiss the other, in my view; they see the same threat and each feels the other can’t see it. Just keep trying to make the world better and it’ll solve both their problems.
So—anyone have any ideas for how to drastically improve the memetic resistance to confusion attacks of all beings, computer or chemical, and strengthen and broaden caring between circles of concern?
This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with
In hindsight, this seems quite obviously wrong; extending more olive branches would have been better, even if only to legibly demonstrate that safetyists attempted to play nice.