I think this post was among the more crisp updates that helped me understand Benquo’s worldview, and shifted my own. I think I still disagree with many of Benquo’s next-steps or approach, but I’m actually not sure. Rereading this post is highlighting some areas I notice I’m confused about.
This post clearly articulates a problem with language serving two functions at once: "communicating about object-level facts" and "political coalitions, attack/defense, etc." That dual role makes it really difficult to communicate important true facts without poking at the social fabric, which often results in the social fabric poking back.
I’m still not sure what to do about this – the social fabric matters. But I do think the status quo of how language typically works (even on LW) is pretty bad. It’s particularly relevant among various AI orgs whose members might have wildly different worldviews or strategies, and who may think each other net negative, but who nonetheless (I think) are better off sharing information and collaborating in various ways anyway. (This isn’t exactly about “crimes”, but I think the subject matter of the post transfers to various other domains where stating your beliefs clearly can make people really upset.)