There are some conversations about policy & government response taking place. I think there are a few main reasons you don’t see them on LessWrong:
There really aren’t that many serious conversations about AI policy, particularly about what governments should do in future worlds where there is greater concern and political will. Much of the AI governance community focuses on things that are within the current Overton Window.
Some conversations take place among people who work for governments & aren’t allowed to (or are discouraged from) sharing a lot of their thinking online.
[Edited] The vast majority of high-quality content on LessWrong is about technical stuff, and it’s pretty rare to see high-quality policy discussions these days (Zvi’s coverage of various bills would be a notable exception). Partially as a result of this, some “serious policy people” don’t really think LW users will have much to add.
There’s a perception that LessWrong has a bit of a libertarian-leaning bias. Some people think LWers are generally kind of anti-government, pro-tech people who are more interested in metastrategies along the lines of “how can me and my smart technical friends save the world” as opposed to “how can governments intervene to prevent the premature development of dangerous technology.”
If anyone here is interested in thinking about “40% agreement” scenarios, or more broadly in how governments should react in worlds where there is greater evidence of risk, feel free to DM me. Some of my current work focuses on the idea of “emergency preparedness” – how we can improve the government’s ability to detect & respond to AI-related emergencies.
“LessWrong does not have a history of being a particularly thoughtful place for people to have policy discussions…”
This seems wrong. Scott Alexander and Robin Hanson are two of the most thoughtful thinkers on policy in the world and have a long history of engaging with LessWrong and writing on here. Zvi is IMO also one of the top AI policy analysts right now.
It’s definitely true that policy thinking here has a huge libertarian bent, but I think it’s pretty straightforwardly wrong to claim that LW does not have a history of being a thoughtful place to have policy discussions (indeed, I am hard-pressed to find any public place with a better history).
I think you’re too close to see objectively. I haven’t observed any room for policy discussions in this forum that stray from what is acceptable to the mods and active participants. If a discussion doesn’t allow for opposing viewpoints, it’s of little value. In my experience, and from what I’ve heard from others who’ve tried posting here and quit, you have not succeeded in making this a forum where people with opposing viewpoints feel welcome.
You are not wrong to complain. That’s feedback. But this feels too vague to be actionable.
First, we may agree on more than you think. Yes, groupthink can be a problem, and gets worse over time if not actively countered. True scientists are heretics.
But if the science symposium allows the janitor to interrupt the speakers and take all day pontificating about his crackpot perpetual motion machine, it’s also of little value. It gets worse if we then allow the conspiracy theorists to feed off of each other. Experts need a protected space to converse, or we’re stuck at the lowest common denominator (incoherent yelling, eventually). We unapologetically do not want trolls to feel welcome here.
Can you accept that the other extreme is bad? I’m not trying to motte-and-bailey you, but moderation is hard. The virtue lies between the extremes, but not always exactly in the center.
What I want from LessWrong is high epistemic standards. That’s compatible with opposing viewpoints, but only when they try to meet our standards, not when they’re making obvious mistakes in reasoning. Some of our highest-karma posts have been opposing views!
Do you have concrete examples? In each of those cases, are you confident it’s because of the opposing view, or could it be their low standards?
Your example of the janitor interrupting the scientist is a good demonstration of my point. I’ve organized over a hundred cybersecurity events featuring over a thousand speakers, and I’ve never had a single janitor interrupt a talk. On the other hand, I’ve had numerous “experts” attempt to pass off fiction as fact, draw conclusions from faulty data, and, thanks to their inflated egos, generally behave far worse than any janitor might.
Based on my conversations with computer science and philosophy professors who aren’t EA-affiliated, and several who are, their posts are frequently down-voted simply because they represent opposing viewpoints.
Do the moderators of this forum do regular assessments to see how they can make improvements in the online culture so that there’s more diversity in perspective?
I can’t comment on the moderators, since I’m not one, but I’d be curious to see links to posts you think were received worse than is justified, to see if I can learn from them.
I’m echoing other commenters somewhat, but—personally—I do not see people being down-voted simply for having different viewpoints. I’m very sympathetic to people trying to genuinely argue against “prevailing” attitudes or simply trying to foster a better general understanding. (E.g. I appreciate Matthew Barnett’s presence, even though I very much disagree with his conclusions and find him overconfident).
Now, of course, the fact that I don’t notice the kind of posts you say are being down-voted may be because they are sufficiently filtered out, which indeed would be undesirable from my perspective and good to know.
Oh, good point – I think my original phrasing was too broad. I didn’t mean to suggest that there were no high-quality policy discussions on LW; rather, I meant to claim that the proportion/frequency of policy content is relatively limited. I’ve edited to reflect a more precise claim:
The vast majority of high-quality content on LessWrong is about technical stuff, and it’s pretty rare to see high-quality policy discussions on LW these days (Zvi’s coverage of various bills would be a notable exception). Partially as a result of this, some “serious policy people” don’t really think LW users will have much to add.
(I haven’t seen much from Scott or Robin about AI policy topics recently – I agree that Zvi’s posts have been helpful.)
(I also don’t know of many public places that have good AI policy discussions. I do think the difference in quality between “public discussions” and “private discussions” is quite high in policy. I’m not quite sure what the difference looks like for people who are deep into technical research, but it seems likely to me that policy culture is more private/secretive than technical culture.)
It’s not that people won’t talk about spherical policies in a vacuum; it’s that the actual next step of “how does this translate into actual politics” is forbidding. Which is kind of understandable, given that we’re probably not very people-y persons, so to speak, inclined toward high decoupling, and politics can objectively get very stupid.
In fact, my biggest worry about this idea isn’t that there wouldn’t be consensus; it’s how it would end up polarising once it’s mainstream enough. Remember how COVID started as a broad “let’s keep each other safe” reaction and then immediately collapsed into idiocy as soon as worrying about pesky viruses became coded as something for liberal pansies? I expect something similar might happen with AI, though I’m not sure in which direction (there’s a certain anti-AI sentiment building up on the far left, but ironically it dismisses X-risks entirely as a right-wing delusion concocted to hype up AI even more). Depending on how those chips fall, actual political action might require all sorts of compromises with annoying bedfellows.