I went back and looked specifically at johnswentworth’s comments, and he did nothing wrong: he made a valid criticism and stopped well short of calling for the contest to be removed from the front page. He tried to open a discussion based on real and significant concerns, and other people very visibly took it too far.
Who do you believe are the several people who acted here without knowledge of the current situation with AI governance and AI policy?
The people with moderator power on LessWrong, like Raemon and Ruby? Do you really think that people who know nothing about the current situation with AI governance and AI policy are given moderator power on LessWrong?
Their decision to take the contest off the front page indicates decent odds of that, from a Bayesian standpoint. But there are other factors too: the people who criticized the contest much more harshly than johnswentworth, who only initiated the discussion; the dozens of anonymous accounts that upvoted anything criticizing the contest; and the fact that few could have confidently predicted that the contest would suffer such a severe lack of attention as a result.
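To spell out the update being invoked (a minimal sketch; the event labels are my own shorthand, not anything the mods have stated), Bayes' rule gives:

$$P(\text{no policy context} \mid \text{removal}) = \frac{P(\text{removal} \mid \text{no policy context})\, P(\text{no policy context})}{P(\text{removal})}$$

If removing a policy-valuable contest is more likely when the mods lack policy context than when they have it, then observing the removal shifts the posterior upward; the other factors listed above are further pieces of evidence feeding the same update.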
Obviously, there’s also the possibility that the mods have more info than I do about the importance of LessWrong’s reputation for extreme honesty. But it looks unlikely that they had the policy experience needed to know why the contest was extremely valuable for AI policy.
dozens of anonymous accounts that upvoted anything criticizing the contest
That’s not what happened. Take, for example, Chris Leong’s post saying “That idea seems reasonable at first glance, but upon reflection, I think it’s a really bad idea”: at the time of writing this comment, it has a total of six votes (counting both upvotes and downvotes). LessWrong gives experienced (high-karma) users more voting strength.
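To make concrete why six votes is incompatible with “dozens of accounts”, here is a toy sketch of karma-weighted voting; the karma-to-weight mapping is invented for illustration and does not match LessWrong’s actual schedule:

```python
# Toy model of karma-weighted voting. The karma -> weight mapping is
# invented for illustration; LessWrong's actual schedule differs.
def vote_weight(karma: int) -> int:
    if karma >= 25000:
        return 10
    if karma >= 1000:
        return 5
    if karma >= 100:
        return 2
    return 1

# Six voters, each a (karma, direction) pair: +1 upvote, -1 downvote.
votes = [(15000, +1), (2000, +1), (1200, +1), (500, +1), (50, -1), (10, +1)]
score = sum(direction * vote_weight(karma) for karma, direction in votes)
print(f"{len(votes)} votes, net score {score}")  # 6 votes, net score 17
```

The point is that vote count and karma score are different numbers: a handful of weighted votes can produce a score that looks like much broader participation than it actually is, which is why the vote count matters here.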
If you want to accuse Chris Leong of not understanding AI policy, note that he ran the Sydney AI Safety Fellowship. Whether that counts as policy experience depends a lot on your definitions: it’s a way to get other people to take beneficial actions on AI safety, but on the other hand he didn’t engage with governments to try to get them to change policy.
There are a lot of actions that can be taken for the sake of doing something about AI safety that are not helpful. This community convinced Elon Musk back in 2014 that AI safety is important; he then went ahead and funded OpenAI, and people like Eliezer Yudkowsky argue that he produced net harm by doing so.
Experiences like that suggest it’s not enough to convince people that AI safety is important; it actually matters that people understand the AI safety problems more deeply. It’s possible that people in this community who have thought a lot about AI safety underrate the value of policymakers who don’t understand AI safety but get convinced they should do something about it. Even so, making ad hominem attacks about those people not understanding current AI governance and policy is not helpful.