Alex_Altair asked a very reasonable question here, and we got some really good answers as well. I’ve learned a lot from this question post, and I’m glad it was asked.
The extent of AI adoption in today’s economy is not obvious, and we should not shame people for being confused by the current situation. If they want to learn, then the people willing to teach them should do so. There are tons of people who think that Alignment should be one of Bernie Sanders’s major political platforms, and no matter how dumb and wrong that is, if they’re willing to learn why it’s wrong, then someone should come along and teach them, ASAP.
The opposite happened with the $20k AI risk rhetoric contest, where several people decided to sabotage it by demanding that it be prematurely removed from the front page, even though they clearly knew nothing about the current situation with AI governance and AI policy.
The problem is that charismatic rudeness and theatrical aggression are increasingly being reinforced with anonymous upvotes, which is how garbage social media like Twitter works. Otherwise incredible information sources like Gwern are rewarded with upvotes for saying “you’re dumb for not knowing x” and punished with downvotes for saying “you don’t know x and you should, right now”. That’s not why LessWrong exists; charisma optimizes for entertainment instead of clear thinking and problem-solving.
The opposite happened with the $20k AI risk rhetoric contest, where several people decided to sabotage it by demanding that it be prematurely removed from the front page, even though they clearly knew nothing about the current situation with AI governance and AI policy.
Are you saying that johnswentworth knows nothing about the current situation of AI governance and AI policy? That’s likely an incorrect ad hominem. He’s someone who gets grant money to, among other things, think about AI risk issues.
You might not agree with his position, but there’s no good reason to claim he lacks knowledge of the status quo.
I went back and looked specifically at johnswentworth’s comments, and he did nothing wrong: he made a valid criticism and stopped well short of calling for the contest to be removed from the front page. He tried to open a discussion based on real and significant concerns, and other people visibly took it too far.
Who do you believe are the several people who acted here without knowledge of the current situation with AI governance and AI policy?
The people with moderator power on LessWrong, like Raemon and Ruby? Do you really think that people who know nothing about the current situation with AI governance and AI policy are given moderator power on LessWrong?
Their decision to take the contest off the front page indicates decent odds of that, from a Bayesian standpoint. But there are also other factors, like the people who criticized the contest much more harshly than johnswentworth, who only initiated the discussion, as well as dozens of anonymous accounts that upvoted anything criticizing the contest, and the fact that few could have confidently predicted that the contest would suffer a severe lack of attention as a result.
Obviously, there’s also the possibility that the mods have more info than I do about the importance of LessWrong’s reputation for extreme honesty. But it looks unlikely that they had the policy experience they needed to know why the contest was extremely valuable for AI policy.
dozens of anonymous accounts that upvoted anything criticizing the contest
That’s not what happened. Take, for example, Chris Leong’s post saying “That idea seems reasonable at first glance, but upon reflection, I think it’s a really bad idea”: it has a total of six votes (counting both votes in favor and votes against) at the time I write this comment. LessWrong gives experienced (high-karma) users more voting strength.
If you want to accuse Chris Leong of not understanding AI policy, note that he ran the Sydney AI Safety Fellowship. Whether you count such activity as policy experience depends a lot on your definitions. It’s a way to get other people to take beneficial actions on AI safety; on the other hand, he didn’t engage with governments to try to get them to change policy.
There are a lot of actions that can be taken for the sake of doing something about AI safety that are not helpful. This community convinced Elon Musk back in 2014 that AI safety is important; he then went ahead and funded OpenAI, and people like Eliezer Yudkowsky argue that he produced net harm with that.
Experiences like that suggest that it’s not enough to convince people that AI safety is important; it’s actually important to get people to understand the AI safety problems more deeply. It’s possible that people in this community who have thought a lot about AI safety underrate the value of policymakers who don’t understand AI safety but get convinced that they should do something about it. Even so, making ad hominems about those people not understanding current AI governance and policy is not helpful.