The majority of the entries are crappy 6-word slogans precisely because the contest is explicitly asking for one-liners to slap the audience in the face. If the most effective strategy for solving something really is shouting one-liners at policymakers, then I am the one who doesn’t want to live on this planet anymore.
For what it’s worth, I strongly upvoted the first comment by johnswentworth on that post:

I’d like to complain that this project sounds epistemically absolutely awful. It’s offering money for arguments explicitly optimized to be convincing (rather than true), it offers prizes only for arguments making one particular side of the case (i.e. no money for arguments that AI risk is no big deal), and to top it off it’s explicitly asking for one-liners.
I think we’re speaking different languages here, since I’m saying that the contest is obviously the right thing to do and you’re saying that it’s obviously the wrong thing to do. I have a significant policy background, and I can’t fathom why anyone would be so hostile to the contest; these people have short attention spans and expect to be lied to, so if we’re going to be honest with them, we might as well be charismatic and persuasive while doing so.
For what it’s worth, this is the second half of that comment by johnswentworth:
I understand that it is plausibly worth doing regardless, but man, it feels so wrong having this on LessWrong.
Thank you for this post. I wish I had seen it earlier, but in the time I did have, I had a lot of fun coming up with my own stuff, binging a bunch of AI content, and extracting the arguments I found most compelling into a format suitable for the contest.

Meta: I endorse attempts to signal-boost things that posters feel are neglected, especially things already on LessWrong. Upvoted.
I would guess that the resistance in Washington is not so much resistance to the basic idea of risk from AI as resistance to the idea that anyone in particular has the answer, especially a group not directly affiliated with a major technology company. Does that sound right?
This is important! We need higher-quality entries (although, due to the Pareto principle, I’ve submitted a good chunk of the low-quality 6-word slogans :/ )
Point is: you can easily do better in this market.
When you tell someone that you think a supercomputer will one day spawn an unstoppable eldritch abomination, which proceeds to ruin everything for everyone forever, and that the only solution is to give some people in SF a ton of money… the person you’re talking to, no matter who they are, tends to reconsider associating with you (especially compared to their many alternatives in the DC networking scene).
I suspect that the best way of solving this problem is via social proof: get reputable people to acknowledge the problem and then say to the DC people “Look, Alice, Bob and Carol are all saying it’s a big deal”.
My understanding is that there are people like Elon Musk and Bill Gates who have said something like that, but I think we probably need something with more substance than “we should pay more attention to it”. Hopefully something like “I think there is a >20% chance that humanity will be wiped out by unfriendly AI some time in the next 50 years.”
It also seems worth doing some research into what sorts of statements the DC people would find convincing, i.e. asking them “If I told you X, how would you feel? What about Y? Z?” And also what sort of reputable people they would be influenced by: professors? Tech CEOs? Public figures?
My understanding is that there are people like Elon Musk and Bill Gates who have said something like that, but I think we probably need something with more substance than “we should pay more attention to it”.
Fun fact: Elon Musk and Bill Gates have actually stopped saying that. Now it’s mostly crypto people like Sam Bankman-Fried and Peter Thiel, who will likely take the blame if revelations break that crypto was always just rich people minting worthless tokens and selling them to poor people.
It’s really easy to imagine a NYT article pointing fingers at the people who donate 5% of their income to a cause (AGI) that has nothing to do with inequality, or to malaria interventions in Africa that “ignore people here at home”. That’s why I think there should be plenty of ways to explain AGI to people with short attention spans: anger and righteous rage might one day be the thing that keeps their attention spans short.
This is a serious problem that most proposed AI governance and outreach plans leave unaddressed. It’s not an unsolvable problem either, which irks me.
I threw in a few; I wasn’t expecting to win, and I expect the probability of winning to correlate with overall forum karma. In other words, it’s not what’s said, it’s who’s saying it.
We’re still working on judging right now, but I want to assure you that we looked at neither the name of the submitter nor the number of upvotes when judging the prizes. Of course, some of the submissions are quotes from well-known people like Stuart Russell and Stephen Hawking, and we do take that into account, but we didn’t include the names of individual submitters in judging any of the prizes. (Using a quote from Stephen Hawking can add some ethos to the outside world, but using a quote from “a high-karma LessWrong user” doesn’t.)
Of course, that doesn’t mean it isn’t going to correlate with forum karma; maybe people with more forum karma are better at writing. But the assertion “it’s not what’s said, it’s who’s saying it” is not true in the context of “who will be awarded”.
(Semi-dumb LW category suggestion: Posts That Could Have Made You Good Money In Hindsight)
This also suggests a category for posts that could have lost you good money in hindsight.