When you tell someone that you think a supercomputer will one day spawn an unstoppable eldritch abomination, which will proceed to ruin everything for everyone forever, and that the only solution is to give some people in SF a ton of money… the person you’re talking to, no matter who they are, tends to reevaluate associating themselves with you (especially compared to their many alternatives in the DC networking scene).
I suspect that the best way of solving this problem is via social proof: get reputable people to acknowledge the problem and then say to the DC people “Look, Alice, Bob and Carol are all saying it’s a big deal”.
My understanding is that there are people like Elon Musk and Bill Gates who have said something like that, but I think we probably need something with more substance than “we should pay more attention to it”. Hopefully something like “I think there is a >20% chance that humanity will be wiped out from unfriendly AI some time in the next 50 years.”
It also seems worth doing some research into what sorts of statements the DC people would find convincing, i.e. asking them “If I told you X, how would you feel? What about Y? Z?” And also what sorts of reputable people they would be influenced by: professors? Tech CEOs? Public figures?
Fun fact: Elon Musk and Bill Gates have actually stopped saying that. Now it’s mostly crypto people like Sam Bankman-Fried and Peter Thiel, who will likely take the blame if revelations break that crypto was always just rich people minting worthless tokens and selling them to poor people.
It’s really easy to imagine an NYT article pointing fingers at the people who donate 5% of their income to a cause (AGI) that has nothing to do with inequality, or to malaria interventions in Africa that “ignore people here at home”. That’s why I think there should be plenty of ways to explain AGI to people with short attention spans: anger and righteous rage might one day be the thing that keeps their attention spans short.
This is a serious problem that I find goes unaddressed in most proposed AI governance and outreach plans. It’s not an unsolvable problem either, which irks me.