First, if there were a widely known argument about the dangers of AI, on which most public intellectuals agreed.
This is exactly what we have piloted at the Existential Risk Observatory, a Dutch nonprofit founded last year. I'd say we've been fairly successful so far. Our aim is to reduce human extinction risk (especially from AGI) by informing the public debate. Concretely, here is what we've done in the past year in the Netherlands (I'm including the detailed description so others can copy our approach; I think they should):
Set up a good-looking website, found a board, and established a legal entity.
Asked for and obtained endorsements from academics already familiar with existential risk.
Found a well-known freelance ex-journalist and ex-parliamentarian to work with us as a media strategist.
Wrote op-eds warning about AGI existential risk, as explicitly as possible while heeding the media strategist's advice. Sometimes we used academic co-authors. Four out of six of our op-eds were published in print in leading newspapers.
Organized drinks, networked with journalists, introduced them to others who are into AGI existential risk (e.g. EAs).
Our most recent result (last weekend) is that a prominent columnist, agenda-setting on tech and privacy issues at NRC Handelsblad (the Dutch equivalent of the New York Times), wrote a piece that treated AGI existential risk as a real concern. We've also had a meeting with the chairwoman of the Dutch parliamentary committee on digitization (the path from a published article to a policy meeting is short), and a debate about AGI x-risk at the leading debate centre now seems fairly likely.
We're not there yet, but we've been doing this for less than a year, we're tiny, we don't have anyone with a significant profile, and until recently we were self-funded (we just got our first funding from SFF; thanks, guys!).
I don't see any reason why our approach wouldn't translate to other countries, including the US. If you do this for a few years, consistently, and in a coordinated and funded way, I would be very surprised if you could not get to a situation where mainstream opinion in places like the Times and the Post regards AI as quite possibly capable of destroying the world.
I also think this could be one of our chances.
Would love to think further about this, and we're open to cooperation.