Thank you for writing this post! I agree completely, which is perhaps unsurprising given my position stated back in 2020. Essentially, I think we should apply the precautionary principle for existentially risky technologies: do not build unless safety is proven.
A few words on where that position has brought me since then.
First, I concluded back then that there was little support for this position in rationalist or EA circles. I concluded, as you did, that this had mostly to do with what people wanted (subjective techno-futurist desires), and less with what was possible or what would best reduce human extinction risk. So I went ahead and started the Existential Risk Observatory anyway, a nonprofit aiming to reduce human extinction risk by informing the public debate. We think public awareness is essentially the bottleneck for effective risk reduction, and we hope more awareness will lead to more talent, funding, institutes, diversity, and robustness for AI Safety, and increased support for constructive regulation. This could take the form of software, research, data, or hardware regulation, each with its own advantages and disadvantages. Our intuition is that with 50% awareness, countries should be able to implement some combination of the above that would effectively reduce AI existential risk while keeping economic damage to a minimum (an international treaty may be needed, or a US-China deal, or using supply chain leverage, or some smarter idea). To our knowledge, no one has worked out a detailed regulation proposal for this (perhaps this comes kind of close). If true, we think that’s embarrassing, and regulation proposals should be worked out (and this work should be funded) with urgency. If there are regulation proposals which are not being shared, we think people should share them and be less infohazardy about it.
So how did informing the societal debate go so far?
We started from a super crappy position: self-funded, hardly any connection to the xrisk space (that was also partially hostile to our concept), no media network to speak of, located in Amsterdam, far from everything. I had only some founding experience with a previous start-up. Still, I have to say that on balance, things went better than expected:
Setting up the organization went well. It was easy to attract talent through EA networks. My first lesson: even if some senior EAs and rationalists were not convinced about informing the societal debate, many juniors were.
We were successful in slowly working our way into the Dutch societal debate. One job opening led to another podcast led to another drink led to another op-ed, etc. It took a few months and lots of meetings with usually skeptical people, but we definitely made progress.
We published our first op-eds in leading Dutch newspapers after about six months. We are now publishing about one article per month, and have been in four podcasts as well. We have reached out to a few million people by readership, mostly in the Netherlands but also in the US.
We are now doing our first structured survey research measuring how effective our articles are. According to our first preliminary measurement data (the report will be out in a few months), conversion rates for newspaper articles and YouTube videos (the two interventions we have measured so far) are actually fairly high (between ~25% and 65%). However, there aren’t many good articles on the topic out there yet relative to population sizes, so if you just crunch the numbers, it seems likely that most people still haven’t heard of the topic. There’s also a group that has heard the arguments but doesn’t find them convincing. According to first measurements, this doesn’t correlate much with education level or field. Our data is therefore pointing away from the idea that only brilliant people can be convinced of AI xrisk.
We obtained funding from SFF and ICFG. Apparently, getting funding for projects aiming to raise AI xrisk awareness, despite skepticism of this approach by some, was already doable last year. We seem to observe a shift towards our approach, so we would expect this to become easier.
There’s a direct connection between publishing articles and influencing policy. It wasn’t our goal to directly influence policy, but when you write an article, co-authors, journalists, and others automatically ask: so what do you propose? One can naturally include regulation proposals (or proposals for, e.g., more AI Safety funding) in articles. It is also much easier to get meetings with politicians and policymakers after publishing articles. Our PA person has had meetings with three parliamentarians (two from parties in government) in the last few weeks, so we are moderately optimistic that we can influence policy in the medium term.
We think that if we can do this, many more people can. Raising awareness is constrained by many things, but most of all by manpower. Although there are definitely qualities that make you better at this job (xrisk expertise, motivation, intelligence, writing and communication skills, management skills, network), you don’t need to be a super genius or have a very specific background to do communication. Many in the EA and rationalist communities who would love to do something about AI xrisk but aren’t machine learning experts could work in this field. With only about 3 FTE, I’m positive our org can inform millions of people. Imagine what dozens, hundreds, or thousands of people working in this field could achieve.
If we all agreed that AI xrisk comms is a good idea, I think humanity would have a good chance of making it through this century.