Yes. I’m not convinced either way myself, but here are some arguments against:
- If the USA regulates AGI, China will get it first, which seems worse since there’s less alignment activity in China (as for US-China coordination: lol, lmao).
- Raising awareness of AGI Alignment also raises awareness of AGI. If we communicate the “AGI” part without the “Alignment” part, we could speed up timelines.
- A massive influx of funding/interest from people who aren’t well informed could lead to “substitution hazards”: work on aligning weak models with methods that don’t scale to the superintelligent case (in climate change, people swap “solve climate change” for “I’ll reduce my own emissions”, which is useless).
- If we convince the public that AGI is a threat, there could be widespread flailing (the bad kind), which would reflect badly on Alignment researchers (e.g. if DeepMind researchers start receiving threats, their system 1 might generalize to “people worried about AGI are a doomsday cult and should be disregarded”).
Most of these I’ve heard from reading conversations on EleutherAI’s Discord; Connor is typically the most pessimistic, but some others are pessimistic too (Connor’s talk discusses substitution hazards in more detail).
TLDR: It’s hard to control the public once they’re involved. Climate-change startups aren’t getting public funding; the public is more interested in virtue-signaling (in the climate case the public doesn’t really make things worse, but for AGI it could be different).
EDIT: I think I’ve presented these arguments badly; re-reading them, I don’t find them convincing. You should seek out someone who presents them better.